How-To Tutorials - Data


Set Up MariaDB

Packt
16 Jun 2015
8 min read
In this article, by Daniel Bartholomew, author of Getting Started with MariaDB - Second Edition, you will learn to set up MariaDB with a generic configuration suitable for general use. This is perfect for giving MariaDB a try but might not be suitable for a production database application under heavy load. There are thousands of ways to tweak the settings to get MariaDB to perform just the way we need it to. Many books have been written on this subject. In this article, we'll cover enough of the basics so that we can comfortably edit the MariaDB configuration files and know our way around.

The MariaDB filesystem layout

A MariaDB installation is not a single file or even a single directory, so the first stop on our tour is a high-level overview of the filesystem layout. We'll start with Windows and then move on to Linux.

The MariaDB filesystem layout on Windows

On Windows, MariaDB is installed under a directory named with the following pattern:

C:\Program Files\MariaDB <major>.<minor>

In the preceding path, <major> and <minor> refer to the first and second number in the MariaDB version string. So for MariaDB 10.1, the location would be:

C:\Program Files\MariaDB 10.1

The only alteration to this location, unless we change it during the installation, is when the 32-bit version of MariaDB is installed on a 64-bit version of Windows. In that case, the default MariaDB directory is at the following location:

C:\Program Files (x86)\MariaDB <major>.<minor>

Under the MariaDB directory on Windows, there are four primary directories: bin, data, lib, and include. There are also several configuration examples and other files under the MariaDB directory and a couple of additional directories (docs and Share), but we won't go into their details here. The bin directory is where the executable files of MariaDB are located. The data directory is where databases are stored; it is also where the primary MariaDB configuration file, my.ini, is stored. The lib directory contains various library and plugin files. Lastly, the include directory contains files that are useful for application developers. We don't generally need to worry about the bin, lib, and include directories; it's enough for us to be aware that they exist and know what they contain. The data directory is where we'll spend most of our time in this article and when using MariaDB.

The MariaDB filesystem layout on Linux

On Linux distributions, MariaDB follows the default filesystem layout. For example, the MariaDB binaries are placed under /usr/bin/, libraries are placed under /usr/lib/, manual pages are placed under /usr/share/man/, and so on. However, there are some key MariaDB-specific directories and file locations that we should know about. Two of them are the same across most Linux distributions: the /usr/share/mysql/ and /var/lib/mysql/ directories. The /usr/share/mysql/ directory contains helper scripts that are used during the initial installation of MariaDB, translations (so we can have error and system messages in different languages), and character set information. We don't need to worry about these files and scripts; it's enough to know that this directory exists and contains important files. The /var/lib/mysql/ directory is the default location for our actual database data and the related files such as logs. There is not much need to worry about this directory as MariaDB will handle its contents automatically; for now it's enough to know that it exists. The next directory we should know about is where the MariaDB plugins are stored.
Unlike the previous two, the location of this directory varies. On Debian and Ubuntu systems, the directory is at the following location:

/usr/lib/mysql/plugin/

In distributions such as Fedora, Red Hat, and CentOS, the location of the plugin directory varies depending on whether our system is 32 bit or 64 bit. If unsure, we can just look in both. The possible locations are:

/lib64/mysql/plugin/
/lib/mysql/plugin/

The basic rule of thumb is that if we don't have a /lib64/ directory, we have the 32-bit version of Fedora, Red Hat, or CentOS installed. As with /usr/share/mysql/, we don't need to worry about the contents of the MariaDB plugin directory. It's enough to know that it exists and contains important files. Also, if in the future we install a new MariaDB plugin, this directory is where it will go.

The last directory that we should know about is only found on Debian and the distributions based on Debian, such as Ubuntu. Its location is as follows:

/etc/mysql/

The /etc/mysql/ directory is where the configuration information for MariaDB is stored; specifically, in the following two locations:

/etc/mysql/my.cnf
/etc/mysql/conf.d/

Fedora, Red Hat, CentOS, and related systems don't have an /etc/mysql/ directory by default, but they do have a my.cnf file and a directory that serves the same purpose that the /etc/mysql/conf.d/ directory does on Debian and Ubuntu. They are at the following two locations:

/etc/my.cnf
/etc/my.cnf.d/

The my.cnf files, regardless of location, function the same on all Linux versions and on Windows, where the file is often named my.ini. The /etc/my.cnf.d/ and /etc/mysql/conf.d/ directories, as mentioned, serve the same purpose. We'll spend the next section going over these two directories.

Modular configuration on Linux

The /etc/my.cnf.d/ and /etc/mysql/conf.d/ directories are special locations for the MariaDB configuration files. They are found on the MariaDB releases for Linux distributions such as Debian, Ubuntu, Fedora, Red Hat, and CentOS. We will only have one or the other of them, never both, and regardless of which one we have, their function is the same. The basic idea behind these directories is to allow the package manager (APT or YUM) to install packages for MariaDB that include additions to MariaDB's configuration without needing to edit or change the main my.cnf configuration file. It's easy to imagine the harm that would be caused if we installed a new plugin package and it overwrote a carefully crafted and tuned configuration file. With these special directories, the package manager can simply add a file to the appropriate directory and be done.

When the MariaDB server and the clients and utilities included with MariaDB start up, they first read the main my.cnf file and then any files with the .cnf extension that they find under the /etc/my.cnf.d/ or /etc/mysql/conf.d/ directory, because of a line at the end of the default configuration files. For example, MariaDB includes a plugin called feedback whose sole purpose is to send back anonymous statistical information to the MariaDB developers. They use this information to help guide future development efforts. It is disabled by default but can easily be enabled by adding feedback=on to a [mysqld] group of the MariaDB configuration file (we'll talk about configuration groups in the following section).
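Before creating any drop-in files, it helps to see the mechanism that makes them work. The "line at the end of the default configuration files" mentioned above is an include directive; as a rough illustration (the surrounding comments and the exact path here are assumptions and vary by distribution), the end of a default my.cnf typically looks something like this:

# Read any *.cnf files placed in the modular configuration directory.
# Path shown for a Fedora/Red Hat/CentOS-style system; Debian and
# Ubuntu use /etc/mysql/conf.d/ instead.
!includedir /etc/my.cnf.d/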
We could add the required lines to our main my.cnf file or, better yet, we can create a file called feedback.cnf (MariaDB doesn't care what the actual filename is, apart from the .cnf extension) with the following content:

[mysqld]
feedback=on

All we have to do is put our feedback.cnf file in the /etc/my.cnf.d/ or /etc/mysql/conf.d/ directory, and when we start or restart the server, the feedback.cnf file will be read and the plugin will be turned on. Doing this for a single plugin on a solitary MariaDB server may seem like too much work, but suppose we have 100 servers, and further assume that since the servers are doing different things, each of them has a slightly different my.cnf configuration file. Without using our small feedback.cnf file to turn on the feedback plugin on all of them, we would have to connect to each server in turn and manually add feedback=on to the [mysqld] group of the file. This would get tiresome, and there is also a chance that we might make a mistake with one, or several, of the files that we edit, even if we try to automate the editing in some way. Copying a single file to each server that only does one thing (turning on the feedback plugin in our example) is much faster, and much safer. And, if we have an automated deployment system in place, copying the file to every server can be almost instant.

Caution! Because the configuration settings in the /etc/my.cnf.d/ or /etc/mysql/conf.d/ directory are read after the settings in the my.cnf file, they can override or change the settings in our main my.cnf file. This can be a good thing if that is what we want and expect. Conversely, it can be a bad thing if we are not expecting that behavior.

Summary

That's it for our configuration highlights tour! In this article, we've learned where the various bits and pieces of MariaDB are installed and about the different parts that make up a typical MariaDB configuration file.

Resources for Article:

Building a Web Application with PHP and MariaDB – Introduction to caching
Installing MariaDB on Windows and Mac OS X
Questions & Answers with MariaDB's Michael "Monty" Widenius - Founder of MySQL AB


Neural Network in Azure ML

Packt
15 Jun 2015
3 min read
In this article written by Sumit Mund, author of the book Microsoft Azure Machine Learning, we will learn about neural networks, a kind of machine learning algorithm inspired by the computational models of the human brain. A neural network builds a network of computation units, called neurons or nodes. In a typical network, there are three layers of nodes: first the input layer, followed by the middle or hidden layer, and finally the output layer. Neural network algorithms can be used for both classification and regression problems. (For more resources related to this topic, see here.)

The number of nodes in a layer depends on the problem and on how you construct the network to get the best result. Usually, the number of nodes in the input layer is equal to the number of features in the dataset. For a regression problem, the number of nodes in the output layer is one, while for a classification problem, it is equal to the number of classes or labels. Each node in a layer is connected to all the nodes in the next layer, and each edge that connects two nodes is assigned a weight. A neural network can therefore be imagined as a weighted directed acyclic graph. In a typical neural network, as shown in the preceding figure, the middle or hidden layer contains a number of nodes chosen to make the computation work well. There is no formula or agreed convention for this number; it is often optimized after trying out different options.

Azure Machine Learning supports neural networks for regression, two-class classification, and multiclass classification. It provides a separate module for each kind of problem and lets users tune it with different parameters, such as the number of hidden nodes, the number of iterations used to train the model, and so on. A special kind of neural network with more than one hidden layer is known as a deep network, and the corresponding algorithms are known as deep learning algorithms. Azure Machine Learning allows us to choose the number of hidden nodes as a property value of the neural network module. These kinds of neural networks are becoming increasingly popular because of their remarkable results and because they allow us to model complex and nonlinear scenarios. There are many kinds of deep networks, but recently a special kind known as the convolutional neural network became very popular because of its significant performance on image recognition and classification problems. Azure Machine Learning supports the convolutional neural network.

For simple networks with three layers, this can be done through a UI just by choosing parameters. However, to build a deep network such as a convolutional deep network, it's not easy to do so through a UI. So, Azure Machine Learning supports a language called Net#, which allows you to script different kinds of neural networks inside ML Studio by defining the nodes, the connections (edges), and the kinds of connections. While deep networks are complex to build and train, Net# makes things relatively easy and simple. Though complex, neural networks are very powerful, and Azure Machine Learning makes it fun to work with them, be it three-layered shallow networks or multilayer deep networks.

Resources for Article:

Further resources on this subject:
Security in Microsoft Azure [article]
High Availability, Protection, and Recovery using Microsoft Azure [article]
Managing Microsoft Cloud [article]


Linear Regression

Packt
12 Jun 2015
19 min read
In this article by Rui Miguel Forte, the author of the book, Mastering Predictive Analytics with R, we'll learn about linear regression. Regression problems involve predicting a numerical output. The simplest but most common type of regression is linear regression. In this article, we'll explore why linear regression is so commonly used, its limitations, and extensions. (For more resources related to this topic, see here.)

Introduction to linear regression

In linear regression, the output variable is predicted by a linearly weighted combination of input features. Here is an example of a simple linear model:

ŷ = β0 + β1x

The preceding model essentially says that we are estimating one output, denoted by ŷ, and this is a linear function of a single predictor variable (that is, a feature) denoted by the letter x. The terms involving the Greek letter β are the parameters of the model and are known as regression coefficients. Once we train the model and settle on values for these parameters, we can make a prediction on the output variable for any value of x by a simple substitution in our equation. Another example of a linear model, this time with three features and with values assigned to the regression coefficients, is given by the following equation:

In this equation, just as with the previous one, we can observe that we have one more coefficient than the number of features. This additional coefficient, β0, is known as the intercept and is the expected value of the model when the value of all input features is zero. The other β coefficients can be interpreted as the expected change in the value of the output per unit increase of a feature. For example, in the preceding equation, if the value of the feature x1 rises by one unit, the expected value of the output will rise by 1.91 units. Similarly, a unit increase in the feature x3 results in a decrease of the output by 7.56 units. In a simple one-dimensional regression problem, we can plot the output on the y axis of a graph and the input feature on the x axis. In this case, the model predicts a straight-line relationship between these two, where β0 represents the point at which the straight line crosses or intercepts the y axis and β1 represents the slope of the line. We often refer to the case of a single feature (hence, two regression coefficients) as simple linear regression and the case of two or more features as multiple linear regression.

Assumptions of linear regression

Before we delve into the details of how to train a linear regression model and how it performs, we'll look at the model assumptions. The model assumptions essentially describe what the model believes about the output variable y that we are trying to predict. Specifically, linear regression models assume that the output variable is a weighted linear function of a set of feature variables. Additionally, the model assumes that for fixed values of the feature variables, the output is normally distributed with a constant variance. This is the same as saying that the model assumes that the true output variable y can be represented by an equation such as the following one, shown for two input features:

y = β0 + β1x1 + β2x2 + ε

Here, ε represents an error term, which is normally distributed with zero mean and constant variance σ2. We might hear the term homoscedasticity as a more formal way of describing the notion of constant variance. By homoscedasticity or constant variance, we are referring to the fact that the variance in the error component does not vary with the values or levels of the input features.
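To make the constant-variance assumption concrete, here is a small R sketch (not from the book; the numbers are invented for illustration) that simulates one data set with homoscedastic errors and one where the error variance grows with the input feature:

# Minimal sketch (not from the book): constant versus growing error variance.
set.seed(42)
x <- runif(200, 0, 10)

# Homoscedastic errors: the spread around the line is the same for every x.
y_homo <- 2 + 3 * x + rnorm(200, mean = 0, sd = 2)

# Heteroscedastic errors: the spread increases with x, violating the assumption.
y_hetero <- 2 + 3 * x + rnorm(200, mean = 0, sd = 0.5 * x)

oldpar <- par(mfrow = c(1, 2))
plot(x, y_homo, main = "Constant variance"); abline(2, 3)
plot(x, y_hetero, main = "Variance grows with x"); abline(2, 3)
par(oldpar)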
In the following plot, we are visualizing a hypothetical example of a linear relationship with heteroskedastic errors, which are errors that do not have a constant variance. The data points lie close to the line at low values of the input feature, because the variance is low in this region of the plot, but lie farther away from the line at higher values of the input feature because of the higher variance. The ε term is an irreducible error component of the true function y and can be used to represent random errors, such as measurement errors in the feature values. When training a linear regression model, we always expect to observe some amount of error in our estimate of the output, even if we have all the right features, enough data, and the system being modeled really is linear. Put differently, even with a true function that is linear, we still expect that once we find a line of best fit through our training examples, our line will not go through all, or even any of our data points because of this inherent variance exhibited by the error component. The critical thing to remember, though, is that in this ideal scenario, because our error component has zero mean and constant variance, our training criterion will allow us to come close to the true values of the regression coefficients given a sufficiently large sample, as the errors will cancel out. Another important assumption relates to the independence of the error terms. This means that we do not expect the residual or error term associated with one particular observation to be somehow correlated with that of another observation. This assumption can be violated if observations are functions of each other, which is typically the result of an error in the measurement. If we were to take a portion of our training data, double all the values of the features and outputs, and add these new data points to our training data, we could create the illusion of having a larger data set; however, there will be pairs of observations whose error terms will depend on each other as a result, and hence our model assumption would be violated. Incidentally, artificially growing our data set in such a manner is never acceptable for any model. Similarly, correlated error terms may occur if observations are related in some way by an unmeasured variable. For example, if we are measuring the malfunction rate of parts from an assembly line, then parts from the same factory might have a correlation in the error, for example, due to different standards and protocols used in the assembly process. Therefore, if we don't use the factory as a feature, we may see correlated errors in our sample among observations that correspond to parts from the same factory. The study of experimental design is concerned with identifying and reducing correlations in error terms, but this is beyond the scope of this book. Finally, another important assumption concerns the notion that the features themselves are statistically independent of each other. It is worth clarifying here that in linear models, although the input features must be linearly weighted, they themselves may be the output of another function. To illustrate this, one may be surprised to see that the following is a linear model of three features, sin(z1), ln(z2), and exp(z3): We can see that this is a linear model by making a few transformations on the input features and then making the replacements in our model: Now, we have an equation that is more recognizable as a linear regression model. 
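To see this in practice, the following short R sketch (not from the book; the coefficients are invented for illustration) generates data from a model that is nonlinear in its features but linear in its coefficients, and shows that lm() can fit it directly once the transformations appear in the formula:

# Minimal sketch (not from the book): nonlinear features, linear coefficients.
set.seed(1)
z1 <- runif(100, 0, pi)
z2 <- runif(100, 1, 10)
z3 <- runif(100, 0, 2)

# Invented "true" model, linear in the transformed features sin(z1), log(z2), exp(z3).
y <- 4 + 2 * sin(z1) - 3 * log(z2) + 0.5 * exp(z3) + rnorm(100, sd = 0.2)

# lm() accepts the transformations directly in the formula and recovers the weights.
fit <- lm(y ~ sin(z1) + log(z2) + exp(z3))
coef(fit)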
If the previous example made us believe that nearly everything could be transformed into a linear model, then the following two examples will emphatically convince us that this is not in fact the case: Both models are not linear models because of the first regression coefficient (β1). The first model is not a linear model because β1 is acting as the exponent of the first input feature. In the second model, β1 is inside a sine function. The important lesson to take away from these examples is that there are cases where we can apply transformations on our input features in order to fit our data to a linear model; however, we need to be careful that our regression coefficients are always the linear weights of the resulting new features.

Simple linear regression

Before looking at some real-world data sets, it is very helpful to try to train a model on artificially generated data. In an artificial scenario such as this, we know what the true output function is beforehand, something that as a rule is not the case when it comes to real-world data. The advantage of performing this exercise is that it gives us a good idea of how our model works under the ideal scenario when all of our assumptions are fully satisfied, and it helps visualize what happens when we have a good linear fit. We'll begin by simulating a simple linear regression model. The following R snippet is used to create a data frame with 100 simulated observations of the following linear model with a single input feature:

y = 1.67x1 − 2.93 + ε, where ε is normally distributed with mean 0 and standard deviation 2

Here is the code for the simple linear regression model:

> set.seed(5427395)
> nObs = 100
> x1minrange = 5
> x1maxrange = 25
> x1 = runif(nObs, x1minrange, x1maxrange)
> e = rnorm(nObs, mean = 0, sd = 2.0)
> y = 1.67 * x1 - 2.93 + e
> df = data.frame(y, x1)

For our input feature, we randomly sample points from a uniform distribution. We used a uniform distribution to get a good spread of data points. Note that our final df data frame is meant to simulate a data frame that we would obtain in practice, and as a result, we do not include the error terms, as these would be unavailable to us in a real-world setting. When we train a linear model using some data such as those in our data frame, we are essentially hoping to produce a linear model with the same coefficients as the ones from the underlying model of the data. Put differently, the original coefficients define a population regression line. In this case, the population regression line represents the true underlying model of the data. In general, we will find ourselves attempting to model a function that is not necessarily linear. In this case, we can still define the population regression line as the best possible linear regression line, but a linear regression model will obviously not perform equally well.

Estimating the regression coefficients

For our simple linear regression model, the process of training the model amounts to an estimation of our two regression coefficients from our data set. As we can see from our previously constructed data frame, our data is effectively a series of observations, each of which is a pair of values (xi, yi), where the first element of the pair is the input feature value and the second element of the pair is its output label. It turns out that for the case of simple linear regression, it is possible to write down two equations that can be used to compute our two regression coefficients.
Instead of merely presenting these equations, we'll first take a brief moment to review some very basic statistical quantities that the reader has most likely encountered previously, as they will be featured very shortly. The mean of a set of values is just the average of these values and is often described as a measure of location, giving a sense of where the values are centered on the scale in which they are measured. In statistical literature, the average value of a random variable is often known as the expectation, so we often find that the mean of a random variable X is denoted as E(X). Another notation that is commonly used is bar notation, where we can represent the notion of taking the average of a variable by placing a bar over that variable. To illustrate this, the following two equations show the mean of the output variable y and input feature x: A second very common quantity, which should also be familiar, is the variance of a variable. The variance measures the average square distance that individual values have from the mean. In this way, it is a measure of dispersion, so that a low variance implies that most of the values are bunched up close to the mean, whereas a higher variance results in values that are spread out. Note that the definition of variance involves the definition of the mean, and for this reason, we'll see the use of the x variable with a bar on it in the following equation that shows the variance of our input feature x: Finally, we'll define the covariance between two random variables x and y using the following equation: From the previous equation, it should be clear that the variance, which we just defined previously, is actually a special case of the covariance where the two variables are the same. The covariance measures how strongly two variables are correlated with each other and can be positive or negative. A positive covariance implies a positive correlation; that is, when one variable increases, the other will increase as well. A negative covariance suggests the opposite; when one variable increases, the other will tend to decrease. When two variables are statistically independent of each other and hence uncorrelated, their covariance will be zero (although it should be noted that a zero covariance does not necessarily imply statistical independence). Armed with these basic concepts, we can now present equations for the estimates of the two regression coefficients for the case of simple linear regression: The first regression coefficient can be computed as the ratio of the covariance between the output and the input feature, and the variance of the input feature. Note that if the output feature were to be independent of the input feature, the covariance would be zero and therefore, our linear model would consist of a horizontal line with no slope. In practice, it should be noted that even when two variables are statistically independent, we will still typically see a small degree of covariance due to the random nature of the errors, so if we were to train a linear regression model to describe their relationship, our first regression coefficient would be nonzero in general. Later, we'll see how significance tests can be used to detect features we should not include in our models. To implement linear regression in R, it is not necessary to perform these calculations as R provides us with the lm() function, which builds a linear regression model for us. 
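Before handing the work over to lm(), it is worth checking these formulas by hand. The following short sketch (not from the book) applies the closed-form estimates directly to the simulated df data frame; the results should agree, up to rounding, with the lm() output shown next:

# Minimal sketch (not from the book): closed-form estimates for simple linear regression.
b1 <- cov(df$x1, df$y) / var(df$x1)   # slope: covariance of x and y over variance of x
b0 <- mean(df$y) - b1 * mean(df$x1)   # intercept: computed from the two sample means
c(intercept = b0, slope = b1)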
The following code sample uses the df data frame we created previously and calculates the regression coefficients:

> myfit <- lm(y~x1, df)
> myfit

Call:
lm(formula = y ~ x1, data = df)

Coefficients:
(Intercept)           x1
     -2.380        1.641

In the first line, we see that the usage of the lm() function involves first specifying a formula and then following up with the data parameter, which in our case is our data frame. For the case of simple linear regression, the syntax of the formula that we specify for the lm() function is the name of the output variable, followed by a tilde (~) and then by the name of the single input feature. Finally, the output shows us the values for the two regression coefficients. Note that the β0 coefficient is labeled as the intercept, and the β1 coefficient is labeled by the name of the corresponding feature (in this case, x1) in the equation of the linear model. The following graph shows the population line and the estimated line on the same plot: As we can see, the two lines are so close to each other that they are barely distinguishable, showing that the model has estimated the true population line very closely.

Multiple linear regression

Whenever we have more than one input feature and want to build a linear regression model, we are in the realm of multiple linear regression. The general equation for a multiple linear regression model with k input features is:

ŷ = β0 + β1x1 + β2x2 + ... + βkxk

Our assumptions about the model and about the error component ε remain the same as with simple linear regression, remembering that as we now have more than one input feature, we assume that these are independent of each other. Instead of using simulated data to demonstrate multiple linear regression, we will analyze two real-world data sets.

Predicting CPU performance

Our first real-world data set was presented by the researchers Dennis F. Kibler, David W. Aha, and Marc K. Albert in a 1989 paper titled Instance-based prediction of real-valued attributes and published in Journal of Computational Intelligence. The data contain the characteristics of different CPU models, such as the cycle time and the amount of cache memory. When deciding between processors, we would like to take all of these things into account, but ideally, we'd like to compare processors on a single numerical scale. For this reason, we often develop programs to benchmark the relative performance of a CPU. Our data set also comes with the published relative performance of our CPUs, and our objective will be to use the available CPU characteristics as features to predict this. The data set can be obtained online from the UCI Machine Learning Repository via this link: http://archive.ics.uci.edu/ml/datasets/Computer+Hardware.

The UCI Machine Learning Repository is a wonderful online resource that hosts a large number of data sets, many of which are often cited by authors of books and tutorials. It is well worth the effort to familiarize yourself with this website and its data sets. A very good way to learn predictive analytics is to practice using the techniques you learn in this book on different data sets, and the UCI repository provides many of these for exactly this purpose.

The machine.data file contains all our data in a comma-separated format, with one line per CPU model. We'll import this in R and label all the columns. Note that there are 10 columns in total, but we don't need the first two for our analysis, as these are just the brand and model name of the CPU.
Similarly, the final column is a predicted estimate of the relative performance that was produced by the researchers themselves; our actual output variable, PRP, is in column 9. We'll store the data that we need in a data frame called machine:

> machine <- read.csv("machine.data", header = F)
> names(machine) <- c("VENDOR", "MODEL", "MYCT", "MMIN", "MMAX", "CACH", "CHMIN", "CHMAX", "PRP", "ERP")
> machine <- machine[, 3:9]
> head(machine, n = 3)
  MYCT MMIN  MMAX CACH CHMIN CHMAX PRP
1  125  256  6000  256    16   128 198
2   29 8000 32000   32     8    32 269
3   29 8000 32000   32     8    32 220

The data set also comes with the definition of the data columns:

MYCT: The machine cycle time in nanoseconds
MMIN: The minimum main memory in kilobytes
MMAX: The maximum main memory in kilobytes
CACH: The cache memory in kilobytes
CHMIN: The minimum channels in units
CHMAX: The maximum channels in units
PRP: The published relative performance (our output variable)

The data set contains no missing values, so no observations need to be removed or modified. One thing that we'll notice is that we only have roughly 200 data points, which is generally considered a very small sample. Nonetheless, we will proceed with splitting our data into a training set and a test set, with an 85-15 split, as follows:

> library(caret)
> set.seed(4352345)
> machine_sampling_vector <- createDataPartition(machine$PRP, p = 0.85, list = FALSE)
> machine_train <- machine[machine_sampling_vector,]
> machine_train_features <- machine[, 1:6]
> machine_train_labels <- machine$PRP[machine_sampling_vector]
> machine_test <- machine[-machine_sampling_vector,]
> machine_test_labels <- machine$PRP[-machine_sampling_vector]

Now that we have our data set up and running, we'd usually want to investigate further and check whether some of our assumptions for linear regression are valid. For example, we would like to know whether we have any highly correlated features. To do this, we can construct a correlation matrix with the cor() function and use the findCorrelation() function from the caret package to get suggestions for which features to remove:

> machine_correlations <- cor(machine_train_features)
> findCorrelation(machine_correlations)
integer(0)
> findCorrelation(machine_correlations, cutoff = 0.75)
[1] 3
> cor(machine_train$MMIN, machine_train$MMAX)
[1] 0.7679307

Using the default cutoff of 0.9 for a high degree of correlation, we found that none of our features should be removed. When we reduce this cutoff to 0.75, we see that caret recommends that we remove the third feature (MMAX). As the final line of the preceding code shows, the degree of correlation between this feature and MMIN is 0.768. While the value is not very high, it is still high enough to cause us a certain degree of concern that this will affect our model. Intuitively, of course, if we look at the definitions of our input features, we will certainly tend to expect that a model with a relatively high value for the minimum main memory will also be likely to have a relatively high value for the maximum main memory. Linear regression can sometimes still give us a good model with correlated variables, but we would expect to get better results if our variables were uncorrelated. For now, we've decided to keep all our features for this data set.

Summary

In this article, we studied linear regression, a method that allows us to fit a linear model in a supervised learning setting where we have a number of input features and a single numeric output.
Simple linear regression is the name given to the scenario where we have only one input feature, and multiple linear regression describes the case where we have multiple input features. Linear regression is very commonly used as a first approach to solving a regression problem. It assumes that the output is a linear weighted combination of the input features in the presence of an irreducible error component that is normally distributed and has zero mean and constant variance. The model also assumes that the features are independent.


Plotting in Haskell

Packt
04 Jun 2015
10 min read
In this article by James Church, author of the book Learning Haskell Data Analysis, we will see the different methods of data analysis by plotting data using Haskell. The other topics that this article covers are using GHCi, scaling data, and comparing stock prices. (For more resources related to this topic, see here.)

Can you perform data analysis in Haskell? Yes, and you might even find that you enjoy it. We are going to take a few snippets of Haskell and put some plots of the stock market data together. To get started, the following software needs to be installed:

The Haskell platform (http://www.haskell.org/platform)
Gnuplot (http://www.gnuplot.info/)

The cabal command-line tool is the tool used to install packages in Haskell. There are three packages that we may need in order to analyze the stock market data. To use cabal, you will use the cabal install [package names] command. Run the following command to install the CSV parsing package, the EasyPlot package, and the Either package:

$ cabal install csv easyplot either

Once you have the necessary software and packages installed, we are all set for some introductory analysis in Haskell.

We need data

It is difficult to perform an analysis of data without data. The Internet is rich with sources of data. Since this tutorial looks at the stock market data, we need a source. Visit the Yahoo! Finance website to find the history of every publicly traded stock on the New York Stock Exchange that has been adjusted to reflect splits over time. The good folks at Yahoo! provide this resource in the csv file format. We begin with downloading the entire history of the Apple company from Yahoo! Finance (http://finance.yahoo.com). You can find the content for Apple by performing a quote look up from the Yahoo! Finance home page for the AAPL symbol (that is, 2 As, not 2 Ps). On this page, you can find the link for Historical Prices. On the Historical Prices page, identify the link that says Download to Spreadsheet. The complete link to Apple's historical prices can be found at the following link: http://real-chart.finance.yahoo.com/table.csv?s=AAPL.

We should take a moment to explore our dataset. Here are the column headers in the csv file:

Date: This is a string that represents the date of a particular date in Apple's history
Open: This is the opening value of one share
High: This is the high trade value over the course of this day
Low: This is the low trade value over the course of this day
Close: This is the final price of the share at the end of this trading day
Volume: This is the total number of shares traded on this day
Adj Close: This is a variation on the closing price that adjusts for dividend payouts and company splits

Another feature of this dataset is that the rows are written in reverse chronological order. The most recent date in the table is the first. The oldest is the last. Yahoo! Finance provides this table (Apple's historical prices) under the unhelpful name table.csv. I renamed the csv file provided by Yahoo! Finance to aapl.csv.

Start GHCi

The interactive prompt for Haskell is GHCi. On the command line, type ghci. We begin with importing our newly installed libraries from the prompt:

> import Data.List
> import Text.CSV
> import Data.Either.Combinators
> import Graphics.EasyPlot

Parse the csv file that you just downloaded using the parseCSVFromFile command.
This command will return an Either type, which represents one of the two things that happened: your file was parsed (Right) or something went wrong (Left). We can inspect the type of our result with the :t command:

> eitherErrorOrCells <- parseCSVFromFile "aapl.csv"
> :t eitherErrorOrCells
eitherErrorOrCells :: Either Text.Parsec.Error.ParseError CSV

Did we get an error for our result? For this, we are going to use the fromRight and fromLeft commands. Remember, Right is right and Left is wrong. When we run the fromLeft command, we should see this message saying that our content is in the Right:

> fromLeft' eitherErrorOrCells
*** Exception: Data.Either.Combinators.fromLeft: Argument takes from 'Right _'

Pull the cells of our csv file into cells. We can see the first four rows of our content using take 5 (which will pull our header line and the first four cells):

> let cells = fromRight' eitherErrorOrCells
> take 5 cells
[["Date","Open","High","Low","Close","Volume","Adj Close"],["2014-11-10","552.40","560.63","551.62","558.23","1298900","558.23"],["2014-11-07","555.60","555.60","549.35","551.82","1589100","551.82"],["2014-11-06","555.50","556.80","550.58","551.69","1649900","551.69"],["2014-11-05","566.79","566.90","554.15","555.95","1645200","555.95"]]

The last column in our csv file is the Adj Close, which is the column we would like to plot. Count the columns (starting with 0), and you will find that Adj Close is number 6. Everything else can be dropped. (Here, we are also using the init function to drop the last row of the data, which is an empty list. Grabbing the 6th element of an empty list will not work in Haskell.):

> map (\x -> x !! 6) (take 5 (init cells))
["Adj Close","558.23","551.82","551.69","555.95"]

We know that this column represents the adjusted close prices. We should drop our header row. Since we use tail to drop the header row, take 5 returns the first five adjusted close prices:

> map (\x -> x !! 6) (take 5 (tail (init cells)))
["558.23","551.82","551.69","555.95","564.19"]

We should store all of our adjusted close prices in a value called adjCloseAAPLOriginal:

> let adjCloseAAPLOriginal = map (\x -> x !! 6) (tail (init cells))

These are still raw strings. We need to convert these to a Double type with the read function:

> let adjCloseAAPL = map read adjCloseAAPLOriginal :: [Double]

We are almost done massaging our data. We need to make sure that every value in adjCloseAAPL is paired with an index position for the purpose of plotting. Remember that our adjusted closes are in reverse chronological order. This will create a tuple, which can be passed to the plot function:

> let aapl = zip (reverse [1.0..genericLength adjCloseAAPL]) adjCloseAAPL
> take 5 aapl
[(2577,558.23),(2576,551.82),(2575,551.69),(2574,555.95),(2573,564.19)]

Plotting

> plot (PNG "aapl.png") $ Data3D [Title "AAPL"] [] aapl
True

The following chart is the result of the preceding command. Open aapl.png, which should be newly created in your current working directory. This is a typical default chart created by EasyPlot. We can see the entire history of the Apple stock price. For most of this history, the adjusted share price was less than $10 per share. At about the 6,000th trading day, we see the quick ascension of the share price to over $100 per share. Most of the time, when we take a look at a share price, we are only interested in the tail portion (say, the last year of changes). Our data is already reversed, so the newest close prices are at the front.
There are 252 trading days in a year, so we can take the first 252 elements in our value and plot them. While we are at it, we are going to change the style of the plot to a line plot:

> let aapl252 = take 252 aapl
> plot (PNG "aapl_oneyear.png") $ Data2D [Title "AAPL", Style Lines] [] aapl252
True

The following chart is the result of the preceding command.

Scaling data

Looking at the share price of a single company over the course of a year will tell you whether the price is trending upward or downward. While this is good, we can get better information about the growth by scaling the data. To scale a dataset to reflect the percent change, we subtract each value by the first element in the list, divide that by the first element, and then multiply by 100. Here, we create a simple function called percentChange. We then scale the values 100 to 105 using this new function. (Using the :t command is not necessary, but I like to use it to make sure that I have at least the desired type signature correct.):

> let percentChange first value = 100.0 * (value - first) / first
> :t percentChange
percentChange :: Fractional a => a -> a -> a
> map (percentChange 100) [100..105]
[0.0,1.0,2.0,3.0,4.0,5.0]

We will use this new function to scale our Apple dataset. Our tuple of values can be split using the fst (for the first value, containing the index) and snd (for the second value, containing the adjusted close) functions:

> let firstValue = snd (last aapl252)
> let aapl252scaled = map (\pair -> (fst pair, percentChange firstValue (snd pair))) aapl252
> plot (PNG "aapl_oneyear_pc.png") $ Data2D [Title "AAPL PC", Style Lines] [] aapl252scaled
True

The following chart is the result of the preceding command. Let's take a look at it. Notice that it looks identical to the one we just made, except that the y axis is now changed. The values on the left-hand side of the chart are now the fluctuating percent changes of the stock from a year ago. To the investor, this information is more meaningful.

Comparing stock prices

Every publicly traded company has a different stock price. When you hear that Company A has a share price of $10 and Company B has a price of $100, there is almost no meaningful content to this statement. We can arrive at a meaningful analysis by plotting the scaled history of the two companies on the same plot. Our Apple dataset uses the index position of the trading day for the x axis. This is fine for a single plot, but in order to combine plots, we need to make sure that all plots start at the same index. In order to prepare our existing data of Apple stock prices, we will adjust our index variable to begin at 0:

> let firstIndex = fst (last aapl252scaled)
> let aapl252scaled = map (\pair -> (fst pair - firstIndex, percentChange firstValue (snd pair))) aapl252

We will compare Apple to Google. Google uses the symbol GOOGL (spelled Google without the e). I downloaded the history of Google from Yahoo! Finance and performed the same steps that I previously described for our Apple dataset:

> -- Prep Google for analysis
> eitherErrorOrCells <- parseCSVFromFile "googl.csv"
> let cells = fromRight' eitherErrorOrCells
> let adjCloseGOOGLOriginal = map (\x -> x !! 6) (tail (init cells))
> let adjCloseGOOGL = map read adjCloseGOOGLOriginal :: [Double]
> let googl = zip (reverse [1.0..genericLength adjCloseGOOGL]) adjCloseGOOGL
> let googl252 = take 252 googl
> let firstValue = snd (last googl252)
> let firstIndex = fst (last googl252)
> let googl252scaled = map (\pair -> (fst pair - firstIndex, percentChange firstValue (snd pair))) googl252

Now, we can plot the share prices of Apple and Google on the same chart, Apple plotted in red and Google plotted in blue:

> plot (PNG "aapl_googl.png") [Data2D [Title "AAPL PC", Style Lines, Color Red] [] aapl252scaled, Data2D [Title "GOOGL PC", Style Lines, Color Blue] [] googl252scaled]
True

The following chart is the result of the preceding command. You can compare for yourself the growth rate of the stock price for these two competing companies; I believe that the contrast is enough to let the image speak for itself. This type of analysis is useful in the investment strategy known as growth investing. I am not recommending this as a strategy, nor am I recommending either of these two companies for the purpose of an investment. I am recommending Haskell as your language of choice for performing data analysis.

Summary

In this article, we used data from a csv file and plotted data. The other topics covered in this article were using GHCi and EasyPlot for plotting, scaling data, and comparing stock prices.

Resources for Article:

Further resources on this subject:
The Hunt for Data [article]
Getting started with Haskell [article]
Driving Visual Analyses with Automobile Data (Python) [article]


Predicting Hospital Readmission Expense Using Cascading

Packt
04 Jun 2015
10 min read
In this article by Michael Covert, author of the book Learning Cascading, we will look at a system that allows health care providers to create complex predictive models that can assess who is most at risk for such readmission using Cascading. (For more resources related to this topic, see here.)

Overview

Hospital readmission is an event that health care providers are attempting to reduce, and it is the primary target of new regulations of the Affordable Care Act, passed by the US government. A readmission is defined as any reentry to a hospital 30 days or less from a prior discharge. The financial impact of this is that US Medicare and Medicaid will either not pay or will reduce the payment made to hospitals for expenses incurred. By the end of 2014, over 2600 hospitals will incur these losses from a Medicare and Medicaid tab that is thought to exceed $24 billion annually. Hospitals are seeking to find ways to predict when a patient is susceptible to readmission so that actions can be taken to fully treat the patient before discharge. Many of them are using big data and machine learning-based predictive analytics. One such predictive engine is MedPredict from Analytics Inside, a company based in Westerville, Ohio. MedPredict is the predictive modeling component of the MedMiner suite of health care products. These products use Concurrent Cascading products to perform nightly rescoring of inpatients using a highly customizable calculation known as LACE, which stands for the following:

Length of stay: This refers to the number of days a patient has been in hospital
Acute admissions through emergency department: This refers to whether a patient has arrived through the ER
Comorbidities: A comorbidity refers to the presence of two or more individual conditions in a patient. Each condition is designated by a diagnosis code. Diagnosis codes can also indicate complications and severity of a condition. In LACE, certain conditions are associated with the probability of readmission through statistical analysis. For instance, a diagnosis of AIDS, COPD, diabetes, and so on will each increase the probability of readmission. So, each diagnosis code is assigned points, with other points indicating "seriousness" of the condition.
Diagnosis codes: These refer to the International Classification of Disease codes. Version 9 (ICD-9) and now version 10 (ICD-10) standards are available as well.
Emergency visits: This refers to the number of emergency room visits the patient has made in a particular window of time.

The LACE engine looks at a patient's history and computes a score that is a predictor of readmissions. In order to compute the comorbidity score, the Charlson Comorbidity Index (CCI) calculation is used. It is a statistical calculation that factors in the age and complexity of the patient's condition.

Using Cascading to control predictive modeling

The full data workflow to compute the probability of readmissions is as follows:

1. Read all hospital records and reformat them into patient records, diagnosis records, and discharge records.
2. Read all data related to patient diagnosis and diagnosis records, that is, ICD-9/10, date of diagnosis, complications, and so on.
3. Read all tracked diagnosis records and join them with patient data to produce a diagnosis (comorbidity) score by summing up comorbidity "points".
4. Read all data related to patient admissions, that is, records associated with admission and discharge, length of stay, hospital, admittance location, stay type, and so on.
5. Read the patient profile record, that is, age, race, gender, ethnicity, eye color, body mass indicator, and so on.
6. Compute all intermediate scores for age, emergency visits, and comorbidities.
7. Calculate the LACE score (refer to Figure 2). Assign a date and time to it.
8. Take all the patient information, as mentioned in the preceding points, and run it through MedPredict to produce this variety of metrics:
   Expected length of stay
   Expected expense
   Expected outcome
   Probability of readmission

Figure 1 – The data workflow

The Cascading LACE engine

The calculational aspects of computing LACE scores make it ideal for Cascading as a series of reusable subassemblies. Firstly, the extraction, transformation, and loading (ETL) of patient data is complex and costly. Secondly, the calculations are data-intensive. The CCI alone has to examine a patient's medical history and must find all matching diagnosis codes (such as ICD-9 or ICD-10) to assign a score. This score must be augmented by the patient's age, and lastly, a patient's inpatient discharge records must be examined for admittance to the ER as well as emergency room visits. Also, many hospitals desire to customize these calculations. The LACE engine supports and facilitates this since scores are adjustable at the diagnosis code level, and MedPredict automatically produces metrics about how significant an individual feature is to the resulting score. Medical data is quite complex too. For instance, the particular diagnosis codes that represent cancer are many, and their meanings are quite nuanced. In some cases, metastasis (spreading of cancer to other locations in the body) may have occurred, and this is treated as a more severe situation. In other situations, measured values may be "bucketed", which implies that we track the number of emergency room visits over 1 year, 6 months, 90 days, and 30 days. The Cascading LACE engine performs these calculations easily. It is customized through a set of hospital-supplied parameters, and it has the capability to perform full calculations nightly due to its usage of Hadoop. Using this capability, a patient's record can track the full history of the LACE index over time. Additionally, different sets of LACE indices can be computed simultaneously, maybe one used for diabetes, another for Chronic Obstructive Pulmonary Disorder (COPD), and so on.

Figure 2 – The LACE subassembly

MedPredict tracking

The LACE engine metrics feed into MedPredict along with many other variables cited previously. These records are rescored nightly and the patient history is updated. This patient history is then used to analyze trends and generate alerts when the patient is showing an increased likelihood of variance from the desired metric values.

What Cascading does for us

We chose Cascading to help reduce the complexity of our development efforts. MapReduce provided us with the scalability that we desired, but we found that we were developing massive amounts of code to do so. Reusability was difficult, and the Java code library was becoming large. By shifting to Cascading, we found that we could encapsulate our code better and achieve significantly greater reusability. Additionally, we reduced complexity as well. The Cascading API provides simplification and understandability, which accelerates our development velocity metrics and also reduces bugs and maintenance cycles. We allow Cascading to control the end-to-end workflow of these nightly calculations. It handles preprocessing and formatting of data.
Then, it handles running these calculations in parallel, allowing high speed hash joins to be performed, and also for each leg of the calculation to be split into a parallel pipe. Next, all these calculations are merged and the final score is produced. The last step is to analyze the patient trends and generate alerts where potential problems are likely to occur. Cascading has allowed us to produce a reusable assembly that is highly parameterized, thereby allowing hospitals to customize their usage. Not only can thresholds, scores, and bucket sizes be varied, but if it's desired, additional information could be included for things, such as medical procedures performed on the patient. The local mode of Cascading allows for easy testing, and it also provides a scaled down version that can be run against a small number of patients. However, by using Cascading in the Hadoop mode, massive scalability can be achieved against very large patient populations and ICD-9/10 code sets. Concurrent also provides an excellent framework for predictive modeling using machine learning through its Pattern component. MedPredict uses this to integrate its predictive engine, which is written using Cascading, MapReduce, and Mahout. Pattern provides an interface for the integration of other external analysis products through the exchange of Predictive Model Markup Language (PMML), an XML dialect that allows many of the MedPredict proprietary machine learning algorithms to be directly incorporated into the full Cascading LACE workflow. MedPredict then produces a variety of predictive metrics in a single pass of the data. The LACE scores (current and historical trends) are used as features for these predictions. Additionally, Concurrent provides a product called Driven that greatly reduces the development cycle time for such large, complex applications. Their lingual product provides seamless integration with relational databases, which is also key to enterprise integration. Results Numerous studies have now been performed using LACE risk estimates. Many hospitals have shown the ability to reduce readmission rates by 5-10 percent due to early intervention and specific guidance given to a patient as a result of an elevated LACE score. Other studies are examining the efficacy of additional metrics, and of segmentation of the patients into better identifying groups, such as heart failure, cancer, diabetes, and so on. Additional effort is being put in to study the ability of modifying the values of the comorbidity scores, taking into account combinations and complications. In some cases, even more dramatic improvements have taken place using these techniques. For up-to-date information, search for LACE readmissions, which will provide current information about implementations and results. Analytics Inside LLC Analytics Inside is based in Westerville, Ohio. It was founded in 2005 and specializes in advanced analytical solutions and services. Analytics Inside produces the RelMiner family of relationship mining systems. These systems are based on machine learning, big data, graph theories, data visualizations, and Natural Language Processing (NLP). For further information, visit our website at http://www.AnalyticsInside.us, or e-mail us at info@AnalyticsInside.us. 
MedMiner Advanced Analytics for Health Care is an integrated software system designed to help an organization or patient care team in the following ways:
Predicting the outcomes of patient cases and tracking these predictions over time
Generating alerts based on patient case trends that will help direct remediation
Complying better with ARRA value-based purchasing and meaningful use guidelines
Providing management dashboards that can be used to set guidelines and track performance
Tracking performance of drug usage, interactions, potential for drug diversion, and pharmaceutical fraud
Extracting medical information contained within text documents
Treating data security as a key design point: PHI can be hidden through external linkages, so data exchange is not required; if PHI is required, it is kept safe through heavy encryption, virus scanning, and data isolation
Using both cloud-based and on-premise capabilities to meet client needs
Concurrent Inc.
Concurrent Inc. is the leader in big data application infrastructure, delivering products that help enterprises create, deploy, run, and manage data applications at scale. The company's flagship enterprise solution, Driven, was designed to accelerate the development and management of enterprise data applications. Concurrent is the team behind Cascading, the most widely deployed technology for data applications, with more than 175,000 user downloads a month. Used by thousands of businesses, including eBay, Etsy, The Climate Corporation, and Twitter, Cascading is the de facto standard in open source application infrastructure technology. Concurrent is headquartered in San Francisco and can be found online at http://concurrentinc.com.
Summary
Hospital readmission is an event that health care providers are attempting to reduce, and it is a primary target of new regulation from the Affordable Care Act, passed by the US government. This article describes a system, built with Cascading, that allows health care providers to create complex predictive models to assess who is most at risk of such a readmission.
Resources for Article:
Further resources on this subject: Hadoop Monitoring and its aspects [article] Introduction to Hadoop [article] YARN and Hadoop [article]
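To make the LACE arithmetic described in The Cascading LACE engine section concrete, the following is a minimal, illustrative Python sketch. It is not MedPredict or LACE engine code: the function name is invented, the point assignments follow the commonly published LACE scoring rather than anything stated in this article, and, as noted above, real deployments adjust these values at the hospital and diagnosis-code level.

# Illustrative only: point values follow the commonly published LACE scoring
# and are assumptions here; production systems tune them per hospital.
def lace_score(length_of_stay_days, acute_admission, charlson_index, ed_visits_6m):
    # L: points for length of stay, in whole days
    if length_of_stay_days < 1:
        l_points = 0
    elif length_of_stay_days <= 3:
        l_points = int(length_of_stay_days)
    elif length_of_stay_days <= 6:
        l_points = 4
    elif length_of_stay_days <= 13:
        l_points = 5
    else:
        l_points = 7
    # A: acuity of admission (admitted through the emergency department)
    a_points = 3 if acute_admission else 0
    # C: Charlson comorbidity index, capped at 5 points
    c_points = charlson_index if charlson_index < 4 else 5
    # E: emergency room visits in the previous six months, capped at 4
    e_points = min(ed_visits_6m, 4)
    return l_points + a_points + c_points + e_points

print(lace_score(5, True, 2, 1))  # 4 + 3 + 2 + 1 = 10; such a total is often treated as elevated risk

A per-disease variant (for example, one for diabetes and one for COPD, as mentioned above) would simply swap in a different table of point assignments.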

The Splunk Web Framework

Packt
04 Jun 2015
In this article by the author, Kyle Smith, of the book, Splunk Developer's Guide, we learn about search-related and view-related modules. We will be covering the following topics: Search-related modules View-related modules (For more resources related to this topic, see here.) Search-related modules Let's talk JavaScript modules. For each module, we will review their primary purpose, their module path, the default variable used in an HTML dashboard, and the JavaScript instantiation of the module. We will also cover which attributes are required and which are optional. SearchManager The SearchManager is a primary driver of any dashboard. This module contains an entire search job, including the query, properties, and the actual dispatch of the job. Let's instantiate an object, and dissect the options from this sample code: Module Path: splunkjs/mvc/searchmanager Default Variable: SearchManager JavaScript Object instantiation    Var mySearchManager = new SearchManager({        id: "search1",        earliest_time: "-24h@h",        latest_time: "now",        preview: true,        cache: false,        search: "index=_internal | stats count by sourcetype"    }, {tokens: true, tokenNamespace: "submitted"}); The only required property is the id property. This is a reference ID that will be used to access this object from other instantiated objects later in the development of the page. It is best to name it something concise, yet descriptive with no spaces. The search property is optional, and contains the SPL query that will be dispatched from the module. Make sure to escape any quotes properly, if not, you may cause a JavaScript exception. earliest_time and latest_time are time modifiers that restrict the range of the events. At the end of the options object, notice the second object with token references. This is what automatically executes the search. Without these options, you would have to trigger the search manually. There are a few other properties shown, but you can refer to the actual documentation at the main documentation page http://docs.splunk.com/DocumentationStatic/WebFramework/1.1/compref_searchmanager.html. SearchManagers are set to autostart on page load. To prevent this, set autostart to false in the options. SavedSearchManager The SavedSearchManager is very similar in operation to the SearchManager, but works with a saved report, instead of an ad hoc query. The advantage to using a SavedSearchManager is in performance. If the report is scheduled, you can configure the SavedSearchManager to use the previously run jobs to load the data. If any other user runs the report within Splunk, the SavedSearchManager can reuse that user's results in the manager to boost performance. Let's take a look at a few sections of our code: Module Path: splunkjs/mvc/savedsearchmanager Default Variable: SavedSearchManager JavaScript Object instantiation        Var mySavedSearchManager = new SavedSearchManager({            id: "savedsearch1",        searchname: "Saved Report 1"            "dispatch.earliest_time": "-24h@h",            "dispatch.latest_time": "now",            preview: true,            cache: true        }); The only two required properties are id and searchname. Both of those must be present in order for this manager to run correctly. The other options are very similar to the SearchManager, except for the dispatch options. The SearchManager has the option "earliest_time", whereas the SavedSearchManager uses the option "dispatch.earliest_time". 
They both have the same restriction but are named differently. The additional options are listed in the main documentation page available at http://docs.splunk.com/DocumentationStatic/WebFramework/1.1/compref_savedsearchmanager.html. PostProcessManager The PostProcessManager does just that, post processes the results of a main search. This works in the same way as the post processing done in SimpleXML; a main search to load the event set, and a secondary search to perform an additional analysis and transformation. Using this manager has its own performance considerations as well. By loading a single job first, and then performing additional commands on those results, you avoid having concurrent searches for the same information. Your usage of CPU and RAM will be less, as you only store one copy of the results, instead of multiple. Module Path: splunkjs/mvc/postprocessmanager Default Variable: PostProcessManager JavaScript Object instantiation        Var mysecondarySearch = new PostProcessManager({            id: "after_search1",        search: "stats count by sourcetype",    managerid: "search1"        }); The property id is the only required property. The module won't do anything when instantiated with only an id property, but you can set it up to populate later. The other options are similar to the SearchManager, the major difference being that the search property in this case is appended to the search property of the manager listed in the managerid property. For example, if the manager search is search index=_internal source=*splunkd.log, and the post process manager search is stats count by host, then the entire search for the post process manager is search index=_internal source=*splunkd.log | stats count by host. The additional options are listed at the main documentation page http://docs.splunk.com/DocumentationStatic/WebFramework/1.1/compref_postprocessmanager.html. View-related modules These modules are related to the views and data visualizations that are native to Splunk. They range in use from charts that display data, to control groups, such as radio groups or dropdowns. These are also included with Splunk and are included by default in the RequireJS declaration. ChartView The ChartView displays a series of data in the formats in the list as follows. Item number one shows an example of how each different chart is described and presented. Each ChartView is instantiated in the same way, the only difference is in what searches are used with which chart. Module Path: splunkjs/mvc/chartview Default Variable: ChartView JavaScript Object instantiation        Var myBarChart = new ChartView({            id: "myBarChart",             managerid: "searchManagerId",            type: "bar",            el: $("#mybarchart")        }); The only required property is the id property. This assigns the object an id that can be later referenced as needed. The el option refers to the HTML element in the page that this view will be assigned and created within. The managerid relates to an existing search, saved search, or post process manager object. The results are passed from the manager into the chart view and displayed as indicated. Each chart view can be customized extensively using the charting.* properties. For example, charting.chart.overlayFields, when set to a comma separated list of field names, will overlay those fields over the chart of other data, making it possible to display SLA times over the top of Customer Service Metrics. 
The full list of configurable options can be found at the following link: http://docs.splunk.com/Documentation/Splunk/latest/Viz/ChartConfigurationReference. The different types of ChartView Now that we've introduced the ChartView module, let's look at the different types of charts that are built-in. This section has been presented in the following format: Name of Chart Short description of the chart type Type property for use in the JavaScript configuration Example chart command that can be displayed with this chart type Example image of the chart The different ChartView types we will cover in this section include the following: Area The area chart is similar to the line chart, and compares quantitative data. The graph is filled with color to show volume. This is commonly used to show statistics of data over time. An example of an area chart is as follows: timechart span=1h max(results.collection1{}.meh_clicks) as MehClicks max(results.collection1{}.visitors) as Visits Bar The bar chart is similar to the column chart, except that the x and y axes have been switched, and the bars run horizontally and not vertically. The bar chart is used to compare different categories. An example of a bar chart is as follows: stats max(results.collection1{}.visitors) as Visits max(results.collection1{}.meh_clicks) as MehClicks by results.collection1{}.title.text Column The column chart is similar to the bar chart, but the bars are displayed vertically. An example of a column chart is as follows: timechart span=1h avg(DPS) as "Difference in Products Sold" Filler gauge The filler gauge is a Splunk-provided visualization. It is intended for single values, normally as a percentage, but can be adjusted to use discrete values as well. The gauge uses different colors for different ranges of values, by default using green, yellow, and red, in that order. These colors can also be changed using the charting.* properties. One of the differences between this gauge and the other single value gauges is that it shows both the color and value close together, whereas the others do not. An example of a filler gauge chart is as follows: eval diff = results.collection1{}.meh_clicks / results.collection1{}.visitors * 100 | stats latest(diff) as D Line The line chart is similar to the area chart but does not fill the area under the line. This chart can be used to display discrete measurements over time. An example of a line chart is as follows: timechart span=1h max(results.collection1{}.meh_clicks) as MehClicks max(results.collection1{}.visitors) as Visits Marker gauge The marker gauge is a Splunk native visualization intended for use with a single value. Normally this will be a percentage of a value, but can be adjusted as needed. The gauge uses different colors for different ranges of values, by default using green, yellow, and red, in that order. These colors can also be changed using the charting.* properties. An example of a marker gauge chart is as follows: eval diff = results.collection1{}.meh_clicks / results.collection1{}.visitors * 100 | stats latest(diff) as D Pie Chart A pie chart is useful for displaying percentages. It gives you the ability to quickly see which part of the "pie" is disproportionate to the others. Actual measurements may not be relevant. An example of a pie chart is as follows: top op_action Radial gauge The radial gauge is another single value chart provided by Splunk. It is normally used to show percentages, but can be adjusted to show discrete values. 
The gauge uses different colors for different ranges of values, by default using green, yellow, and red, in that order. These colors can also be changed using the charting.* properties. An example of a radial gauge is as follows: eval diff = MC / V * 100 | stats latest(diff) as D Scatter The scatter plot can plot two sets of data on an x and y axis chart (Cartesian coordinates). This chart is primarily time independent, and is useful for finding correlations (but not necessarily causation) in data. An example of a scatter plot is as follows: table MehClicks Visitors Summary We covered some deeper elements of Splunk applications and visualizations. We reviewed each of the SplunkJS modules, how to instantiate them, and gave an example of each search-related modules and view-related modules. Resources for Article: Further resources on this subject: Introducing Splunk [article] Lookups [article] Loading data, creating an app, and adding dashboards and reports in Splunk [article]

Working with Touch Gestures

Packt
03 Jun 2015
 In this article by Ajit Kumar, the author Sencha Charts Essentials, we will cover the following topics: Touch gestures support in Sencha Charts Using gestures on existing charts Out-of-the-box interactions Creating custom interactions using touch gestures (For more resources related to this topic, see here.) Interacting with interactions The interactions code is located under the ext/packages/sencha-charts/src/chart/interactions folder. The Ext.chart.interactions.Abstract class is the base class for all the chart interactions. Interactions must be associated with a chart by configuring interactions on it. Consider the following example: Ext.create('Ext.chart.PolarChart', {title: 'Chart',interactions: ['rotate'],... The gestures config is an important config. It is an integral part of an interaction where it tells the framework which touch gestures would be part of an interaction. It's a map where the event name is the key and the handler method name is its value. Consider the following example: gestures: {tap: 'onTapGesture',doubletap: 'onDoubleTapGesture'} Once an interaction has been associated with a chart, the framework registers the events and their handlers, as listed in the gestures config, on the chart as part of the chart initialization, as shown here:   Here is what happens during each stage of the preceding flowchart: The chart's construction starts when its constructor is called either by a call to Ext.create or xtype usage. The interactions config is applied to the AbstractChart class by the class system, which calls the applyInteractions method. The applyInteractions method sets the chart object on each of the interaction objects. This setter operation will call the updateChart method of the interaction class—Ext.chart.interactions.Abstract. The updateChart calls addChartListener to add the gesture-related events and their handlers. The addChartListener iterates through the gestures object and registers the listed events and their handlers on the chart object. Interactions work on touch as well as non-touch devices (for example, desktop). On non-touch devices, the gestures are simulated based on their mouse or pointer events. For example, mousedown is treated as a tap event. Using built-in interactions Here is a list of the built-in interactions: Crosshair: This interaction helps the user to get precise x and y values for a specific point on a chart. Because of this, it is applicable to Cartesian charts only. The x and y values are obtained by single-touch dragging on the chart. The interaction also offers additional configs: axes: This can be used to provide label text and label rectangle configs on a per axis basis using left, right, top, and bottom configs or a single config applying to all the axes. If the axes config is not specified, the axis label value is shown as the text and the rectangle will be filled with white color. lines: The interaction draws horizontal and vertical lines through the point on the chart. Line sprite attributes can be passed using horizontal or vertical configs. 
For example, we configure the following crosshair interaction on a CandleStick chart: interactions: [{type: 'crosshair',axes: {left: {label: { fillStyle: 'white' },rect: {fillStyle: 'pink',radius: 2}},bottom: {label: {fontSize: '14px',fontWeight: 'bold'},rect: { fillStyle: 'cyan' }}}}] The preceding configuration will produce the following output, where the labels and rectangles on the two axes have been styled as per the configuration: CrossZoom:This interaction allows the user to zoom in on a selected area of a chart using drag events. It is very useful in getting the microscopic view of your macroscopic data view. For example, the chart presents month-wise data for two years; using zoom, you can look at the values for, say, a specific month. The interaction offers an additional config—axes—using which we indicate the axes, which will be zoomed. Consider the following configuration on a CandleStick chart: interactions: [{type: 'crosszoom',axes: ['left', 'bottom']}] This will produce the following output where a user will be able to zoom in to both the left and bottom axes:   Additionally, we can control the zoom level by passing minZoom and maxZoom, as shown in the following snippet: interactions: [{type: 'crosszoom',axes: {left: {maxZoom: 8,startZoom: 2},bottom: true}}] The zoom is reset when the user double-clicks on the chart. ItemHighlight: This interaction allows the user to highlight series items in the chart. It works in conjunction with highlight config that is configured on a series. The interaction identifies and sets the highlightItem on a chart, on which the highlight and highlightCfg configs are applied. PanZoom: This interaction allows the user to navigate the data for one or more chart axes by panning and/or zooming. Navigation can be limited to particular axes. Pinch gestures are used for zooming whereas drag gestures are used for panning. For devices which do not support multiple-touch events, zooming cannot be done via pinch gestures; in this case, the interaction will allow the user to perform both zooming and panning using the same single-touch drag gesture. By default, zooming is not enabled. We can enable it by setting zoomOnPanGesture:true on the interaction, as shown here: interactions: {type: 'panzoom',zoomOnPanGesture: true} By default, all the axes are navigable. However, the panning and zooming can be controlled at axis level, as shown here: {type: 'panzoom',axes: {left: {maxZoom: 5,allowPan: false},bottom: true}} Rotate: This interaction allows the user to rotate a polar chart about its centre. It implements the rotation using the single-touch drag gestures. This interaction does not have any additional config. RotatePie3D: This is an extension of the Rotate interaction to rotate a Pie3D chart. This does not have any additional config. Summary In this article, you learned how Sencha Charts offers interaction classes to build interactivity into the charts. We looked at the out-of-the-box interactions, their specific configurations, and how to use them on different types of charts. Resources for Article: Further resources on this subject: The Various Components in Sencha Touch [Article] Creating a Simple Application in Sencha Touch [Article] Sencha Touch: Catering Form Related Needs [Article]

Basic Image Processing

Packt
02 Jun 2015
In this article, Ashwin Pajankar, the author of the book, Raspberry Pi Computer Vision Programming, takes us through basic image processing in OpenCV. We will do this with the help of the following topics:
Image arithmetic operations—adding, subtracting, and blending images
Splitting color channels in an image
Negating an image
Performing logical operations on an image
This article is very short and easy to code, with plenty of hands-on activities.
(For more resources related to this topic, see here.)
Arithmetic operations on images
In this section, we will have a look at the various arithmetic operations that can be performed on images. Images are represented as matrices in OpenCV, so arithmetic operations on images are similar to arithmetic operations on matrices. Images must be of the same size for you to perform arithmetic operations on them, and these operations are performed on individual pixels.
cv2.add(): This function is used to add two images, where the images are passed as parameters.
cv2.subtract(): This function is used to subtract an image from another. We know that the subtraction operation is not commutative. So, cv2.subtract(img1,img2) and cv2.subtract(img2,img1) will yield different results, whereas cv2.add(img1,img2) and cv2.add(img2,img1) will yield the same result, as the addition operation is commutative. Both images have to be of the same size and type, as explained before.
Check out the following code:
import cv2
img1 = cv2.imread('/home/pi/book/test_set/4.2.03.tiff',1)
img2 = cv2.imread('/home/pi/book/test_set/4.2.04.tiff',1)
cv2.imshow('Image1',img1)
cv2.waitKey(0)
cv2.imshow('Image2',img2)
cv2.waitKey(0)
cv2.imshow('Addition',cv2.add(img1,img2))
cv2.waitKey(0)
cv2.imshow('Image1-Image2',cv2.subtract(img1,img2))
cv2.waitKey(0)
cv2.imshow('Image2-Image1',cv2.subtract(img2,img1))
cv2.waitKey(0)
cv2.destroyAllWindows()
The preceding code demonstrates the usage of arithmetic functions on images. Here's the output window of Image1: Here is the output window of Addition: The output window of Image1-Image2 looks like this: Here is the output window of Image2-Image1:
Blending and transitioning images
The cv2.addWeighted() function calculates the weighted sum of two images. Because of the weight factors, it provides a blending effect to the images. Add the following lines of code before destroyAllWindows() in the previous code listing to see this function in action:
cv2.imshow('Weighted Addition',cv2.addWeighted(img1,0.5,img2,0.5,0))
cv2.waitKey(0)
In the preceding code, we passed the following five arguments to the addWeighted() function:
Img1: This is the first image.
Alpha: This is the weight factor for the first image (0.5 in the example).
Img2: This is the second image.
Beta: This is the weight factor for the second image (0.5 in the example).
Gamma: This is the scalar value (0 in the example).
The output image value is calculated with the following formula:
output = (img1 * alpha) + (img2 * beta) + gamma
This operation is performed on every individual pixel. Here is the output of the preceding code: We can create a film-style transition effect on the two images by using the same function.
Check out the output of the following code that creates a smooth image transition from an image to another image: import cv2 import numpy as np import time   img1 = cv2.imread('/home/pi/book/test_set/4.2.03.tiff',1) img2 = cv2.imread('/home/pi/book/test_set/4.2.04.tiff',1)   for i in np.linspace(0,1,40): alpha=i beta=1-alpha print 'ALPHA ='+ str(alpha)+' BETA ='+str (beta) cv2.imshow('Image Transition',    cv2.addWeighted(img1,alpha,img2,beta,0)) time.sleep(0.05) if cv2.waitKey(1) == 27 :    break   cv2.destroyAllWindows() Splitting and merging image colour channels On several occasions, we may be interested in working separately with the red, green, and blue channels. For example, we might want to build a histogram for every channel of an image. Here, cv2.split() is used to split an image into three different intensity arrays for each color channel, whereas cv2.merge() is used to merge different arrays into a single multi-channel array, that is, a color image. The following example demonstrates this: import cv2 img = cv2.imread('/home/pi/book/test_set/4.2.03.tiff',1) b,g,r = cv2.split (img) cv2.imshow('Blue Channel',b) cv2.imshow('Green Channel',g) cv2.imshow('Red Channel',r) img=cv2.merge((b,g,r)) cv2.imshow('Merged Output',img) cv2.waitKey(0) cv2.destroyAllWindows() The preceding program first splits the image into three channels (blue, green, and red) and then displays each one of them. The separate channels will only hold the intensity values of the particular color and the images will essentially be displayed as grayscale intensity images. Then, the program merges all the channels back into an image and displays it. Creating a negative of an image In mathematical terms, the negative of an image is the inversion of colors. For a grayscale image, it is even simpler! The negative of a grayscale image is just the intensity inversion, which can be achieved by finding the complement of the intensity from 255. A pixel value ranges from 0 to 255, and therefore, negation involves the subtracting of the pixel value from the maximum value, that is, 255. The code for the same is as follows: import cv2 img = cv2.imread('/home/pi/book/test_set/4.2.07.tiff') grayscale = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY) negative = abs(255-grayscale) cv2.imshow('Original',img) cv2.imshow('Grayscale',grayscale) cv2.imshow('Negative',negative) cv2.waitKey(0) cv2.destroyAllWindows() Here is the output window of Greyscale: Here's the output window of Negative: The negative of a negative will be the original grayscale image. Try this on your own by taking the image negative of the negative again Logical operations on images OpenCV provides bitwise logical operation functions for images. We will have a look at the functions that provide the bitwise logical AND, OR, XOR (exclusive OR), and NOT (inversion) functionality. These functions can be better demonstrated visually with grayscale images. I am going to use barcode images in horizontal and vertical orientation for demonstration. 
Let's have a look at the following code: import cv2 import matplotlib.pyplot as plt   img1 = cv2.imread('/home/pi/book/test_set/Barcode_Hor.png',0) img2 = cv2.imread('/home/pi/book/test_set/Barcode_Ver.png',0) not_out=cv2.bitwise_not(img1) and_out=cv2.bitwise_and(img1,img2) or_out=cv2.bitwise_or(img1,img2) xor_out=cv2.bitwise_xor(img1,img2)   titles = ['Image 1','Image 2','Image 1 NOT','AND','OR','XOR'] images = [img1,img2,not_out,and_out,or_out,xor_out]   for i in xrange(6):    plt.subplot(2,3,i+1)    plt.imshow(images[i],cmap='gray')    plt.title(titles[i])    plt.xticks([]),plt.yticks([]) plt.show() We first read the images in grayscale mode and calculated the NOT, AND, OR, and XOR, functionalities and then with matplotlib, we displayed those in a neat way. We leveraged the plt.subplot() function to display multiple images. Here in the preceding example, we created a grid with two rows and three columns for our images and displayed each image in every part of the grid. You can modify this line and change it to plt.subplot(3,2,i+1) to create a grid with three rows and two columns. Also, we can use the technique without a loop in the following way. For each image, you have to write the following statements. I will write the code for the first image only. Go ahead and write it for the rest of the five images: plt.subplot(2,3,1) , plt.imshow(img1,cmap='gray') , plt.title('Image 1') , plt.xticks([]),plt.yticks([]) Finally, use plt.show() to display. This technique is to avoid the loop when a very small number of images, usually 2 or 3 in number, have to be displayed. The output of this is as follows: Make a note of the fact that the logical NOT operation is the negative of the image. Exercise You may want to have a look at the functionality of cv2.copyMakeBorder(). This function is used to create the borders and paddings for images, and many of you will find it useful for your projects. The exploring of this function is left as an exercise for the readers. You can check the python OpenCV API documentation at the following location: http://docs.opencv.org/modules/refman.html Summary In this article, we learned how to perform arithmetic and logical operations on images and split images by their channels. We also learned how to display multiple images in a grid by using matplotlib. Resources for Article: Further resources on this subject: Raspberry Pi and 1-Wire [article] Raspberry Pi Gaming Operating Systems [article] The Raspberry Pi and Raspbian [article]
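As a starting point for the cv2.copyMakeBorder() exercise mentioned above, here is a small sketch. The input path (reused from this article's test set), the border width, and the window names are arbitrary assumptions; only cv2.copyMakeBorder() and its border-type flags come from the OpenCV API referenced in the exercise.

import cv2

# Assumed input: any image from the article's test set
img = cv2.imread('/home/pi/book/test_set/4.2.03.tiff', 1)

# Add a 20-pixel padding on every side.
# BORDER_CONSTANT fills the border with a solid color (black here);
# BORDER_REPLICATE repeats the outermost rows/columns of pixels instead.
bordered = cv2.copyMakeBorder(img, 20, 20, 20, 20,
                              cv2.BORDER_CONSTANT, value=[0, 0, 0])
replicated = cv2.copyMakeBorder(img, 20, 20, 20, 20, cv2.BORDER_REPLICATE)

cv2.imshow('Constant border', bordered)
cv2.imshow('Replicated border', replicated)
cv2.waitKey(0)
cv2.destroyAllWindows()

Compare the two output windows to see how the choice of border type changes the padding.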

Developing Location-based Services with Neo4j

Packt
02 Jun 2015
In this article by Ankur Goel, author of the book, Neo4j Cookbook, we will cover the following recipes: Installing the Neo4j Spatial extension Importing the Esri shapefiles Importing the OpenStreetMap files Importing data using the REST API Creating a point layer using the REST API Finding geometries within the bounding box Finding geometries within a distance Finding geometries within a distance using Cypher By definition, any database that is optimized to store and query the data that represents objects defined in a geometric space is called a spatial database. Although Neo4j is primarily a graph database, due to the importance of geospatial data in today's world, the spatial extension has been introduced in Neo4j as an unmanaged extension. It gives you most of the facilities, which are provided by common geospatial databases along with the power of connectivity through edges, which Neo4j, as a graph database, provides. In this article, we will take a look at some of the widely used use cases of Neo4j as a spatial database, and you will learn how typical geospatial operations can be performed on it. Before proceeding further, you need to install Neo4j on your system using one of the recipies that follow here. The installation will depend on your system type. (For more resources related to this topic, see here.) Single node installation of Neo4j over Linux Neo4j is a highly scalable graph database that runs over all the common platforms; it can be used as is or can be embedded inside applications as well. The following recipe will show you how to set up a single instance of Neo4j over the Linux operating system. Getting ready Perform the following steps to get started with this recipe: Download the community edition of Neo4j from http://www.neo4j.org/download for the Linux platform: $ wget http://dist.neo4j.org/neo4j-community-2.2.0-M02-unix.tar.gz Check whether JDK/JRE is installed for your operating system or not by typing this in the shell prompt: $ echo $JAVA_HOME If this command throws no output, install Java for your Linux distribution and also set the JAVA_HOME path How to do it... Now, let's install Neo4j over the Linux operating system, which is simple, as shown in the following steps: Extract the TAR file using the following command: $ tar –zxvf neo4j-community-2.2.0-M02-unix.tar.gz $ ls Go to the bin directory under the root folder: $ cd <neo4j-community-2.2.0-M02>/bin/ Start the Neo4j graph database server: $ ./neo4j start Check whether Neo4j is running or not using the following command: $ ./neo4j status Neo4j can also be monitored using the web console. Open http://<ip>:7474/webadmin, as shown in the following screenshot: The preceding diagram is a screenshot of the web console of Neo4j through which the server can be monitored and different Cypher queries can be run over the graph database. How it works... Neo4j comes with prebuilt binaries over the Linux operating system, which can be extracted and run over. Neo4j comes with both web-based and terminal-based consoles, over which the Neo4j graph database can be explored. See also During installation, you may face several kind of issues, such as max open files and so on. For more information, check out http://neo4j.com/docs/stable/server-installation.html#linux-install. Single node installation of Neo4j over Windows Neo4j is a highly scalable graph database that runs over all the common platforms; it can be used as is or can be embedded inside applications. 
The following recipe will show you how to set up a single instance of Neo4j over the Windows operating system. Getting ready Perform the following steps to get started with this recipe: Download the Windows installer from http://www.neo4j.org/download. This has both 32- and 64-bit prebuilt binaries Check whether Java is installed for the operating system or not by typing this in the cmd prompt: echo %JAVA_HOME% If this command throws no output, install Java for your Windows distribution and also set the JAVA_HOME path How to do it... Now, let's install Neo4j over the Windows operating system, which is simple, as shown here: Run the installer by clicking on the downloaded file: The preceding screenshot shows the Windows installer running. After the installation is complete, when you run the software, it will ask for the database location. Carefully choose the location as the entire graph database will be stored in this folder: The preceding screenshot shows the Windows installer asking for the graph database's location. The Neo4j browser can be opened by entering http://localhost:7474/ in the browser. The following screenshot depicts Neo4j started over the Windows platform: How it works... Neo4j comes with prebuilt binaries over the Windows operating system, which can be extracted and run over. Neo4j comes with both web-based and terminal-based consoles, over which the Neo4j graph database can be explored. See also During installation, you might face several kinds of issues such as max open files and so on. For more information, check out http://neo4j.com/docs/stable/server-installation.html#windows-install. Single node installation of Neo4j over Mac OS X Neo4j is a highly scalable graph database that runs over all the common platforms; it can be used as in a mode and can also be embedded inside applications. The following recipe will show you how to set up a single instance of Neo4j over the OS X operating system. Getting ready Perform the following steps to get started with this recipe: Download the binary version of Neo4j from http://www.neo4j.org/download for the Mac OS X platform and the community edition as shown in the following command: $ wget http://dist.neo4j.org/neo4j-community-2.2.0-M02-unix.tar.gz Check whether Java is installed for the operating system or not by typing this over the cmd prompt: $ echo $JAVA_HOME If this command throws no output, install Java for your Mac OS X distribution and also set the JAVA_HOME path How to do it... Now, let's install Neo4j over the OS X operating system, which is very simple, as shown in the following steps: Extract the TAR file using the following command: $ tar –zxvf neo4j-community-2.2.0-M02-unix.tar.gz $ ls Go to the bin directory under the root folder: $ cd <neo4j-community-2.2.0-M02>/bin/ Start the Neo4j graph database server: $ ./neo4j start Check whether Neo4j is running or not using the following command: $ ./neo4j status How it works... Neo4j comes with prebuilt binaries over the OS X operating system, which can be extracted and run over. Neo4j comes with both web-based and terminal-based consoles, over which the Neo4j graph database can be explored. There's more… Neo4j over Mac OS X can also be installed using brew, which has been explained here. Run the following commands over the shell: $ brew update $ brew install neo4j After this, Neo4j can be started using the start option with the Neo4j command: $ neo4j start This will start the Neo4j server, which can be accessed from the default URL (http://localhost:7474). 
The installation can be reached using the following commands: $ cd /usr/local/Cellar/neo4j/ $ cd {NEO4J_VERSION}/libexec/ You can learn more about OS X installation from http://neo4j.com/docs/stable/server-installation.html#osx-install. Due to the limitation of content that can provided in this article, we assume you would already know how to perform the basic operations using Neo4j such as creating a graph, importing data from different formats into Neo4j, the common configurations used for Neo4j. Installing the Neo4j Spatial extension Neo4j Spatial is a library of utilities for Neo4j that facilitates the enabling of spatial operations on the data. Even on the existing data, geospatial indexes can be added and many geospatial operations can be performed on it. In this recipe, you will learn how to install the Neo4j Spatial extension. Getting ready The following steps will get you started with this recipe: Install Neo4j using the earlier recipes in this article. Install the dependencies listed in the pom.xml file for this project from https://github.com/neo4j-contrib/spatial/blob/master/pom.xml. Install Maven using the following command for your operating system: For Debian systems: apt-get install maven For Redhat/Centos systems: yum install apache-maven To install on a Windows-based system, please refer to https://maven.apache.org/guides/getting-started/windows-prerequisites.html. How to do it... Now, let's install the Neo4j Spatial plugin, which is very simple to do, by following these steps: Clone the GitHub repository for spatial extension: git clone git://github.com/neo4j/spatial spatial Move into the spatial directory: cd spatial Build the code using Maven. This will download all the dependencies, compile the library, run the tests, and install the artifacts in the local repository: mvn clean install Move the built artifact to the Neo4j plugin directory: unzip target/neo4j/neo4j-spatial-0.11-SNAPSHOT-server-plugin.zip $NEO4J_ROOT_DIR/plugins/ Restart the Neo4j graph database server: $NEO4J_ROOT_DIR/bin/neo4j restart Check whether the Neo4j Spatial plugin is properly installed or not: curl –L http://<neo4j_server_ip>:<port>/db/data If you are using Neo4j 2.2 or higher, then use the following command: curl --user neo4j:<password> http://localhost:7474/db/data/ The output will look like what is shown in the following screenshot, which shows the Neo4j Spatial plugin installed: How it works... Neo4j Spatial is a library of utilities that helps perform geospatial operations on the dataset, which is present in the Neo4j graph database. You can add geospatial indexes on the existing data and perform operations, such as data within a specified region or within some distance of point of interest. Neo4j Spatial comes as an unmanaged extension, which can be easily installed as well as removed from Neo4j. The extension does not interfere with any of the core functionality. There's more… To read more about Neo4j Spatial extension, we encourage users to visit the GitHub repository at https://github.com/neo4j-contrib/spatial. Also, it will be good to read about the Neo4j unmanaged extension in general (http://neo4j.com/docs/stable/server-unmanaged-extensions.html). Importing the Esri shapefiles The shapefile format is a popular geospatial vector data format for the Geographic Information System (GIS) software. It is developed and regulated by Esri as an open specification for data interoperability among Esri. 
It is very popular among GIS products, and many times, the data format is in Esri shapefiles. The main file is the .shp file, which contains the geometry data. The binary data file consists of a single, fixed-length header followed by variable-length data records. In this recipe, you will learn how to import the Esri shapefiles in the Neo4j graph database. Getting ready Perform the following steps to get started with this recipe: Install Neo4j using the earlier recipies in this article. Install the Neo4j Spatial plugin using the recipe Installing the Neo4j Spatial extension, from this article. Restart the Neo4j graph database server using the following command: $NEO4J_ROOT_DIR/bin/neo4j restart How to do it... Since the Esri shapefile format is, by default, supported by the Neo4j Spatial extension, it is very easy to import data using the Java API from it using the following steps: Download a sample .shp file from http://www.statsilk.com/maps/download-free-shapefile-maps. Execute the following commands: wget http://biogeo.ucdavis.edu/data/diva/adm/AFG_adm.zip unzip AFG_adm.zip mv AFG_adm1.* /data The ShapefileImporter method lets you import data from the Esri shapefile using the following code: GraphDatabaseService esri_database = new GraphDatabaseFactory().newEmbeddedDatabase(storeDir); try {    ShapefileImporter importer = new ShapefileImporter(esri_database); importer.importFile("/data/AFG_adm1.shp", "layer_afganistan");        } finally {            esri_database.shutdown(); } Using similar code, we can import multiple SHP files into the same layer or different layers, as shown in the following code snippet: File dir = new File("/data");      FilenameFilter filter = new FilenameFilter() {          public boolean accept(File dir, String name) {      return name.endsWith(".shp"); }};   File[] listOfFiles = dir.listFiles(filter); for (final File fileEntry : listOfFiles) {      System.out.println("FileEntry Directory "+fileEntry);    try {    importer.importFile(fileEntry.toString(), "layer_afganistan"); } catch(Exception e){    esri_database.shutdown(); }} How it works... The Neo4j Spatial extension natively supports the import of data in the shapefile format. Using the ShapefileImporter method, any SHP file can be easily imported into Neo4j. The ShapefileImporter method takes two arguments: the first argument is the path to the SHP files and the second is the layer in which it should be imported. There's more… We will encourage you to read more about shapefiles and layers in general; for this, please visit the following URLs for more information: http://en.wikipedia.org/wiki/Shapefile http://wiki.openstreetmap.org/wiki/Shapefiles http://www.gdal.org/drv_shapefile.html Importing the OpenStreetMap files OpenStreetMap is a powerhouse of data when it comes to geospatial data. It is a collaborative project to create a free, editable map of the world. OpenStreetMap provides geospatial data in the .osm file format. To read more about .osm files in general, check out http://wiki.openstreetmap.org/wiki/.osm. In this recipe, you will learn how to import the .osm files in the Neo4j graph database. Getting ready Perform the following steps to get started with this recipe: Install Neo4j using the earlier recipies in this article. Install the Neo4j Spatial plugin using the recipe Installing the Neo4j Spatial extension, from this article. Restart the Neo4j graph database server: $NEO4J_ROOT_DIR/bin/neo4j restart How to do it... 
Since the OSM file format is, by default, supported by the Neo4j Spatial extension, it is very easy to import data from it using the following steps: Download one sample .osm file from http://wiki.openstreetmap.org/wiki/Planet.osm#Downloading. Execute the following commands: wget http://download.geofabrik.de/africa-latest.osm.bz2 bunzip2 africa-latest.osm.bz2 mv africa-latest.osm /data The importfile method lets you import data from the .osm file, as shown in the following code snippet: OSMImporter importer = new OSMImporter("africa"); try {    importer.importFile(osm_database, "/data/botswana-latest.osm", false, 5000, true); } catch(Exception e){    osm_database.shutdown(); } importer.reIndex(osm_database,10000); Using similar code, we can import multiple OSM files into the same layer or different layers, as shown here: File dir = new File("/data");      FilenameFilter filter = new FilenameFilter() {          public boolean accept(File dir, String name) {      return name.endsWith(".osm"); }}; File[] listOfFiles = dir.listFiles(filter); for (final File fileEntry : listOfFiles) {      System.out.println("FileEntry Directory "+fileEntry);    try {importer.importFile(osm_database, fileEntry.toString(), false, 5000, true); importer.reIndex(osm_database,10000); } catch(Exception e){    osm_database.shutdown(); } How it works... This is slightly more complex as it requires two phases: the first phase requires a batch inserter performing insertions into the database, and the second phase requires reindexing of the database with the spatial indexes. There's more… We will encourage you to read more about the OSM file and the batch inserter in general; for this, visit the following URLs: http://en.wikipedia.org/wiki/OpenStreetMap http://wiki.openstreetmap.org/wiki/OSM_file_formats http://neo4j.com/api_docs/2.0.2/org/neo4j/unsafe/batchinsert/BatchInserter.html Importing data using the REST API The recipes that you have learned until now consist of Java code, which is used to import spatial data into Neo4j. However, by using any other programming language, such as Python or Ruby, spatial data can be easily imported into Neo4j using the REST interface. In this recipe, you will learn how to import geospatial data using the REST interface. Getting ready Perform the following steps to get started with this recipe: Install Neo4j using the earlier recipies in this article. Install the Neo4j Spatial plugin using the recipe Installing the Neo4j Spatial extension, from this article. Restart the Neo4j graph database server: $NEO4J_ROOT_DIR/bin/neo4j restart How to do it... Using the REST interface is a very simple three-stage process to import the geospatial data into the Neo4j graph database server. 
For the sake of simplicity, the code of the Python language has been used to explain this recipe, although you can also use curl for this recipe: Create the spatial index, as shown in the following code: # Create geom index url = http://<neo4j_server_ip>:<port>/db/data/index/node/ payload= {    "name" : "geom",    "config" : {        "provider" : "spatial",        "geometry_type" : "point",        "lat" : "lat",        "lon" : "lon"    } } Create nodes as lat/lng and data as properties, as shown in the following code: url = "http://<neo4j_server_ip>:<port>/db/data/node" payload = {'lon': 38.6, 'lat': 67.88, 'name': 'abc'} req = urllib2.Request(url) req.add_header('Content-Type', 'application/json') response = urllib2.urlopen(req, json.dumps(payload)) node = json.loads(response.read())['self'] Add the preceding created node to the geospatial index, as shown in the following code snippet: #add node to geom index url = "http://<neo4j_server_ip>:<port>/db/data/index/node/geom" payload = {'value': 'dummy', 'key': 'dummy', 'uri': node} req = urllib2.Request(url) req.add_header('Content-Type', 'application/json') response = urllib2.urlopen(req, json.dumps(payload)) print response.read() The data will look like what is shown in the following screenshot after the addition of a few more nodes; this screenshot depicts the Neo4j Spatial data that has been imported: The following screenshot depicts the properties of a single node, which has been imported into Neo4j: How it works... Adding geospatial data using the REST API is a three-step process, listed as follows: Create a geospatial index using an endpoint, by following this URL as a template: http://<neo4j_server_ip>:<port>/db/data/index/node/ Add a node to the Neo4j graph database using an endpoint, by following this URL as a template: http://<neo4j_server_ip>:<port>/db/data/node Add the created node to the geospatial index using the endpoint, by following this URL as a template: http://<neo4j_server_ip>:<port>/db/data/index/node/geom There's more… We encourage you to read more about the spatial REST interfaces in general (http://neo4j-contrib.github.io/spatial/). Creating a point layer using the REST API In this recipe, you will learn how to create a point layer using the REST API interface. Getting ready Perform the following steps to get started with this recipe: Install Neo4j using the earlier recipies in this article. Install the Neo4j Spatial plugin using the recipe Installing the Neo4j Spatial extension, from this article. Restart the Neo4j graph database server using the following command: $NEO4J_ROOT_DIR/bin/neo4j restart How to do it... In this recipe, we will use the http://<neo4j_server_ip>:/db/data/ext/ SpatialPlugin/graphdb/addSimplePointlayer endpoint to create a simple point layer. Let's add a simple point layer, as shown in the following code: "layer" : "geom", "lat"   : "lat" , "lon"   : "lon",url = "http://<neo4j_server_ip>:<port>//db/data/ext/SpatialPlugin/graphdb/addSimplePointlayer payload= { "layer" : "geom", "lat"   : "lat" , "lon"   : "lon", } r = requests.post(url, data=json.dumps(payload), headers=headers) The data will look like what is shown in the following screenshot; this screenshot shows the output of the create point in layer query: How it works... Creating a point in the layer query is based on the REST interface, which the Neo4j Spatial plugin already provides with it. 
There's more… We will encourage you to read more about spatial REST interfaces in general; to do this, visit http://neo4j-contrib.github.io/spatial/. Finding geometries within the bounding box In this recipe, you will learn how to find all the geometries within the bounding box using the spatial REST interface. Getting ready Perform the following steps to get started with this recipe: Install Neo4j using the earlier recipies in this article. Install the Neo4j Spatial plugin using the recipe Installing the Neo4j Spatial extension, from this article. Restart the Neo4j graph database server using the following command: $NEO4J_ROOT_DIR/bin/neo4j restart How to do it... In this recipe, we will use the following endpoint to find all the geometries within the bounding box:http://<neo4j_server_ip>:<port>/db/data/ext/SpatialPlugin/graphdb/findGeometriesInBBox. Let's find the all the geometries, using the following information: "minx" : 0.0, "maxx" : 100.0, "miny" : 0.0, "maxy" : 100.0 url = "http://<neo4j_server_ip>:<port>//db/data/ext/SpatialPlugin/graphdb payload= { "layer" : "geom", "minx" : 0.0, "maxx" : 100.3, "miny" : 0.0, "maxy" : 100.0 } r = requests.post(url, data=json.dumps(payload), headers=headers) The data will look like what is shown in the following screenshot; this screenshot shows the output of the bounding box query: How it works... Finding geometries in the bounding box is based on the REST interface, which the Neo4j Spatial plugin provides. The output of the REST call contains an array of the nodes, containing the node's id, lat/lng, and its incoming/outgoing relationships. In the preceding output, you can see node id54 returned as the output. There's more… We will encourage you to read more about spatial REST interfaces in general; to do this, visit http://neo4j-contrib.github.io/spatial/. Finding geometries within a distance In this recipe, you will learn how to find all the geometries within a distance using the spatial REST interface. Getting ready Perform the following steps to get started with this recipe: Install Neo4j using the earlier recipies in this article. Install the Neo4j Spatial plugin using the recipe Installing the Neo4j Spatial extension, from this article. Restart the Neo4j graph database server using the following command: $NEO4J_ROOT_DIR/bin/neo4j restart How to do it... In this recipe, we will use the following endpoint to find all the geometries within a certain distance: http://<neo4j_server_ip>:<port>/db/data/ext/SpatialPlugin/graphdb/findGeometriesWithinDistance. Let's find all the geometries between the specified distance using the following information: "pointX" : -116.67, "pointY" : 46.89, "distanceinKm" : 500, url = "http://<neo4j_server_ip>:<port>//db/data/ext/SpatialPlugin/graphdb/findGeometriesWithinDistance payload= { "layer" : "geom", "pointY" : 46.8625, "pointX" : -114.0117, "distanceInKm" : 500, } r = requests.post(url, data=json.dumps(payload), headers=headers) The data will look like what is shown in the following screenshot; this screenshot shows the output of a withinDistance query: How it works... Finding geometries within a distance is based on the REST interface that the Neo4j Spatial plugin provides. The output of the REST call contains an array of the nodes, containing the node's id, lat/lng, and its incoming/outgoing relationships. In the preceding output, we can see node id71 returned as the output. There's more… We encourage you to read more about the spatial REST interfaces in general (http://neo4j-contrib.github.io/spatial/). 
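Before moving on to the Cypher variant, here is a minimal sketch that strings the previous REST recipes together in Python with the requests library: create the spatial index, add a node, index it, and run a withinDistance query. The host, port, and coordinate values are placeholder assumptions, and on Neo4j 2.2 or higher the requests would additionally need the authentication mentioned earlier; the endpoints and payload keys themselves are the ones used in the recipes above.

import json
import requests

BASE = "http://localhost:7474/db/data"          # assumed host and port
HEADERS = {"Content-Type": "application/json"}  # add auth here for Neo4j 2.2+

# 1. Create the spatial index (same payload as the REST import recipe)
requests.post(BASE + "/index/node/", headers=HEADERS, data=json.dumps({
    "name": "geom",
    "config": {"provider": "spatial", "geometry_type": "point",
               "lat": "lat", "lon": "lon"}
}))

# 2. Create a node carrying lat/lon properties
node = requests.post(BASE + "/node", headers=HEADERS,
                     data=json.dumps({"lon": -114.0117, "lat": 46.8625,
                                      "name": "example"})).json()["self"]

# 3. Add the node to the geom index
requests.post(BASE + "/index/node/geom", headers=HEADERS,
              data=json.dumps({"key": "dummy", "value": "dummy", "uri": node}))

# 4. Ask for everything within 500 km of a point of interest
found = requests.post(BASE + "/ext/SpatialPlugin/graphdb/findGeometriesWithinDistance",
                      headers=HEADERS,
                      data=json.dumps({"layer": "geom", "pointX": -114.0117,
                                       "pointY": 46.8625, "distanceInKm": 500}))
print(found.json())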
Finding geometries within a distance using Cypher
In this recipe, you will learn how to find all the geometries within a distance using a Cypher query.
Getting ready
Perform the following steps to get started with this recipe:
Install Neo4j using the earlier recipes in this article.
Install the Neo4j Spatial plugin using the recipe Installing the Neo4j Spatial extension, from this article.
Restart the Neo4j graph database server:
$NEO4J_ROOT_DIR/bin/neo4j restart
How to do it...
In this recipe, we will use the following endpoint to run the query: http://<neo4j_server_ip>:<port>/db/data/cypher
Let's find all the geometries within a distance using a Cypher query:
url = "http://<neo4j_server_ip>:<port>/db/data/cypher"
payload= {
"query" : "START n=node:geom('withinDistance:[46.9163, -114.0905, 500.0]') RETURN n"
}
r = requests.post(url, data=json.dumps(payload), headers=headers)
The data will look like what is shown in the following screenshot; this screenshot shows the output of the withinDistance query that uses Cypher: The following is the Cypher output in the Neo4j console:
How it works...
Cypher comes with a withinDistance query, which takes three parameters: lat, lon, and the search distance.
There's more…
We will encourage you to read more about the spatial REST interfaces in general (http://neo4j-contrib.github.io/spatial/).
Summary
This article, Developing Location-based Services with Neo4j, teaches you how to deal with one of the most important aspects of today's data, location, in Neo4j. You have learnt how to import geospatial data into Neo4j and run queries, such as proximity searches, bounding boxes, and so on.
Resources for Article:
Further resources on this subject: Recommender systems dissected Components [article] Working with a Neo4j Embedded Database [article] Differences in style between Java and Scala code [article]

Cleaning Data in PDF Files

Packt
25 May 2015
In this article by Megan Squire, author of the book Clean Data, we will experiment with several data decanters to extract all the good stuff hidden inside inscrutable PDF files. We will explore the following topics: What PDF files are for and why it is difficult to extract data from them How to copy and paste from PDF files, and what to do when this does not work How to shrink a PDF file by saving only the pages that we need How to extract text and numbers from a PDF file using the tools inside a Python package called pdfMiner How to extract tabular data from within a PDF file using a browser-based Java application called Tabula How to use the full, paid version of Adobe Acrobat to extract a table of data (For more resources related to this topic, see here.) Why is cleaning PDF files difficult? Files saved in Portable Document Format (PDF) are a little more complicated than some of the text files. PDF is a binary format that was invented by Adobe Systems, which later evolved into an open standard so that multiple applications could create PDF versions of their documents. The purpose of a PDF file is to provide a way of viewing the text and graphics in a document independent of the software that did the original layout. In the early 1990s, the heyday of desktop publishing, each graphic design software package had a different proprietary format for its files, and the packages were quite expensive. In those days, in order to view a document created in Word, Pagemaker, or Quark, you would have to open the document using the same software that had created it. This was especially problematic in the early days of the Web, since there were not many available techniques in HTML to create sophisticated layouts, but people still wanted to share files with each other. PDF was meant to be a vendor-neutral layout format. Adobe made its Acrobat Reader software free for anyone to download, and subsequently the PDF format became widely used. Here is a fun fact about the early days of Acrobat Reader. The words click here when entered into Google search engine still bring up Adobe's Acrobat PDF Reader download website as the first result, and have done so for years. This is because so many websites distribute PDF files along with a message saying something like, "To view this file you must have Acrobat Reader installed. Click here to download it." Since Google's search algorithm uses the link text to learn what sites go with what keywords, the keyword click here is now associated with Adobe Acrobat's download site. PDF is still used to make vendor- and application-neutral versions of files that have layouts that are more complicated than what could be achieved with plain text. For example, viewing the same document in the various versions of Microsoft Word still sometimes causes documents with lots of embedded tables, styles, images, forms, and fonts to look different from one another. This can be due to a number of factors, such as differences in operating systems or versions of the installed Word software itself. Even with applications that are intended to be compatible between software packages or versions, subtle differences can result in incompatibilities. PDF was created to solve some of this. Right away we can tell that PDF is going to be more difficult to deal with than a text file, because it is a binary format, and because it has embedded fonts, images, and so on. 
So most of the tools in our trusty data cleaning toolbox, such as text editors and command-line tools (less), are largely useless with PDF files. Fortunately, there are still a few tricks we can use to get the data out of a PDF file.

Try simple solutions first – copying

Suppose that on your way to decant your bottle of fine red wine, you spill the bottle on the floor. Your first thought might be that this is a complete disaster and you will have to replace the whole carpet. But before you start ripping out the entire floor, it is probably worth trying to clean the mess with an old bartender's trick: club soda and a damp cloth. In this section, we outline a few things to try first, before getting involved in an expensive file renovation project. They might not work, but they are worth a try.

Our experimental file

Let's practice cleaning PDF data by using a real PDF file. We also do not want this experiment to be too easy, so let's choose a very complicated file. Suppose we are interested in pulling the data out of a file we found on the Pew Research Center's website called "Is College Worth It?". Published in 2011, this PDF file is 159 pages long and contains numerous data tables showing various ways of measuring if attaining a college education in the United States is worth the investment. We would like to find a way to quickly extract the data within these numerous tables so that we can run some additional statistics on it. For example, here is what one of the tables in the report looks like:

This table is fairly complicated. It only has six columns and eight rows, but several of the rows take up two lines, and the header row text is only shown on five of the columns. The complete report can be found at the PewResearch website at http://www.pewsocialtrends.org/2011/05/15/is-college-worth-it/, and the particular file we are using is labeled Complete Report: http://www.pewsocialtrends.org/files/2011/05/higher-ed-report.pdf.

Step one – try copying out the data we want

The data we will experiment on in this example is found on page 149 of the PDF file (labeled page 143 in their document). If we open the file in a PDF viewer, such as Preview on Mac OSX, and attempt to select just the data in the table, we already see that some strange things are happening. For example, even though we did not mean to select the page number (143), it got selected anyway. This does not bode well for our experiment, but let's continue. Copy the data out by using Command-C or select Edit | Copy.

How text looks when selected in this PDF from within Preview

Step two – try pasting the copied data into a text editor

The following screenshot shows how the copied text looks when it is pasted into Text Wrangler, our text editor:

Clearly, this data is not in any sensible order after copying and pasting it. The page number is included, the numbers are horizontal instead of vertical, and the column headers are out of order. Even some of the numbers have been combined; for example, the final row contains the numbers 4,4,3,2, but in the pasted version, this becomes a single number 4432. It would probably take longer to clean up this data manually at this point than it would have taken just to retype the original table. We can conclude that with this particular PDF file, we are going to have to take stronger measures to clean it.
Step three – make a smaller version of the file

Our copying and pasting procedures have not worked, so we have resigned ourselves to the fact that we are going to need to prepare for more invasive measures. Perhaps if we are not interested in extracting data from all 159 pages of this PDF file, we can identify just the area of the PDF that we want to operate on, and save that section to a separate file.

To do this in Preview on Mac OSX, launch the File | Print… dialog box. In the Pages area, we will enter the range of pages we actually want to copy. For the purpose of this experiment, we are only interested in page 149, so enter 149 in both the From: and to: boxes as shown in the following screenshot. Then from the PDF dropdown box at the bottom, select Open PDF in Preview. You will see your single-page PDF in a new window. From here, we can save this as a new file and give it a new name, such as report149.pdf or the like.

Another technique to try – pdfMiner

Now that we have a smaller file to experiment with, let's try some programmatic solutions to extract the text and see if we fare any better. pdfMiner is a Python package with two embedded tools to operate on PDF files. We are particularly interested in experimenting with one of these tools, a command-line program called pdf2txt that is designed to extract text from within a PDF document. Maybe this will be able to help us get those tables of numbers out of the file correctly.

Step one – install pdfMiner

Launch the Canopy Python environment. From the Canopy Terminal Window, run the following command:

pip install pdfminer

This will install the entire pdfMiner package and all its associated command-line tools. The documentation for pdfMiner and the two tools that come with it, pdf2txt and dumppdf, is located at http://www.unixuser.org/~euske/python/pdfminer/.

Step two – pull text from the PDF file

We can extract all text from a PDF file using the command-line tool called pdf2txt.py. To do this, use the Canopy Terminal and navigate to the directory where the file is located. The basic format of the command is pdf2txt.py <filename>. If you have a larger file that has multiple pages (or you did not already break the PDF into smaller ones), you can also run pdf2txt.py -p149 <filename> to specify that you only want page 149.

Just as with the preceding copy-and-paste experiment, we will try this technique not only on the tables located on page 149, but also on the Preface on page 3. To extract just the text from page 3, we run the following command:

pdf2txt.py -p3 pewReport.pdf

After running this command, the extracted preface of the Pew Research report appears in our command-line window. To save this text to a file called pewPreface.txt, we can simply add a redirect to our command line as follows:

pdf2txt.py -p3 pewReport.pdf > pewPreface.txt

But what about those troublesome data tables located on page 149? What happens when we use pdf2txt on those? We can run the following command:

pdf2txt.py pewReport149.pdf

The results are slightly better than copy and paste, but not by much. The actual data output section is shown in the following screenshot. The column headers and data are mixed together, and the data from different columns are shown out of order. We will have to declare the tabular data extraction portion of this experiment a failure, though pdfMiner worked reasonably well on line-by-line text-only extraction. Remember that your success with each of these tools may vary. Much of it depends on the particular characteristics of the original PDF file.
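When pdf2txt.py does work for your documents, the fact that it is a command-line tool means the extraction is easy to script. The short sketch below, for example, loops over a list of page numbers and saves each page's text to its own file by shelling out to pdf2txt.py. It is only a starting point: it assumes pdf2txt.py is on your PATH, and the filenames used here (pewReport.pdf and the pewPage<N>.txt outputs) are placeholders you would change to match your own files.

import subprocess
from pathlib import Path

# Pages we want to pull out of the report (1-based, as pdf2txt.py expects).
PAGES = [3, 149]
SOURCE_PDF = "pewReport.pdf"   # placeholder filename; change to match your copy

for page in PAGES:
    out_file = Path(f"pewPage{page}.txt")
    # Equivalent to: pdf2txt.py -p3 pewReport.pdf > pewPage3.txt
    with out_file.open("w", encoding="utf-8") as handle:
        subprocess.run(
            ["pdf2txt.py", "-p", str(page), SOURCE_PDF],
            stdout=handle,   # redirect the extracted text into the output file
            check=True,      # raise an error if pdf2txt.py fails on a page
        )
    print(f"Wrote {out_file}")

Because each page lands in its own text file, it is then easy to compare runs or feed the output into later cleaning steps.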
It looks like we chose a very tricky PDF for this example, but let's not get disheartened. Instead, we will move on to another tool and see how we fare with it.

Third choice – Tabula

Tabula is a Java-based program to extract data within tables in PDF files. We will download the Tabula software and put it to work on the tricky tables in our page 149 file.

Step one – download Tabula

Tabula is available to be downloaded from its website at http://tabula.technology/. The site includes some simple download instructions. On Mac OSX version 10.10.1, I had to download the legacy Java 6 application before I was able to run Tabula. The process was straightforward and required only following the on-screen instructions.

Step two – run Tabula

Launch Tabula from inside the downloaded .zip archive. On the Mac, the Tabula application file is called simply Tabula.app. You can copy this to your Applications folder if you like. When Tabula starts, it launches a tab or window within your default web browser at the address http://127.0.0.1:8080/. The initial action portion of the screen looks like this:

The warning that auto-detecting tables takes a long time is true. For the single-page pewResearch149.pdf file, with three tables in it, table auto-detection took two full minutes and resulted in an error message about an incorrectly formatted PDF file.

Step three – direct Tabula to extract the data

Once Tabula reads in the file, it is time to direct it where the tables are. Using your mouse cursor, select the table you are interested in. I drew a box around the entire first table. Tabula took about 30 seconds to read in the table, and the results are shown as follows:

Compared to the way the data was read with copy and paste and pdf2txt, this data looks great. But if you are not happy with the way Tabula reads in the table, you can repeat this process by clearing your selection and redrawing the rectangle.

Step four – copy the data out

We can use the Download Data button within Tabula to save the data to a friendlier file format, such as CSV or TSV.

Step five – more cleaning

Open the CSV file in Excel or a text editor and take a look at it. At this stage, we have had a lot of failures in getting this PDF data extracted, so it is very tempting to just quit now. Fortunately, the remaining cleanup is small. Here are some simple data cleaning tasks:

- We can combine all the two-line text cells into a single cell. For example, in column B, many of the phrases take up more than one row. Prepare students to be productive and members of the workforce should be in one cell as a single phrase. The same is true for the headers in Rows 1 and 2 (4-year and Private should be in a single cell). To clean this in Excel, create a new column between columns B and C. Use the concatenate() function to join B3:B4, B5:B6, and so on. Use Paste-Special to add the new concatenated values into a new column. Then remove the two columns you no longer need. Do the same for rows 1 and 2.
- Remove blank lines between rows.

When these procedures are finished, the data looks like this:

Tabula might seem like a lot of work compared to cutting and pasting data or running a simple command-line tool. That is true, unless your PDF file turns out to be finicky like this one was. Remember that specialty tools are there for a reason—but do not use them unless you really need them. Start with a simple solution first and only proceed to a more difficult tool when you really need it.
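If you expect to repeat this cleanup on more than one exported table, the same two fixes can be scripted instead of done by hand in Excel. The sketch below is one way to do it with Python's standard csv module. It assumes a hypothetical layout in which a "continuation" row carries text in the label column but nothing in the other columns, which is how the Tabula export of this particular table behaved for us; the filenames are placeholders. Treat it as a starting point rather than a general-purpose cleaner.

import csv

def clean_tabula_csv(in_path, out_path, label_col=0):
    """Merge two-line label cells and drop blank rows in a Tabula CSV export.

    Assumption: a continuation row has text in the label column but every
    other cell empty, so its label text belongs with the previous row.
    """
    cleaned = []
    with open(in_path, newline="", encoding="utf-8") as f:
        for row in csv.reader(f):
            # Skip rows that are entirely blank.
            if not any(cell.strip() for cell in row):
                continue
            others_empty = all(
                not cell.strip() for i, cell in enumerate(row) if i != label_col
            )
            if cleaned and others_empty and row[label_col].strip():
                # Continuation of the previous row's label: glue the text on.
                cleaned[-1][label_col] += " " + row[label_col].strip()
            else:
                cleaned.append([cell.strip() for cell in row])
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        csv.writer(f).writerows(cleaned)

# Placeholder filenames; point these at your own Tabula export.
clean_tabula_csv("tabula-page149.csv", "tabula-page149-clean.csv")

The header cells that span two rows (4-year and Private) still deserve a manual look, but the bulk of the merging and blank-row removal is handled.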
When all else fails – fourth technique

Adobe Systems sells a paid, commercial version of their Acrobat software that has some additional features above and beyond just allowing you to read PDF files. With the full version of Acrobat, you can create complex PDF files and manipulate existing files in various ways. One of the features that is relevant here is the Export Selection As… option found within Acrobat.

To get started using this feature, launch Acrobat and use the File | Open dialog to open the PDF file. Within the file, navigate to the table holding the data you want to export. The following screenshot shows how to select the data from the page 149 PDF we have been operating on. Use your mouse to select the data, then right-click and choose Export Selection As…

At this point, Acrobat will ask you how you want the data exported. CSV is one of the choices. Excel Workbook (.xlsx) would also be a fine choice if you are sure you will not want to also edit the file in a text editor. Since I know that Excel can also open CSV files, I decided to save my file in that format so I would have the most flexibility between editing in Excel and my text editor. After choosing the format for the file, we will be prompted for a filename and location for where to save the file. When we launch the resulting file, either in a text editor or in Excel, we can see that it looks a lot like the Tabula version we saw in the previous section. Here is how our CSV file will look when opened in Excel:

At this point, we can use the exact same cleaning routine we used with the Tabula data, where we concatenated the B2:B3 cells into a single cell and then removed the empty rows.

Summary

The goal of this article was to learn how to export data out of a PDF file. Like sediment in a fine wine, the data in PDF files can appear at first to be very difficult to separate. Unlike decanting wine, however, which is a very passive process, separating PDF data took a lot of trial and error. We learned four ways of working with PDF files to clean data: copying and pasting, pdfMiner, Tabula, and Acrobat export. Each of these tools has certain strengths and weaknesses:

- Copying and pasting costs nothing and takes very little work, but is not as effective with complicated tables.
- pdfMiner/pdf2txt is also free, and as a command-line tool, it could be automated. It also works on large amounts of data. But like copying and pasting, it is easily confused by certain types of tables.
- Tabula takes some work to set up, and since it is a product undergoing development, it does occasionally give strange warnings. It is also a little slower than the other options. However, its output is very clean, even with complicated tables.
- Acrobat gives similar output to Tabula, but with almost no setup and very little effort. It is a paid product.

By the end, we had a clean dataset that was ready for analysis or long-term storage.

Resources for Article: Further resources on this subject: Machine Learning Using Spark MLlib [article] Data visualization [article] First steps with R [article]

Query complete/suggest

Packt
25 May 2015
37 min read
This article by the authors David Smiley, Eric Pugh, Kranti Parisa, and Matt Mitchel of the book, Apache Solr Enterprise Search Server - Third Edition, covers one of the most effective features of a search user interface—automatic/instant-search or completion of query input in a search input box. It is typically displayed as a drop-down menu that appears automatically after typing.

(For more resources related to this topic, see here.)

There are several ways this can work:

- Instant-search: Here, the menu is populated with search results. Each row is a document, just like the regular search results are, and as such, choosing one takes the user directly to the information instead of a search results page. At your discretion, you might opt to consider the last word partially typed. Examples of this are the URL bar in web browsers and various person search services. This is particularly effective for quick lookup scenarios against identifying information such as a name/title/identifier. It's less effective for broader searches. It's commonly implemented either with edge n-grams or with the Suggester component.
- Query log completion: If your application has sufficient query volume, then you should perform the query completion against previously executed queries that returned results. The pop-up menu is then populated with queries that others have typed. This is what Google does. It takes a bit of work to set this up. To get the query string and other information, you could write a custom search component, or parse Solr's log files, or hook into the logging system and parse it there. The query strings could be appended to a plain query log file, or inserted into a database, or added directly to a Solr index. Putting the data into a database before it winds up in a Solr index affords more flexibility on how to ultimately index it in Solr. Finally, at this point, you could index the field with an EdgeNGramTokenizer and perform searches against it, or use a KeywordTokenizer and then use one of the approaches listed for query term completion below. We recommend reading this excellent article by Jay Hill on doing this with EdgeNGrams at http://lucidworks.com/blog/auto-suggest-from-popular-queries-using-edgengrams/.

Monitor your user's queries! Even if you don't plan to do query log completion, you should capture useful information about each request for ancillary usage analysis, especially to monitor which searches return no results. Capture the request parameters, the response time, the result count, and add a timestamp.

- Query term completion: The last word of the user's query is searched within the index as a prefix, and other indexed words starting with that prefix are provided. This type is an alternative to query log completion and it's easy to implement. There are several implementation approaches: facet the word using facet.prefix, use Solr's Suggester feature, or use the Terms component. You should consider these choices in that order.
- Facet/Field value completion: This is similar to query term completion, but it is done on data that you would facet or filter on. The pop-up menu of choices will ideally give suggestions across multiple fields with a label telling you which field each suggestion is for, and the value will be the exact field value, not the subset of it that the user typed. This is particularly useful when there are many possible filter choices. We've seen it used at Mint.com and elsewhere to great effect, but it is under-utilized in our opinion. If you don't have many fields to search, then the Suggester component could be used with one dictionary per field. Otherwise, build a search index dedicated to this information that contains one document per field and value pair, and use an edge n-gram approach to search it.
There are other interesting query completion concepts we've seen on sites too, and some of these can be combined effectively. First, we'll cover a basic approach to instant-search using edge n-grams. Next, we'll describe three approaches to implementing query term completion—it's a popular type of query completion, and these approaches highlight different technologies within Solr. Lastly, we'll cover an approach to implement field-value suggestions for one field at a time, using the Suggester search component.

Instant-search via edge n-grams

As mentioned in the beginning of this section, instant-search is a technique in which a partial query is used to suggest a set of relevant documents, not terms. It's great for quickly finding documents by name or title, skipping the search results page. Here, we'll briefly describe how you might implement this approach using edge n-grams, which you can think of as a set of token prefixes. This is much faster than the equivalent wildcard query because the prefixes are all indexed. The edge n-gram technique is arguably more flexible than other suggest approaches: it's possible to do custom sorting or boosting, to use the highlighter easily to highlight the query, to offer infix suggestions (it isn't limited to matching titles left-to-right), and it's possible to filter the suggestions with a filter query, such as the current navigation filter state in the UI. It should be noted, though, that this technique is more complicated and increases indexing time and index size. It's also not quite as fast as the Suggester component.

One of the key components to this approach is the EdgeNGramFilterFactory component, which creates a series of tokens for each input token for all possible prefix lengths. The field type definition should apply this filter to the index analyzer only, not the query analyzer. Enhancements to the field type could include adding filters such as LowerCaseFilterFactory, TrimFilterFactory, ASCIIFoldingFilterFactory, or even a PatternReplaceFilterFactory for normalizing repetitive spaces. Furthermore, you should set omitTermFreqAndPositions=true and omitNorms=true in the field type since these index features consume a lot of space and won't be needed. The Solr Admin Analysis tool can really help with the design of the perfect field type configuration. Don't hesitate to use this tool!

A minimalist query for this approach is to simply query the n-grams field directly; since the field already contains prefixes, this just works. It's even better to have only the last word in the query search this field while the other words search a field indexed normally for keyword search. Here's an example: assuming a_name_wordedge is an n-grams based field and the user's search text box contains simple mi: http://localhost:8983/solr/mbartists/select?defType=edismax&qf=a_name&q.op=AND&q=simple a_name_wordedge:mi. The search client here inserted a_name_wordedge: before the last word. The combination of field type definition flexibility (custom filters and so on), and the ability to use features such as DisMax, custom boosting/sorting, and even highlighting, really make this approach worth exploring.
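To make that client-side rewrite concrete, here is a rough sketch of the kind of helper the search client above would use: it points the last typed word at the a_name_wordedge field and sends the same edismax request shown in the example URL. The core name, field names, and parameter choices are taken from that example and are assumptions about your own schema, not something Solr provides out of the box.

import json
import urllib.parse
import urllib.request

SOLR_SELECT = "http://localhost:8983/solr/mbartists/select"

def instant_search(user_text, rows=5):
    """Rewrite the last word to hit the edge n-gram field, then query Solr."""
    words = user_text.split()
    if not words:
        return []
    # simple mi  ->  simple a_name_wordedge:mi
    words[-1] = "a_name_wordedge:" + words[-1]
    params = {
        "defType": "edismax",
        "qf": "a_name",
        "q.op": "AND",
        "q": " ".join(words),
        "rows": rows,
        "wt": "json",
    }
    url = SOLR_SELECT + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["response"]["docs"]

for doc in instant_search("simple mi"):
    print(doc.get("a_name"))

A real client would also escape special query characters in the user input and probably add a filter query for the current navigation state, as discussed above.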
Query term completion via facet.prefix

Most people don't realize that faceting can be used to implement query term completion, but it can. This approach has the unique and valuable benefit of returning completions filtered by filter queries (such as faceted navigation state) and by query words prior to the last one being completed. This means the completion suggestions should yield matching results, which is not the case for the other techniques. However, there are limits to its scalability in terms of memory use and inappropriateness for real-time search applications.

Faceting on a tokenized field is going to use an entry in the field value cache (based on UnInvertedField) to hold all words in memory. It will use a hefty chunk of memory for many words, and it's going to take a non-trivial amount of time to build this cache on every commit during the auto-warming phase. For a data point, consider MusicBrainz's largest field: t_name (track name). It has nearly 700K words in it. It consumes nearly 100 MB of memory and it took 33 seconds to initialize on my machine. The mandatory initialization per commit makes this approach unsuitable for real-time-search applications.

Measure this for yourself. Perform a trivial query to trigger its initialization and measure how long it takes. Then search Solr's statistics page for fieldValueCache. The size is given in bytes next to memSize. This statistic is also logged quite clearly.

For this example, we have a search box searching track names and it contains the following:

michael ja

All of the words here except the last one become the main query for the term suggest. For our example, this is just michael. If there isn't anything, then we'd want to ensure that the request handler used would search for all documents. The faceted field is a_spell, and we want to sort by frequency. We also want there to be at least one occurrence, and we don't want more than five suggestions. We don't need the actual search results, either. This leaves the facet.prefix faceting parameter to make this work. This parameter filters the facet values to those starting with this value. Remember that facet values are the final result of text analysis, and therefore are probably lowercased for fields you might want to do term completion on. You'll need to pre-process the prefix value similarly, or else nothing will be found. We're going to set this to ja, the last word that the user has partially typed. Here is the URL for such a search:

http://localhost:8983/solr/mbartists/select?q=michael&df=a_spell&wt=json&omitHeader=true&indent=on&facet=on&rows=0&facet.limit=5&facet.mincount=1&facet.field=a_spell&facet.prefix=ja

When setting this up for real, we recommend creating a request handler just for term completion with many of these parameters defined there, so that they can be configured separately from your application. In this example, we're going to use Solr's JSON response format. Here is the result:

{
  "response":{"numFound":1919,"start":0,"docs":[]},
  "facet_counts":{
    "facet_queries":{},
    "facet_fields":{
      "a_spell":[
        "jackson",17,
        "james",15,
        "jason",4,
        "jay",4,
        "jacobs",2]},
    "facet_dates":{},
    "facet_ranges":{}}}

This is exactly the information needed to populate a pop-up menu of choices that the user can conveniently choose from. However, there are some issues to be aware of with this feature:

- You may want to retain the case information of what the user is typing so that it can then be re-applied to the Solr results.
- Remember that facet.prefix will probably need to be lowercased, depending on text analysis.
- If stemming text analysis is performed on the field at the time of indexing, then the user might get completion choices that are clearly wrong. Most stemmers, namely Porter-based ones, stem off the suffix to an invalid word. Consider using a minimal stemmer, if any. For stemming and other text analysis reasons, you might want to create a separate field with suitable text analysis just for this feature. In our example here, we used a_spell on purpose because spelling suggestions and term completion have the same text analysis requirements.
- If you would like to perform term completion of multiple fields, then you'll be disappointed that you can't do so directly. The easiest way is to combine several fields at index time. Alternatively, a query searching multiple fields with faceting configured for multiple fields can be performed. It would be up to you to merge the faceting results based on ordered counts.
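As a rough illustration of the client side of this approach, the sketch below issues the same facet.prefix request shown above and turns Solr's flat term/count facet list into ordered (term, count) pairs for a drop-down. The lowercasing of the prefix mirrors the caveat just mentioned; everything else (core name, the a_spell field, parameter choices) simply follows the example URL and would change with your own setup.

import json
import urllib.parse
import urllib.request

SOLR_SELECT = "http://localhost:8983/solr/mbartists/select"

def term_completions(typed_text, limit=5):
    """Return (term, count) suggestions for the last, partially typed word.

    Assumes at least one word has been typed.
    """
    *query_words, prefix = typed_text.split()
    params = {
        "q": " ".join(query_words) or "*:*",   # fall back to all documents
        "df": "a_spell",
        "wt": "json",
        "omitHeader": "true",
        "rows": 0,
        "facet": "on",
        "facet.field": "a_spell",
        "facet.limit": limit,
        "facet.mincount": 1,
        "facet.prefix": prefix.lower(),        # match the lowercased index terms
    }
    url = SOLR_SELECT + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as resp:
        flat = json.load(resp)["facet_counts"]["facet_fields"]["a_spell"]
    # Solr returns ["jackson", 17, "james", 15, ...]; pair the values up.
    return list(zip(flat[0::2], flat[1::2]))

print(term_completions("michael ja"))
# e.g. [('jackson', 17), ('james', 15), ('jason', 4), ('jay', 4), ('jacobs', 2)]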
Query term completion via the Suggester

A high-speed approach to implement term completion, called the Suggester, was introduced in Version 3 of Solr. Until Solr 4.7, the Suggester was an extension of the spellcheck component. It can still be used that way, but it now has its own search component, which is how you should use it. Similar to spellcheck, it's not necessarily as up to date as your index and it needs to be built. However, the Suggester only takes a couple of seconds or so for this usually, and you are not forced to do this per commit, unlike with faceting. The Suggester is generally very fast—a handful of milliseconds per search at most for common setups. The performance characteristics are largely determined by a configuration choice (shown later) called lookupImpl, in which we recommend WFSTLookupFactory for query term completion (but not for other suggestion types). Additionally, the Suggester uniquely includes a method of loading its dictionary from a file that optionally includes a sorting weight.

We're going to use it for MusicBrainz's artist name completion. The following is in our solrconfig.xml:

<requestHandler name="/a_term_suggest" class="solr.SearchHandler" startup="lazy">
  <lst name="defaults">
    <str name="suggest">true</str>
    <str name="suggest.dictionary">a_term_suggest</str>
    <str name="suggest.count">5</str>
  </lst>
  <arr name="components">
    <str>aTermSuggester</str>
  </arr>
</requestHandler>

<searchComponent name="aTermSuggester" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">a_term_suggest</str>
    <str name="lookupImpl">WFSTLookupFactory</str>
    <str name="field">a_spell</str>
    <!-- <float name="threshold">0.005</float> -->
    <str name="buildOnOptimize">true</str>
  </lst>
</searchComponent>

The first part of this is a request handler definition just for using the Suggester. The second part of this is an instantiation of the SuggestComponent search component. The dictionary here is loaded from the a_spell field in the main index, but if a file is desired, then you can provide the sourceLocation parameter. The document frequency threshold for suggestions is commented here because MusicBrainz has unique names that we don't want filtered out. However, in common scenarios, this threshold is advised. The Suggester needs to be built, which is the process of building the dictionary from its source into an optimized memory structure.
If you set storeDir, it will also save it such that the next time Solr starts, it will load automatically and be ready. If you try to get suggestions before it's built, there will be no results. The Suggester only takes a couple of seconds or so to build and so we recommend building it automatically on startup via a firstSearcher warming query in solrconfig.xml. If you are using Solr 5.0, then this is simplified by adding a buildOnStartup Boolean to the Suggester's configuration. To be kept up to date, it needs to be rebuilt from time to time. If commits are infrequent, you should use the buildOnCommit setting. We've chosen the buildOnOptimize setting as the dataset is optimized after it's completely indexed; and then, it's never modified. Realistically, you may need to schedule a URL fetch to trigger the build, as well as incorporate it into any bulk data loading scripts you develop.

Now, let's issue a request to the Suggester. Here's a completion for the incomplete query string sma:

http://localhost:8983/solr/mbartists/a_term_suggest?q=sma&wt=json

And here is the output, indented:

{
  "responseHeader":{
    "status":0,
    "QTime":1},
  "suggest":{"a_term_suggest":{
    "sma":{
      "numFound":5,
      "suggestions":[{
        "term":"sma",
        "weight":3,
        "payload":""},
      {
        "term":"small",
        "weight":110,
        "payload":""},
      {
        "term":"smart",
        "weight":50,
        "payload":""},
      {
        "term":"smash",
        "weight":36,
        "payload":""},
      {
        "term":"smalley",
        "weight":9,
        "payload":""}]}}}}

If the input is found, it's listed first; then suggestions are presented in weighted order. In the case of an index-based source, the weights are, by default, the document frequency of the value. For more information about the Suggester, see the Solr Reference Guide at https://cwiki.apache.org/confluence/display/solr/Suggester. You'll find information on lookupImpl alternatives and other details. However, some secrets of the Suggester are still undocumented, buried in the code. Look at the factories for more configuration options.

Query term completion via the Terms component

The Terms component is used to expose raw indexed term information, including term frequency, for an indexed field. It has a lot of options for paging into this voluminous data and filtering out terms by term frequency. The Terms component has the benefit of using no Java heap memory, and consequently, there is no initialization penalty. It's always up to date with the indexed data, like faceting but unlike the Suggester. The performance is typically good, but for high query load on large indexes, it will suffer compared to the other approaches. An interesting feature unique to this approach is a regular expression term match option. This can be used for case-insensitive matching, but it probably doesn't scale to many terms. For more information about this component, visit the Solr Reference Guide at https://cwiki.apache.org/confluence/display/solr/The+Terms+Component.
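Whichever of these term-completion backends you settle on, the client-side glue looks much the same. As a rough sketch, the helper below calls the /a_term_suggest handler configured earlier and pulls just the suggested terms out of the response shape shown above; the handler name, core name, and response structure are taken from that example and may well differ in your own deployment.

import json
import urllib.parse
import urllib.request

SUGGEST_URL = "http://localhost:8983/solr/mbartists/a_term_suggest"

def suggest_terms(prefix):
    """Return (term, weight) suggestions in the order Solr returns them
    (exact match first, then by weight), per the example response above."""
    params = {"q": prefix, "wt": "json"}
    url = SUGGEST_URL + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as resp:
        payload = json.load(resp)
    # Response shape: suggest -> <dictionary name> -> <typed prefix> -> suggestions
    entry = payload["suggest"]["a_term_suggest"][prefix]
    return [(s["term"], s["weight"]) for s in entry["suggestions"]]

print(suggest_terms("sma"))
# e.g. [('sma', 3), ('small', 110), ('smart', 50), ('smash', 36), ('smalley', 9)]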
Field-value completion via the Suggester

In this example, we'll show you how to suggest complete field values. This might be used for instant-search navigation by a document name or title, or it might be used to filter results by a field. It's particularly useful for fields that you facet on, but it will take some work to integrate into the search user experience. This can even be used to complete multiple fields at once by specifying suggest.dictionary multiple times. To complete values across many fields at once, you should consider an alternative approach than what is described here. For example, use a dedicated suggestion index of each name-value pair and use an edge n-gram technique or shingling.

We'll use the Suggester once again, but using a slightly different configuration. Using AnalyzingLookupFactory as the lookupImpl, this Suggester will be able to specify a field type for query analysis and another as the source for suggestions. Any tokenizer or filter can be used in the analysis chain (lowercase, stop words, and so on). We're going to reuse the existing textSpell field type for this example. It will take care of lowercasing the tokens and throwing out stop words. For the suggestion source field, we want to return complete field values, so a string field will be used; we can use the existing a_name_sort field for this, which is close enough. Here's the required configuration for the suggest component:

<searchComponent name="aNameSuggester" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">a_name_suggest</str>
    <str name="lookupImpl">AnalyzingLookupFactory</str>
    <str name="field">a_name_sort</str>
    <str name="buildOnOptimize">true</str>
    <str name="storeDir">a_name_suggest</str>
    <str name="suggestAnalyzerFieldType">textSpell</str>
  </lst>
</searchComponent>

And here is the request handler and component:

<requestHandler name="/a_name_suggest" class="solr.SearchHandler" startup="lazy">
  <lst name="defaults">
    <str name="suggest">true</str>
    <str name="suggest.dictionary">a_name_suggest</str>
    <str name="suggest.count">5</str>
  </lst>
  <arr name="components">
    <str>aNameSuggester</str>
  </arr>
</requestHandler>

We've set up the Suggester to build the index of suggestions after an optimize command. On a modestly powered laptop, the build time was about 5 seconds. Once the build is complete, the /a_name_suggest handler will return field values for any matching query. Here's an example that will make use of this Suggester:

http://localhost:8983/solr/mbartists/a_name_suggest?wt=json&omitHeader=true&q=The smashing,pum

Here's the response from that query:

{
  "spellcheck":{
    "suggestions":[
      "The smashing,pum",{
        "numFound":1,
        "startOffset":0,
        "endOffset":16,
        "suggestion":["Smashing Pumpkins, The"]},
      "collation","(Smashing Pumpkins, The)"]}}

As you can see, the Suggester is able to deal with the mixed case. It also ignores The (a stop word) and the , (comma) we inserted, as this is how our analysis is configured. Impressive!

It's worth pointing out that there's a lot more that can be done here, depending on your needs, of course. It's entirely possible to add synonyms, additional stop words, and different tokenizers to the analysis chain. There are other interesting lookupImpl choices. FuzzyLookupFactory can suggest completions that are similarly typed to the input query; for example, words that are similar in spelling, or just typos. AnalyzingInfixLookupFactory is a Suggester that can provide completions from matching prefixes anywhere in the field value, not just the beginning. Other ones are BlendedInfixLookupFactory and FreeTextLookupFactory. See the Solr Reference Guide for further information.

Summary

In this article we learned about the query complete/suggest feature. We saw the different ways by which we can implement this feature.
Resources for Article: Further resources on this subject: Apache Solr and Big Data – integration with MongoDB [article] Tuning Solr JVM and Container [article] Apache Solr PHP Integration [article]

Learning D3.js Mapping

Packt
08 May 2015
3 min read
What is D3.js?

D3.js (Data-Driven Documents) is a JavaScript library used for data visualization. D3.js is used to display graphical representations of information in a browser using JavaScript. Because they are, essentially, a collection of JavaScript code, graphical elements produced by D3.js can react to changes made on the client or server side. D3.js has seen major adoption as websites include more dynamic charts, graphs, infographics, and other forms of visualized data.

(For more resources related to this topic, see here.)

Why this book?

This book by the authors, Thomas Newton and Oscar Villarreal, explores the JavaScript library, D3.js, and its ability to help us create maps and amazing visualizations. You will no longer be confined to third-party tools in order to get a nice looking map. With D3.js, you can build your own maps and customize them as you please. This book will go from the basics of SVG and JavaScript to data trimming and modification with TopoJSON. Using D3.js to glue together these three key ingredients, we will create very attractive maps that will cover many common use cases for maps, such as choropleths, data overlays on maps, and interactivity.

Key features

- Dive into D3.js and apply its powerful data binding ability in order to create stunning visualizations
- Learn the key concepts of SVG, JavaScript, CSS, and the DOM in order to project images onto the browser
- Solve a wide range of problems faced while building interactive maps with this solution-based guide

Authors

Thomas Newton has 20 years of experience in the technical industry, working on everything from low-level system designs and data visualization to software design and architecture. Currently, he is creating data visualizations to solve analytical problems for clients.

Oscar Villarreal has been developing interfaces for the past 10 years, and most recently, he has been focusing on data visualization and web applications. In his spare time, he loves to write on his blog, oscarvillarreal.com.

In short

You will find a few books on D3.js, but most of them either assume intermediate-level developers who already know how to use D3.js or cover a much wider range of D3 usage, whereas this book is focused exclusively on mapping; it fully explores this core task and takes a solution-based approach. The recommendations and the wealth of knowledge that the authors share in the book are based on many years of experience and many projects delivered using D3.

What this book covers

- Learn all the tools you need to create a map using D3
- A high-level overview of Scalable Vector Graphics (SVG) presentation with explanation on how it operates and what elements it encompasses
- Exploring D3.js—producing graphics from data
- Step-by-step guide to build a map with D3
- Get you started with interactivity in your D3 map visualizations
- Most important aspects of map visualization in detail via the use of TopoJSON
- Assistance for the long-term maintenance of your D3 code base and different techniques to keep it healthy over the lifespan of your project

Summary

So far, you may have got an idea of what will be covered in the book. This book is carefully designed to allow the reader to jump between chapters based on what they are planning to get out of the book. Every chapter is full of pragmatic examples that can easily provide the foundation to more complex work. The authors have explained, step by step, how each example works.
Resources for Article: Further resources on this subject: Using Canvas and D3 [article] Interacting with your Visualization [article] Simple graphs with d3.js [article]


Why Big Data in the Financial Sector?

Packt
06 May 2015
7 min read
In this article by Rajiv Tiwari, author of the book Hadoop for Finance Essentials, we will see how big data is not just changing the data landscape of healthcare, human science, telecom, and online retail industries, but is also transforming the way financial organizations treat their massive data. (For more resources related to this topic, see here.)

As shown in the following figure, a study by McKinsey on big data use cases claims that the financial services industry is poised to gain the most from this remarkable technology.

The data in financial organizations is relatively easier to capture compared to other industries, as it is easily accessible from internal systems—transactions and customer details—as well as external systems—FX rates, legal entity data, and so on. Quite simply, the gain per byte of data is the maximum for financial services.

Where do we get the big data in finance?
The data is collected at every stage—be it onboarding of new customers, call center records, or financial transactions. The financial industry is rapidly moving online, and so it has never been easier to capture the data. There are other reasons as well, such as:
Customers no longer need to visit branches to withdraw and deposit money or make investments. They can discuss their requirements with the bank online or over the phone instead of in physical meetings. According to SNL Financial, institutions shut 2,599 branches in 2014 against 1,137 openings, a net loss of 1,462 branches that is just off 2013's record full-year total of 1,487 branches closed. The move brings total US branches down to 94,752, a decline of 1.5 percent. The trend is global and not just in the US.
Electronic channels such as debit/credit cards and mobile devices, through which customers can interact with financial organizations, have increased in the UK, as shown in the following figure. The trend is global and not just in the UK.
Mobile equipment such as computers, smartphones, telephones, or tablets makes it easier and cheaper for customers to transact, which means customers will transact more and generate more data.
Since customer profiles and transaction patterns are rapidly changing, risk models based on smaller data sets are not very accurate. We need to analyze data over longer durations and be able to write complex data algorithms without worrying about computing and data storage capabilities.
When financial organizations combine structured data with unstructured data from social media, the data analysis becomes very powerful. For example, they can get feedback on their new products or TV advertisements by analyzing Twitter, Facebook, and other social media comments.

Big data use cases in the financial sector
The financial sector is also sometimes called the BFSI sector; that is, banking, financial services, and insurance:
Banking includes retail, corporate, business, investment (including capital markets), cards, and other core banking services
Financial services include brokering, payment channels, mutual funds, asset management, and other services
Insurance covers life and general insurance
Financial organizations have been actively using big data platforms for the last few years, and their key objectives are:
Complying with regulatory requirements
Better risk analytics
Understanding customer behavior and improving services
Understanding transaction patterns and monetizing them by cross-selling products
Now I will define a few use cases within the financial services industry with real, tangible business benefits.
Data archival on HDFS
Archiving data on HDFS is one of the basic use cases for Hadoop in financial organizations and is a quick win. It is likely to provide a very high return on investment. The data is archived on Hadoop and is still available to query (although not in real time), which is far more efficient than archiving on tape and far less expensive than keeping it on databases. Some of the use cases are:
Migrate expensive and inefficient legacy mainframe data and load jobs to the Hadoop platform
Migrate expensive older transaction data from high-end expensive databases to Hadoop HDFS
Migrate unstructured legal, compliance, and onboarding documents to Hadoop HDFS

Regulatory
Financial organizations must comply with regulatory requirements. In order to meet these requirements, the use of traditional data processing platforms is becoming increasingly expensive and unsustainable. A couple of such use cases are:
Checking customer names against a sanctions blacklist is very complicated due to the same or similar names. It is even more complicated when financial organizations have different names or aliases across different systems. With Hadoop, we can apply complex fuzzy matching on name and contact information across massive data sets at a much lower cost. A minimal sketch of this kind of matching follows this list.
The BCBS 239 regulation states that financial organizations must be able to aggregate risk exposures across the whole group quickly and accurately. With Hadoop, financial organizations can consolidate and aggregate data on a single platform in the most efficient and cost-effective way.
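To make the sanctions-screening idea concrete, here is a minimal, single-machine sketch of fuzzy name matching using Python's standard difflib module. The blacklist, customer names, and similarity cutoff are made-up assumptions for illustration; in the scenario described above, the same matching logic would run inside a distributed Hadoop job over the full customer and watchlist data sets rather than in a simple loop.

import difflib
import re

# Hypothetical sanctions blacklist and customer records -- illustrative only.
blacklist = ["Ivan Petrov", "Global Trading FZE", "Acme Holdings Ltd"]
customers = ["ivan petrov jr", "ACME HOLDING LIMITED", "Jane Doe"]

def normalize(name):
    # Lowercase and strip punctuation so trivial formatting differences
    # do not defeat the comparison.
    return re.sub(r"[^a-z0-9 ]+", " ", name.lower()).strip()

normalized_blacklist = [normalize(n) for n in blacklist]

for customer in customers:
    hits = difflib.get_close_matches(normalize(customer),
                                     normalized_blacklist, n=1, cutoff=0.75)
    if hits:
        print("REVIEW: %r resembles blacklisted name %r" % (customer, hits[0]))

On a Hadoop cluster, the normalization and comparison would typically be expressed as a MapReduce, Hive, or Spark job so that it scales across nodes, but the core matching step looks much the same.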
Fraud detection
Fraud is estimated to cost the financial industry billions of US dollars per year. Financial organizations have invested in Hadoop platforms to identify fraudulent transactions by picking up unusual behavior patterns. Complex algorithms that need to be run on large volumes of transaction data to identify outliers are now possible on the Hadoop platform at a much lower cost.

Tick data
Stock market tick data is real-time data generated on a massive scale. Live data streams can be processed using real-time streaming technology on the Hadoop infrastructure for quick trading decisions, and older tick data can be used for trending and forecasting using batch Hadoop tools.

Risk management
Financial organizations must be able to measure risk exposures for each customer and effectively aggregate them across entire business divisions. They should be able to score the credit risk for each customer using internal rules. They need to build risk models with intensive calculations on the underlying massive data. All these risk management requirements have two things in common—massive data and intensive calculation. Hadoop can handle both, given its inexpensive commodity hardware and parallel execution of jobs.

Customer behavior prediction
Once the customer data has been consolidated from a variety of sources on a Hadoop platform, it is possible to analyze the data and:
Predict mortgage defaults
Predict spending for retail customers
Analyze patterns that lead to customer dissatisfaction and customers leaving

Sentiment analysis – unstructured
Sentiment analysis is one of the best use cases to test the power of unstructured data analysis using Hadoop. Here are a few use cases:
Analyze all e-mail text and call recordings from customers to determine whether they feel positive or negative about the products offered to them
Analyze Facebook and Twitter comments to make buy or sell recommendations; that is, analyze market sentiment on which sectors or organizations will be a better buy for stock investments
Analyze Facebook and Twitter comments to assess the feedback on new products

Other use cases
Big data has the potential to create new, non-traditional income streams for financial organizations. As financial organizations store all the payment details of their retailers, they know exactly where, when, and how their customers are spending money. By analyzing this information, financial organizations can develop deep insight into customer intelligence and spending patterns, which they will be able to monetize. A few such possibilities include:
Partner with a retailer to understand where the retailer's customers live, where and when they buy, what they buy, and how much they spend. This information can be used to recommend a sales strategy.
Partner with a retailer to recommend discount offers to loyalty cardholders who use their loyalty cards in the vicinity of the retailer's stores.

Summary
In this article, we learned about the use cases of Hadoop across different industry sectors and then detailed a few use cases within the financial sector.

Resources for Article: Further resources on this subject: Hive in Hadoop [article] Hadoop and MapReduce [article] Hadoop Monitoring and its aspects [article]

Introduction to Hadoop

Packt
06 May 2015
11 min read
In this article by Shiva Achari, author of the book Hadoop Essentials, you'll get an introduction to Hadoop, its uses, and its advantages. (For more resources related to this topic, see here.)

Hadoop
In big data, the most widely used system is Hadoop. Hadoop is an open source implementation of big data, which is widely accepted in the industry, and benchmarks for Hadoop are impressive and, in some cases, unmatched by other systems. Hadoop is used in the industry for large-scale, massively parallel, and distributed data processing. Hadoop is highly fault tolerant and configurable to as many levels of fault tolerance as we need, which directly determines how many copies of the data are stored across the cluster. As we have already touched upon big data systems, the architecture revolves around two major components: distributed computing and parallel processing. In Hadoop, distributed computing is handled by HDFS, and parallel processing is handled by MapReduce. In short, we can say that Hadoop is a combination of HDFS and MapReduce, as shown in the following image.

Hadoop history
Hadoop began from a project called Nutch, an open source crawler-based search engine, which processes on a distributed system. In 2003–2004, Google released the Google MapReduce and GFS papers. MapReduce was adapted on Nutch. Doug Cutting and Mike Cafarella are the creators of Hadoop. When Doug Cutting joined Yahoo!, a new project was created along similar lines to Nutch, which we call Hadoop, and Nutch remained as a separate sub-project. Then, there were different releases, and other separate sub-projects started integrating with Hadoop, which we call the Hadoop ecosystem. The following timeline depicts the history, with the milestones achieved in Hadoop:
2002.8: The Nutch project was started
2003.2: The first MapReduce library was written at Google
2003.10: The Google File System paper was published
2004.12: The Google MapReduce paper was published
2005.7: Doug Cutting reported that Nutch now uses the new MapReduce implementation
2006.2: Hadoop code moved out of Nutch into a new Lucene sub-project
2006.11: The Google Bigtable paper was published
2007.2: The first HBase code drop came from Mike Cafarella
2007.4: Yahoo! was running Hadoop on a 1000-node cluster
2008.1: Hadoop was made an Apache top-level project
2008.7: Hadoop broke the terabyte data sort benchmark
2008.11: Hadoop 0.19 was released
2011.12: Hadoop 1.0 was released
2012.10: Hadoop 2.0 was alpha released
2013.10: Hadoop 2.2.0 was released
2014.10: Hadoop 2.6.0 was released

Advantages of Hadoop
Hadoop has a lot of advantages, and some of them are as follows:
Low cost—runs on commodity hardware: Hadoop can run on average-performing commodity hardware and doesn't require a high-performance system, which helps control cost and achieve scalability and performance. Adding or removing nodes from the cluster is simple, as and when we require. The cost per terabyte is lower for storage and processing in Hadoop.
Storage flexibility: Hadoop can store data in raw format in a distributed environment. Hadoop can process unstructured data and semi-structured data better than most of the available technologies. Hadoop gives full flexibility to process the data, and we will not have any loss of data.
Open source community: Hadoop is open source and supported by many contributors, with a growing network of developers worldwide.
Many organizations such as Yahoo!, Facebook, Hortonworks, and others have contributed immensely toward the progress of Hadoop and other related sub-projects.
Fault tolerant: Hadoop is massively scalable and fault tolerant. Hadoop is reliable in terms of data availability; even if some nodes go down, Hadoop can recover the data. Hadoop's architecture assumes that nodes can go down and that the system should still be able to process the data.
Complex data analytics: With the emergence of big data, data science has also grown leaps and bounds, and we have complex and computation-intensive algorithms for data analysis. Hadoop can run such algorithms at very large scale and process them faster.

Uses of Hadoop
Some examples of use cases where Hadoop is used are as follows:
Searching/text mining
Log processing
Recommendation systems
Business intelligence/data warehousing
Video and image analysis
Archiving
Graph creation and analysis
Pattern recognition
Risk assessment
Sentiment analysis

Hadoop ecosystem
A Hadoop cluster can consist of thousands of nodes, and it is complex and difficult to manage manually; hence, there are components that assist in the configuration, maintenance, and management of the whole Hadoop system. In this article, we will touch base upon the following components:

Layer: Utility/Tool name
Distributed filesystem: Apache HDFS
Distributed programming: Apache MapReduce, Apache Hive, Apache Pig, Apache Spark
NoSQL databases: Apache HBase
Data ingestion: Apache Flume, Apache Sqoop, Apache Storm
Service programming: Apache ZooKeeper
Scheduling: Apache Oozie
Machine learning: Apache Mahout
System deployment: Apache Ambari

All the components above are helpful in managing Hadoop tasks and jobs.

Apache Hadoop
The open source Hadoop is maintained by the Apache Software Foundation. The official website for Apache Hadoop is http://hadoop.apache.org/, where the packages and other details are described elaborately. The current Apache Hadoop project (version 2.6) includes the following modules:
Hadoop Common: the common utilities that support other Hadoop modules
Hadoop Distributed File System (HDFS): a distributed filesystem that provides high-throughput access to application data
Hadoop YARN: a framework for job scheduling and cluster resource management
Hadoop MapReduce: a YARN-based system for parallel processing of large datasets (a minimal example follows this list)
Apache Hadoop can be deployed in the following three modes:
Standalone: used for simple analysis or debugging
Pseudo-distributed: helps you simulate a multi-node installation on a single node. In pseudo-distributed mode, each of the component processes runs in a separate JVM. Instead of installing Hadoop on different servers, you can simulate it on a single server.
Distributed: a cluster with multiple worker nodes, in tens, hundreds, or thousands of nodes.
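As a quick, hedged illustration of the MapReduce module listed above, here is the classic word-count job written for Hadoop Streaming, which lets you supply the mapper and reducer as ordinary scripts. The script names, input and output paths, and the location of the streaming jar are assumptions that vary between installations and distributions.

#!/usr/bin/env python
# mapper.py -- reads lines from stdin and emits "word<TAB>1" per word.
import sys

for line in sys.stdin:
    for word in line.strip().split():
        sys.stdout.write("%s\t1\n" % word.lower())

#!/usr/bin/env python
# reducer.py -- sums counts per word; Hadoop Streaming delivers the
# mapper output sorted by key, so identical words arrive consecutively.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t")
    if word != current_word:
        if current_word is not None:
            sys.stdout.write("%s\t%d\n" % (current_word, current_count))
        current_word, current_count = word, 0
    current_count += int(count)
if current_word is not None:
    sys.stdout.write("%s\t%d\n" % (current_word, current_count))

# A typical (installation-specific) way to launch the job:
# hadoop jar /path/to/hadoop-streaming.jar \
#     -input /data/input -output /data/wordcount \
#     -mapper mapper.py -reducer reducer.py \
#     -file mapper.py -file reducer.py

The same two scripts can also be tested locally without Hadoop by piping a file through them (cat input.txt | python mapper.py | sort | python reducer.py), which is a common way to debug streaming jobs before submitting them to a cluster.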
In the Hadoop ecosystem, along with Hadoop itself, there are many utility components that are separate Apache projects, such as Hive, Pig, HBase, Sqoop, Flume, ZooKeeper, Mahout, and so on, which have to be configured separately. We have to be careful with the compatibility of sub-projects with Hadoop versions, as not all versions are inter-compatible. Apache Hadoop is an open source project, which has the benefit that the source code can be inspected, updated, and improved through community contributions. One downside of being an open source project is that companies usually offer support for their own products, not for an open source project. Customers prefer supported products and therefore adopt Hadoop distributions backed by vendors. Let's look at some of the Hadoop distributions available.

Hadoop distributions
Hadoop distributions are supported by the companies managing the distribution, and some distributions also have license costs. Companies such as Cloudera, Hortonworks, Amazon, MapR, and Pivotal have their respective Hadoop distributions in the market, offering Hadoop with the required sub-packages and projects, which are compatible and come with commercial support. This greatly reduces effort, not just for operations, but also for deployment, monitoring, and the tools and utilities needed for easy and faster development of the product or project. For managing the Hadoop cluster, Hadoop distributions provide graphical web UI tooling for the deployment, administration, and monitoring of Hadoop clusters, which can be used to set up, manage, and monitor complex clusters, reducing a lot of effort and time. Some Hadoop distributions that are available are as follows:
Cloudera: According to The Forrester Wave™: Big Data Hadoop Solutions, Q1 2014, this is the most widely used Hadoop distribution with the biggest customer base, as it provides good support and has some good utility components, such as Cloudera Manager, which can create, manage, and maintain a cluster and manage job processing, and Impala, which was developed and contributed by Cloudera and has real-time processing capability.
Hortonworks: Hortonworks' strategy is to drive all innovation through the open source community and create an ecosystem of partners that accelerates Hadoop adoption among enterprises. It uses the open source Hadoop project and is a major contributor to Hadoop enhancement in Apache Hadoop. Ambari was developed and contributed to Apache by Hortonworks. Hortonworks offers a very good, easy-to-use sandbox for getting started. Hortonworks contributed changes that made Apache Hadoop run natively on Microsoft Windows platforms, including Windows Server and Microsoft Azure.
MapR: The MapR distribution of Hadoop uses different concepts than plain open source Hadoop and its competitors, especially support for a network file system (NFS) instead of HDFS, for better performance and ease of use. With NFS, native Unix commands can be used instead of Hadoop commands. MapR has high-availability features such as snapshots, mirroring, and stateful failover.
Amazon Elastic MapReduce (EMR): AWS's Elastic MapReduce (EMR) leverages its comprehensive cloud services, such as Amazon EC2 for compute, Amazon S3 for storage, and other services, to offer a very strong Hadoop solution for customers who wish to implement Hadoop in the cloud. EMR is most advisable for infrequent big data processing, where it might save you a lot of money.

Pillars of Hadoop
Hadoop is designed to be highly scalable, distributed, massively parallel, fault tolerant, and flexible, and the key aspects of the design are HDFS, MapReduce, and YARN. HDFS and MapReduce can perform very large-scale batch processing at a much faster rate. Thanks to contributions from various organizations and institutions, the Hadoop architecture has undergone a lot of improvements, and one of them is YARN. YARN has overcome some limitations of Hadoop and allows Hadoop to integrate with different applications and environments easily, especially in streaming and real-time analysis. Two such examples that we are going to discuss are Storm and Spark; both are well known for streaming and real-time analysis, and both can integrate with Hadoop via YARN.

Data access components
MapReduce is a very powerful framework, but it has a steep learning curve when it comes to mastering and optimizing a MapReduce job. For analyzing data in the MapReduce paradigm, a lot of our time will be spent in coding. In big data, users come from different backgrounds such as programming, scripting, EDW, DBA, analytics, and so on, and for such users there are abstraction layers on top of MapReduce. Hive and Pig are two such layers: Hive has a SQL-like query interface, and Pig has the Pig Latin procedural language interface. Analyzing data with such layers becomes much easier.

Data storage component
HBase is a column store-based NoSQL database solution. HBase's data model is very similar to Google's Bigtable framework. HBase can efficiently process random and real-time access in a large volume of data, usually millions or billions of rows. HBase's important advantage is that it supports updates on larger tables and faster lookups. The HBase data store supports linear and modular scaling. HBase stores data as a multidimensional map and is distributed. HBase operations are all MapReduce tasks that run in a parallel manner.

Data ingestion in Hadoop
In Hadoop, storage is rarely an issue, but managing the data is the driving force around which different solutions can be designed with different systems; hence, managing data becomes extremely critical. A more manageable system can help a lot in terms of scalability, reusability, and even performance. In the Hadoop ecosystem, we have two widely used tools, Sqoop and Flume; both help manage the data and can import and export data efficiently, with good performance. Sqoop is usually used for data integration with RDBMS systems, and Flume usually performs better with streaming log data.

Streaming and real-time analysis
Storm and Spark are two new, fascinating components that can run on YARN and have some amazing capabilities in terms of processing streaming data and real-time analysis. Both of these are used in scenarios where we have heavy, continuous streaming data that has to be processed in, or near, real time. The traffic-analyzer example discussed earlier is a good illustration of the use cases for Storm and Spark.

Summary
In this article, we explored a bit of Hadoop's history before moving on to the advantages and uses of Hadoop. Hadoop systems are complex to monitor and manage, and we have separate sub-project frameworks, tools, and utilities that integrate with Hadoop and help in better management of tasks; together, these are called the Hadoop ecosystem.

Resources for Article: Further resources on this subject: Hive in Hadoop [article] Hadoop and MapReduce [article] Evolution of Hadoop [article]


Introducing PostgreSQL 9

Packt
06 May 2015
23 min read
In this article by Simon Riggs, Gianni Ciolli, Hannu Krosing, and Gabriele Bartolini, the authors of PostgreSQL 9 Administration Cookbook - Second Edition, we will introduce PostgreSQL 9. PostgreSQL is a feature-rich, general-purpose database management system. It's a complex piece of software, but every journey begins with the first step. (For more resources related to this topic, see here.)

We'll start with your first connection. Many people fall at the first hurdle, so we'll try not to skip that too swiftly. We'll quickly move on to enabling remote users, and from there, we will move to access through GUI administration tools. We will also introduce the psql query tool.

PostgreSQL is an advanced SQL database server, available on a wide range of platforms. One of the clearest benefits of PostgreSQL is that it is open source, meaning that you have a very permissive license to install, use, and distribute PostgreSQL without paying anyone fees or royalties. On top of that, PostgreSQL is well known as a database that stays up for long periods and requires little or no maintenance in most cases. Overall, PostgreSQL provides a very low total cost of ownership.

PostgreSQL is also noted for its huge range of advanced features, developed over the course of more than 20 years of continuous development and enhancement. Originally developed by the Database Research Group at the University of California, Berkeley, PostgreSQL is now developed and maintained by a huge army of developers and contributors. Many of those contributors have full-time jobs related to PostgreSQL, working as designers, developers, database administrators, and trainers. Some, but not many, of those contributors work for companies that specialize in support for PostgreSQL, like we (the authors) do. No single company owns PostgreSQL, nor are you required (or even encouraged) to register your usage.

PostgreSQL has the following main features:
Excellent SQL standards compliance, up to SQL:2011
Client-server architecture
Highly concurrent design, where readers and writers don't block each other
Highly configurable and extensible for many types of applications
Excellent scalability and performance, with extensive tuning features
Support for many kinds of data models: relational, document (JSON and XML), and key/value

What makes PostgreSQL different?
The PostgreSQL project focuses on the following objectives:
Robust, high-quality software with maintainable, well-commented code
Low-maintenance administration for both embedded and enterprise use
Standards-compliant SQL, interoperability, and compatibility
Performance, security, and high availability

What surprises many people is that PostgreSQL's feature set is more comparable with Oracle or SQL Server than it is with MySQL. The only connection between MySQL and PostgreSQL is that these two projects are open source; apart from that, the features and philosophies are almost totally different. One of the key features of Oracle, since Oracle 7, has been snapshot isolation, where readers don't block writers and writers don't block readers. You may be surprised to learn that PostgreSQL was the first database to be designed with this feature, and it offers a complete implementation. In PostgreSQL, this feature is called Multiversion Concurrency Control (MVCC).

PostgreSQL is a general-purpose database management system. You define the database that you would like to manage with it. PostgreSQL offers you many ways to work. You can use a normalized database model, augmented with features such as arrays and record subtypes, or use a fully dynamic schema with the help of JSONB and an extension named hstore. PostgreSQL also allows you to create your own server-side functions in any of a dozen different languages. PostgreSQL is highly extensible, so you can add your own data types, operators, index types, and functional languages. You can even override different parts of the system using plugins to alter the execution of commands or add a new optimizer. All of these features offer a huge range of implementation options to software architects. There are many ways out of trouble when building applications and maintaining them over long periods of time.

In the early days, when PostgreSQL was still a research database, the focus was solely on the cool new features. Over the last 15 years, enormous amounts of code have been rewritten and improved, giving us one of the most stable and largest software servers available for operational use. You may have read that PostgreSQL was, or is, slower than My Favorite DBMS, whichever that is. It's been a personal mission of mine over the last ten years to improve server performance, and the team has been successful in making the server highly performant and very scalable. That gives PostgreSQL enormous headroom for growth.

Who is using PostgreSQL?
Prominent users include Apple, BASF, Genentech, Heroku, IMDB.com, Skype, McAfee, NTT, the UK Met Office, and the U.S. National Weather Service. Five years ago, PostgreSQL received well in excess of 1 million downloads per year, according to data submitted to the European Commission, which concluded, "PostgreSQL is considered by many database users to be a credible alternative."

We need to mention one last thing. When PostgreSQL was first developed, it was named Postgres, and therefore many aspects of the project still refer to the word "postgres"; for example, the default database is named postgres, and the software is frequently installed using the postgres user ID. As a result, people shorten the name PostgreSQL to simply Postgres, and in many cases use the two names interchangeably. PostgreSQL is pronounced as "post-grez-q-l". Postgres is pronounced as "post-grez." Some people get confused and refer to "Postgre", which is hard to say and likely to confuse people. Two names are enough, so please don't use a third name!

The following sections explain the key areas in more detail.

Robustness
PostgreSQL is robust, high-quality software, supported by automated testing for both features and concurrency. By default, the database provides strong disk-write guarantees, and the developers take the risk of data loss very seriously in everything they do. Options to trade robustness for performance exist, though they are not enabled by default. All actions on the database are performed within transactions, protected by a transaction log that will perform automatic crash recovery in case of software failure. Databases may be optionally created with data block checksums to help diagnose hardware faults. Multiple backup mechanisms exist, with full and detailed point-in-time recovery, in case of the need for detailed recovery. A variety of diagnostic tools are available. Database replication is supported natively. Synchronous replication can provide greater than "5 nines" (99.999 percent) availability and data protection, if properly configured and managed.

Security
Access to PostgreSQL is controllable via host-based access rules. Authentication is flexible and pluggable, allowing easy integration with any external security architecture. Full SSL-encrypted access is supported natively. A full-featured cryptographic function library is available for database users. PostgreSQL provides role-based access privileges to access data, by command type. Functions may execute with the permissions of the definer, while views may be defined with security barriers to ensure that security is enforced ahead of other processing. All aspects of PostgreSQL are assessed by an active security team, while known exploits are categorized and reported at http://www.postgresql.org/support/security/.

Ease of use
Clear, full, and accurate documentation exists as a result of a development process where doc changes are required. Hundreds of small changes occur with each release that smooth off any rough edges of usage, supplied directly by knowledgeable users. PostgreSQL works in the same way on small or large systems and across operating systems. Client access and drivers exist for every language and environment, so there is no restriction on what type of development environment is chosen now, or in the future. The SQL Standard is followed very closely; there is no weird behavior, such as silent truncation of data. Text data is supported via a single data type that allows storage of anything from 1 byte to 1 gigabyte. This storage is optimized in multiple ways, so 1 byte is stored efficiently, and much larger values are automatically managed and compressed. PostgreSQL has a clear policy to minimize the number of configuration parameters, and with each release, we work out ways to auto-tune settings.

Extensibility
PostgreSQL is designed to be highly extensible. Database extensions can be loaded simply and easily using CREATE EXTENSION, which automates version checks, dependencies, and other aspects of configuration. PostgreSQL supports user-defined data types, operators, indexes, functions, and languages. Many extensions are available for PostgreSQL, including the PostGIS extension, which provides world-class Geographical Information System (GIS) features.

Performance and concurrency
PostgreSQL 9.4 can achieve more than 300,000 reads per second on a 32-CPU server, and it benchmarks at more than 20,000 write transactions per second with full durability. PostgreSQL has an advanced optimizer that considers a variety of join types, utilizing user data statistics to guide its choices. PostgreSQL provides MVCC, which enables readers and writers to avoid blocking each other. Taken together, the performance features of PostgreSQL allow a mixed workload of transactional systems and complex search and analytical tasks. This is important because it means we don't always need to unload our data from production systems and reload it into analytical data stores just to execute a few ad hoc queries. PostgreSQL's capabilities make it the database of choice for new systems, as well as the right long-term choice in almost every case.

Scalability
PostgreSQL 9.4 scales well on a single node up to 32 CPUs. PostgreSQL scales well up to hundreds of active sessions, and up to thousands of connected sessions when using a session pool. Further scalability is achieved in each annual release. PostgreSQL provides multinode read scalability using the Hot Standby feature. Multinode write scalability is under active development; the starting point for this is Bi-Directional Replication.

SQL and NoSQL
PostgreSQL follows the SQL Standard very closely.
SQL itself does not force any particular type of model to be used, so PostgreSQL can easily be used for many types of models at the same time, in the same database. PostgreSQL supports the normal SQL language statements. With PostgreSQL acting as a relational database, we can utilize any level of denormalization, from fully normalized Third Normal Form models to denormalized star schema models. PostgreSQL extends the relational model to provide arrays, row types, and range types. A document-centric database is also possible using PostgreSQL's text, XML, and binary JSON (JSONB) data types, supported by indexes optimized for documents and by full-text search capabilities. Key/value stores are supported using the hstore extension.

Popularity
When MySQL was taken over some years back, it was agreed in the EU monopoly investigation that followed that PostgreSQL was a viable competitor. That has certainly proved true, with the PostgreSQL user base expanding consistently for more than a decade. Various polls have indicated that PostgreSQL is the favorite database for building new, enterprise-class applications. The PostgreSQL feature set attracts serious users who have serious applications. Financial services companies may be PostgreSQL's largest user group, though governments, telecommunication companies, and many other segments are strong users as well. This popularity extends across the world; Japan, Ecuador, Argentina, and Russia have very large user groups, and so do the USA, Europe, and Australasia. Amazon Web Services' chief technology officer, Dr. Werner Vogels, described PostgreSQL as "an amazing database", going on to say that "PostgreSQL has become the preferred open source relational database for many enterprise developers and start-ups, powering leading geospatial and mobile applications".

Commercial support
Many people have commented that strong commercial support is what enterprises need before they can invest in open source technology. Strong support is available worldwide from a number of companies. 2ndQuadrant provides commercial support for open source PostgreSQL, offering 24 x 7 support in English and Spanish with bug-fix resolution times. EnterpriseDB provides commercial support for PostgreSQL as well as their main product, which is a variant of Postgres that offers enhanced Oracle compatibility. Many other companies provide strong and knowledgeable support to specific geographic regions, vertical markets, and specialized technology stacks. PostgreSQL is also available as hosted or cloud solutions from a variety of companies, since it runs very well in cloud environments. A full list of companies is kept up to date at http://www.postgresql.org/support/professional_support/.

Research and development funding
PostgreSQL was originally developed as a research project at the University of California, Berkeley in the late 1980s and early 1990s. Further work was carried out by volunteers until the late 1990s. Then, the first professional developer became involved. Over time, more and more companies and research groups became involved, supporting many professional contributors. Further funding for research and development was provided by the NSF. The project also received funding from the EU FP7 Programme in the form of the 4CaaST project for cloud computing and the AXLE project for scalable data analytics. AXLE deserves a special mention because it is a 3-year project aimed at enhancing PostgreSQL's business intelligence capabilities, specifically for very large databases. The project covers security, privacy, integration with data mining, and visualization tools and interfaces for new hardware. Further details are available at http://www.axleproject.eu. Other funding for PostgreSQL development comes from users who directly sponsor features and from companies selling products and services based around PostgreSQL.

Monitoring
Databases are not isolated entities. They live on computer hardware using CPUs, RAM, and disk subsystems. Users access databases using networks. Depending on the setup, databases themselves may need network resources to function in any of the following ways: performing some authentication checks when users log in, using disks that are mounted over the network (not generally recommended), or making remote function calls to other databases. This means that monitoring only the database is not enough. As a minimum, one should also monitor everything directly involved in using the database. This means knowing the following:
Is the database host available? Does it accept connections?
How much of the network bandwidth is in use? Have there been network interruptions and dropped connections?
Is there enough RAM available for the most common tasks? How much of it is left?
Is there enough disk space available? When will it run out of disk space?
Is the disk subsystem keeping up? How much more load can it take?
Can the CPU keep up with the load? How many spare idle cycles do the CPUs have?
Are other network services the database access depends on (if any) available? For example, if you use Kerberos for authentication, you need to monitor it as well.
How many context switches are happening when the database is running?
For most of these things, you are interested in history; that is, how have things evolved? Was everything mostly the same yesterday or last week? When did the disk usage start changing rapidly? For any larger installation, you probably have something already in place to monitor the health of your hosts and network. The two aspects of monitoring are collecting historical data to see how things have evolved and getting alerts when things go seriously wrong.

Tools based on the Round Robin Database Tool (RRDtool), such as Cacti and Munin, are quite popular for collecting historical information on all aspects of the servers and presenting this information in an easy-to-follow graphical form. Seeing several statistics on the same timescale can really help when trying to figure out why the system is behaving the way it is. Another popular open source solution is Ganglia, a distributed monitoring solution particularly suitable for environments with several servers and in multiple locations.

Another aspect of monitoring is getting alerts when something goes really wrong and needs (immediate) attention. For alerting, one of the most widely used tools is Nagios, with its fork, Icinga, being an emerging solution. The aforementioned trending tools can integrate with Nagios. However, if you need a solution for both the alerting and trending aspects of a monitoring tool, you might want to look into Zabbix. Then, of course, there is the Simple Network Management Protocol (SNMP), which is supported by a wide array of commercial monitoring solutions. Basic support for monitoring PostgreSQL through SNMP is found in pgsnmpd. This project does not seem very active, though. You can find more information about pgsnmpd and download it from http://pgsnmpd.projects.postgresql.org/.
Providing PostgreSQL information to monitoring tools
Historical monitoring information is most useful when all of it is available from the same place and on the same timescale. Most monitoring systems are designed for generic purposes, while allowing application and system developers to integrate their specific checks with the monitoring infrastructure. This is possible through a plugin architecture. Adding new kinds of data inputs to them means installing a plugin. Sometimes, you may need to write or develop this plugin, but writing a plugin for something such as Cacti is easy: you just have to write a script that outputs the monitored values in a simple text format (a minimal sketch of such a script appears at the end of this section). In the most common scenarios, the monitoring system is centralized and data is collected directly (and remotely) by the system itself or through distributed components that are responsible for sending the observed metrics back to the main node.

As far as PostgreSQL is concerned, some useful things to include in graphs are the number of connections, disk usage, number of queries, number of WAL files, most numbers from pg_stat_user_tables and pg_stat_user_indexes, and so on. An example Cacti dashboard of this kind includes data for CPU, disk, and network usage; the pgbouncer connection pooler; and the number of PostgreSQL client connections, all nicely correlated on the same timescale.

One Swiss Army knife script, which can be used from both Cacti and Nagios/Icinga, is check_postgres. It is available at http://bucardo.org/wiki/Check_postgres. It has ready-made reporting actions for a large array of things worth monitoring in PostgreSQL. For Munin, there are some PostgreSQL plugins available in the Munin plugin repository at https://github.com/munin-monitoring/contrib/tree/master/plugins/postgresql. A typical Munin graph of this kind shows PostgreSQL buffer cache hits for a specific database, with cache hits dominating reads from the disk.

Finding more information about generic monitoring tools
Setting up the tools themselves is a larger topic. In fact, each of these tools has more than one book written about it. The basic setup information and the tools themselves can be found at the following URLs:
RRDtool: http://www.mrtg.org/rrdtool/
Cacti: http://www.cacti.net/
Ganglia: http://ganglia.sourceforge.net/
Icinga: http://www.icinga.org
Munin: http://munin-monitoring.org/
Nagios: http://www.nagios.org/
Zabbix: http://www.zabbix.org/

Real-time viewing using pgAdmin
You can also use pgAdmin to get a quick view of what is going on in the database. For better control, you need to install the adminpack extension in the destination database, by issuing this command:

CREATE EXTENSION adminpack;

This extension is part of the additionally supplied modules of PostgreSQL (also known as contrib). It provides several administration functions that pgAdmin (and other tools) can use in order to manage, control, and monitor a Postgres server from a remote location. Once you have installed adminpack, connect to the database and then go to Tools | Server Status. This will open a window reporting locks and running transactions.
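Here is a minimal sketch of the kind of plugin-style script described above: it connects with psycopg2, prints a few metrics in a simple name:value text format, and could be wired into Cacti, Munin, or a similar tool. The connection string, database name, and choice of metrics are assumptions for illustration only.

#!/usr/bin/env python
# pg_metrics.py -- print a few PostgreSQL metrics as "name:value" lines.
import psycopg2

# Hypothetical connection settings; adjust for your environment.
conn = psycopg2.connect("dbname=postgres user=postgres host=localhost")
cur = conn.cursor()

# Total client connections across all databases.
cur.execute("SELECT count(*) FROM pg_stat_activity")
print("connections:%d" % cur.fetchone()[0])

# Commits, rollbacks, and on-disk size for one (hypothetical) database.
cur.execute("""
    SELECT xact_commit, xact_rollback, pg_database_size(datname)
    FROM pg_stat_database
    WHERE datname = 'postgres'
""")
commits, rollbacks, size_bytes = cur.fetchone()
print("xact_commit:%d" % commits)
print("xact_rollback:%d" % rollbacks)
print("database_size_bytes:%d" % size_bytes)

cur.close()
conn.close()

Trending tools generally call such a script on a schedule and graph the values over time; the same queries can also serve as the basis for Nagios-style alert checks.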
Loading data from flat files
Loading data into your database is one of the most important tasks. You need to do this accurately and quickly. Here's how.

Getting ready
You'll need a copy of pgloader, which is available at http://github.com/dimitri/pgloader. At the time of writing this article, the current stable version is 3.1.0. The 3.x series is a major rewrite, with many additional features, and the 2.x series is now considered obsolete.

How to do it…
PostgreSQL includes a command named COPY that provides the basic data load/unload mechanism. The COPY command doesn't do enough when loading data, so let's skip the basic command and go straight to pgloader (a minimal client-side COPY sketch appears at the end of this recipe). To load data, we need to understand our requirements, so let's break this down into a step-by-step process, as follows:
1. Identify the data files and where they are located. Make sure that pgloader is installed at the location of the files.
2. Identify the table into which you are loading, ensure that you have the permissions to load, and check the available space.
3. Work out the file type (fixed, text, or CSV) and check the encoding.
4. Specify the mapping between columns in the file and columns on the table being loaded. Make sure you know which columns in the file are not needed—pgloader allows you to include only the columns you want.
5. Identify any columns in the table for which you don't have data. Do you need them to have a default value on the table, or does pgloader need to generate values for those columns through functions or constants?
6. Specify any transformations that need to take place. The most common issue is date formats, though there may possibly be other issues.
7. Write the pgloader script.
8. pgloader will create a log file to record whether the load has succeeded or failed, and another file to store rejected rows. You need a directory with sufficient disk space if you expect them to be large. Their size is roughly proportional to the number of failing rows.
9. Finally, consider what settings you need for performance options. This is definitely last, as fiddling with things earlier can lead to confusion when you're still making the load work correctly.
You must use a script to execute pgloader. This is not a restriction; actually, it is more like best practice, because it makes it much easier to iterate towards something that works. Loads never work the first time, except in the movies!
Let's look at a typical example from pgloader's documentation—the example.load file:

LOAD CSV
   FROM 'GeoLiteCity-Blocks.csv' WITH ENCODING iso-646-us
        HAVING FIELDS
        (
           startIpNum, endIpNum, locId
        )
   INTO postgresql://user@localhost:54393/dbname?geolite.blocks
        TARGET COLUMNS
        (
           iprange ip4r using (ip-range startIpNum endIpNum),
           locId
        )
   WITH truncate,
        skip header = 2,
        fields optionally enclosed by '"',
        fields escaped by backslash-quote,
        fields terminated by '\t'
    SET work_mem to '32 MB', maintenance_work_mem to '64 MB';

We can use the load script like this:

pgloader --summary summary.log example.load

How it works…
pgloader copes gracefully with errors. The COPY command loads all rows in a single transaction, so only a single error is enough to abort the load. pgloader breaks down an input file into reasonably sized chunks and loads them piece by piece. If some rows in a chunk cause errors, then pgloader will split the chunk iteratively until it loads all the good rows and skips all the bad rows, which are then saved in a separate "rejects" file for later inspection. This behavior is very convenient if you have large data files with a small percentage of bad rows; for instance, you can edit the rejects, fix them, and finally load them with another pgloader run.
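For cases where the input file is already clean and you just want the basic COPY path from a client application, here is a minimal sketch using psycopg2's copy_expert, which streams a local CSV file through COPY ... FROM STDIN; the table name, column layout, file name, and connection string are assumptions for illustration.

import psycopg2

# Hypothetical connection string, table, and CSV file.
conn = psycopg2.connect("dbname=dbname user=user host=localhost")
cur = conn.cursor()

with open("blocks.csv") as f:
    # FORMAT csv handles quoting; HEADER skips the first line.
    cur.copy_expert(
        "COPY geolite_blocks (start_ip, end_ip, loc_id) "
        "FROM STDIN WITH (FORMAT csv, HEADER true)",
        f,
    )

conn.commit()
cur.close()
conn.close()

Unlike pgloader, this loads everything in a single transaction, so one bad row aborts the whole load, which is exactly the limitation the recipe above works around.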
Versions 2.x of pgloader were written in Python and connected to PostgreSQL through the standard Python client interface. Version 3.x is written in Common Lisp. Yes, pgloader is less efficient than loading data files using a COPY command, but running a COPY command has many more restrictions: the file has to be in the right place on the server, it has to be in the right format, and it must be unlikely to throw errors on loading. pgloader has additional overhead, but it also has the ability to load data using multiple parallel threads, so it can be faster to use as well. pgloader's ability to call out to reformat functions is often essential; straight COPY is just too simple. pgloader also allows loading from fixed-width files, which COPY does not.

There's more…
If you need to reload the table completely from scratch, then specify the WITH TRUNCATE clause in the pgloader script. There are also options to specify SQL to be executed before and after loading the data. For instance, you may have a script that creates the empty tables before the load, or you can add constraints after it, or both.
After loading, if we have load errors, then there will be some junk loaded into the PostgreSQL tables. It is not junk that you can see, or that gives any semantic errors, but think of it more like fragmentation. You should think about whether you need to add a VACUUM command after the data load, though this will possibly make the load take much longer.
We need to be careful to avoid loading data twice. The only easy way of doing that is to make sure that there is at least one unique index defined on every table that you load. The load should then fail very quickly.
String handling can often be difficult, because of the presence of formatting or nonprintable characters. The default setting for PostgreSQL is to have a parameter named standard_conforming_strings set to off, which means that backslashes will be assumed to be escape characters. Put another way, by default, the \n string means line feed, which can cause data to appear truncated. You'll need to turn standard_conforming_strings to on, or you'll need to specify an escape character in the load-parameter file.
If you are reloading data that has been unloaded from PostgreSQL, then you may want to use the pg_restore utility instead. The pg_restore utility has an option to reload data in parallel, -j number_of_threads, though this is only possible if the dump was produced using the custom pg_dump format. This can be useful for reloading dumps, though it lacks almost all of the other pgloader features discussed here.
If you need to use rows from a read-only text file that does not have errors, and you are using version 9.1 or later of PostgreSQL, then you may consider using the file_fdw contrib module. The short story is that it lets you create a "virtual" table that will parse the text file every time it is scanned. This is different from filling a table once and for all, either with COPY or pgloader; therefore, it covers a different use case. For example, think about an external data source that is maintained by a third party and needs to be shared across different databases.
You may wish to send an e-mail to Dimitri Fontaine, the current author and maintainer of most of pgloader. He always loves to receive e-mails from users.

Summary
PostgreSQL provides a lot of features, which make it the most advanced open source database.
Resources for Article: Further resources on this subject: Getting Started with PostgreSQL [article] Installing PostgreSQL [article] PostgreSQL – New Features [article]