
How-To Tutorials - Data


Working with axes (Should know)

Packt
30 Oct 2013
4 min read
Getting ready

We start with the same boilerplate that we used when creating basic charts.

How to do it...

The following code creates some sample data that grows exponentially. We then use the transform and tickSize settings on the Y axis to adjust how our data is displayed:

    ...
    <script>
    var data = [], i;
    for (i = 1; i <= 50; i++) {
        data.push([i, Math.exp(i / 10, 2)]);
    }
    $('#sampleChart').plot(
        [ data ],
        {
            yaxis: {
                transform: function (v) {
                    return v == 0 ? v : Math.log(v);
                },
                tickSize: 50
            }
        }
    );
    </script>
    ...

Flot draws a chart with a logarithmic Y axis, so that our exponential data is easier to read.

Next, we use Flot's ability to display multiple axes on the same chart as follows:

    ...
    var sine = [];
    for (i = 0; i < Math.PI * 2; i += 0.1) {
        sine.push([i, Math.sin(i)]);
    }
    var cosine = [];
    for (i = 0; i < Math.PI * 2; i += 0.1) {
        cosine.push([i, Math.cos(i) * 20]);
    }
    $('#sampleChart').plot(
        [
            {label: 'sine', data: sine},
            {
                label: 'cosine',
                data: cosine,
                yaxis: 2
            }
        ],
        {
            yaxes: [
                {},
                { position: 'right' }
            ]
        }
    );
    ...

Flot draws the two series overlapping each other. The Y axis for the sine series is drawn on the left by default, and the Y axis for the cosine series is drawn on the right as specified.

How it works...

The transform setting expects a function that takes a value, which is the y coordinate of our data, and returns a transformed value. In this case, we calculate the logarithm of our original data value so that our exponential data is displayed on a linear scale. We also use the tickSize setting to ensure that our labels do not overlap after the axis has been transformed.

The yaxis setting under the series object is a number that specifies which axis the series should be associated with. When we specify the number 2, Flot automatically draws a second axis on the chart. We then use the yaxes setting to specify that the second axis should be positioned on the right of the chart. In this case, the sine data ranges from -1.0 to 1.0, whereas the cosine data ranges from -20 to 20. The cosine axis is drawn on the right and is independent of the sine axis.

There's more...

Flot doesn't have a built-in ability to interact with axes, but it does give you all the information you need to construct a solution.

Making axes interactive

Here, we use Flot's getAxes method to add interactivity to our axes as follows:

    ...
    var showFahrenheit = false,
        temperatureFormatter = function (val, axis) {
            if (showFahrenheit) {
                val = val * 9 / 5 + 32;
            }
            return val.toFixed(1);
        },
        drawPlot = function () {
            var plot = $.plot(
                '#sampleChart',
                [[[0, 0], [1, 3], [3, 1]]],
                {
                    yaxis: {
                        tickFormatter: temperatureFormatter
                    }
                }
            );
            var plotPlaceholder = plot.getPlaceholder();
            $.each(plot.getAxes(), function (i, axis) {
                var box = axis.box;
                var axisTarget = $('<div />');
                axisTarget.
                    css({
                        position: 'absolute',
                        left: box.left,
                        top: box.top,
                        width: box.width,
                        height: box.height
                    }).
                    click(function () {
                        showFahrenheit = !showFahrenheit;
                        drawPlot();
                    }).
                    appendTo(plotPlaceholder);
            });
        };
    drawPlot();
    ...

First, note that we use a different way of creating a plot. Instead of calling the plot method on a jQuery collection that matches the placeholder element, we use the plot method directly from the jQuery object. This gives us immediate access to the Flot object, which we use to get the axes of our chart. You could also have used the following data method to gain access to the Flot object:

    var plot = $('#sampleChart').plot(...).data('plot');

Once we have the Flot object, we use the getAxes method to retrieve a list of axis objects. We use jQuery's each method to iterate over each axis, and we create a div element that acts as a target for interaction. We set the div element's CSS so that it has the same position and size as the axis's bounding box, and we attach an event handler to the click event before appending the div element to the plot's placeholder element. In this case, the event handler toggles a Boolean flag and redraws the plot. The flag determines whether the axis labels are displayed in Fahrenheit or Celsius, by changing the result of the function specified in the tickFormatter setting.

Summary

We are now able to customize a chart's axes, transform the shape of a graph by using a logarithmic scale, display multiple data series with their own independent axes, and make the axes interactive.


Advanced Data Operations

Packt
30 Oct 2013
11 min read
Recipe 1 – handling multi-valued cells

It is a common problem in many tables: what do you do if multiple values apply to a single cell? For instance, consider a Clients table with the usual name, address, and telephone fields. A typist is adding new contacts to this table when he or she suddenly discovers that Mr. Thompson has provided two addresses, with a different telephone number for each of them. There are essentially three possible reactions to this:

- Adding only one address to the table: This is the easiest thing to do, as it eliminates half of the typing work. Unfortunately, this implies that half of the information is lost as well, so the completeness of the table is in danger.
- Adding two rows to the table: While the table is now complete, we now have redundant data. Redundancy is also dangerous, because it leads to error: the two rows might accidentally be treated as two different Mr. Thompsons, which can quickly become problematic if Mr. Thompson is billed twice for his subscription. Furthermore, as the rows have no connection, information updated in one of them will not automatically propagate to the other.
- Adding all information to one row: In this case, two addresses and two telephone numbers are added to the respective fields. We say the field is overloaded with regard to its originally envisioned definition. At first sight, this is both complete and not redundant, but a subtle problem arises. While humans can perfectly make sense of this information, automated processes cannot. Imagine an envelope labeler, which will now print two addresses on a single envelope, or an automated dialer, which will treat the combined digits of both numbers as a single telephone number. The field has indeed lost its precise semantics.

Note that there are various technical solutions to deal with the problem of multiple values, such as table relations. However, if you are not in control of the data model you are working with, you'll have to choose one of the preceding solutions. Luckily, OpenRefine is able to offer the best of both worlds. Since it is also an automated piece of software, it needs to be informed whether a field is multi-valued before it can perform sensible operations on it.

In the Powerhouse Museum dataset, the Categories field is multi-valued, as each object in the collection can belong to different categories. Before we can perform meaningful operations on this field, we have to tell OpenRefine to treat it a little differently. Suppose we want to give the Categories field a closer look to check how many different categories there are and which categories are the most prominent. First, let's see what happens if we try to create a text facet on this field by clicking on the dropdown next to Categories and navigating to Facet | Text Facet, as shown in the following screenshot.

This doesn't work as expected because there are too many combinations of individual categories. OpenRefine simply gives up, saying that there are 14,805 choices in total, which is above the limit for display. While you can increase the maximum value by clicking on Set choice count limit, we strongly advise against this. First of all, it would make OpenRefine painfully slow, as it would offer us a list of 14,805 possibilities, which is too large for an overview anyway. Second, it wouldn't help us at all because OpenRefine would only list the combined field values (such as Hen eggs | Sectional models | Animal Samples and Products). This does not allow us to inspect the individual categories, which is what we're interested in.

To solve this, leave the facet open, but go to the Categories dropdown again and select Edit cells | Split multi-valued cells…, as shown in the following screenshot. OpenRefine now asks What separator currently separates the values?. As we can see in the first few records, the values are separated by a vertical bar, also known as the pipe character. Therefore, enter a vertical bar (|) in the dialog. If you are not able to find the corresponding key on your keyboard, try selecting the character from one of the Categories cells and copying it so you can paste it into the dialog. Then, click on OK.

After a few seconds, you will see that OpenRefine has split the cell values, and the Categories facet on the left now displays the individual categories. By default, it shows them in alphabetical order, but we will get more valuable insights if we sort them by the number of occurrences. This is done by changing the Sort by option from name to count, revealing the most popular categories.

One thing we can do now, which we couldn't do when the field was still multi-valued, is change the name of a single category across all records. For instance, to change the name of Clothing and Dress, hover over its name in the created Categories facet and click on the edit link, as you can see in the following screenshot. Enter a new name such as Clothing and click on Apply. OpenRefine changes all occurrences of Clothing and Dress into Clothing, and the facet is updated to reflect this modification.

Once you are done editing the separate values, it is time to merge them back together. Go to the Categories dropdown, navigate to Edit cells | Join multi-valued cells…, and enter the separator of your choice. This does not need to be the same separator as before, and multiple characters are also allowed. For instance, you could opt to separate the fields with a comma followed by a space.

Recipe 3 – clustering similar cells

Thanks to OpenRefine, you don't have to worry about inconsistencies that slipped in during the creation process of your data. If you have been investigating the various categories after splitting the multi-valued cells, you might have noticed that the same category labels do not always have the same spelling. For instance, there is Agricultural Equipment and Agricultural equipment (capitalization differences), Costumes and Costume (pluralization differences), and various other issues. The good news is that these can be resolved automatically; well, almost. But OpenRefine definitely makes it a lot easier.

The process of finding the same items with slightly different spelling is called clustering. After you have split multi-valued cells, you can click on the Categories dropdown and navigate to Edit cells | Cluster and edit…. OpenRefine presents you with a dialog box where you can choose between different clustering methods, each of which can use various similarity functions. When the dialog opens, key collision and fingerprint are chosen as the default settings. After some time (this can take a while, depending on the project size), OpenRefine will execute the clustering algorithm on the Categories field. It lists the found clusters in rows, along with the spelling variations in each cluster and the proposed value for the whole cluster, as shown in the following screenshot.

Note that OpenRefine does not automatically merge the values of the cluster. Instead, it wants you to confirm whether the values indeed point to the same concept. This avoids similar names, which still have a different meaning, accidentally ending up as the same. Before we start making decisions, let's first understand what all of the columns mean. The Cluster Size column indicates how many different spellings of a certain concept were thought to be found. The Row Count column indicates how many rows contain either of the found spellings. In Values in Cluster, you can see the different spellings and how many rows contain a particular spelling. Furthermore, these spellings are clickable, so you can indicate which one is correct. If you hover over the spellings, a Browse this cluster link appears, which you can use to inspect all items in the cluster in a separate browser tab. The Merge column contains a checkbox. If you check it, all values in that cluster will be changed to the value in the New Cell Value column when you click on one of the Merge Selected buttons. You can also manually choose a new cell value if the automatic value is not the best choice.

So, let's perform our first clustering operation. I strongly advise you to scroll carefully through the list to avoid clustering values that don't belong together. In this case, however, the algorithm hasn't acted too aggressively: in fact, all suggested clusters are correct. Instead of manually ticking the Merge? checkbox on every single one of them, we can just click on Select All at the bottom. Then, click on the Merge Selected & Re-Cluster button, which will merge all the selected clusters but won't close the window yet, so we can try other clustering algorithms as well. OpenRefine immediately re-clusters with the same algorithm, but no other clusters are found, since we have merged all of them.

Let's see what happens when we try a different similarity function. From the Keying Function menu, click on ngram fingerprint. Note that we get an additional parameter, Ngram Size, which we can experiment with to obtain less or more aggressive clustering. We see that OpenRefine has found several clusters again. It might be tempting to click on the Select All button again, but remember we warned you to carefully inspect all rows in the list. Can you spot the mistake? Have a closer look at the following screenshot: indeed, the clustering algorithm has decided that Shirts and T-shirts are similar enough to be merged. Unfortunately, this is not true. So, either manually select all correct suggestions, or deselect the ones that are not. Then, click on the Merge Selected & Re-Cluster button.

Apart from trying different similarity functions, we can also try totally different clustering methods. From the Method menu, click on nearest neighbor. We again see new clustering parameters appear (Radius and Block Chars, but we will use their default settings for now). OpenRefine again finds several clusters, but now it has been a little too aggressive. In fact, several suggestions are wrong, such as the Lockets / Pockets / Rockets cluster. Some other suggestions, such as "Photocopiers" and "Photocopier", are fine. In this situation, it might be best to manually pick the few correct ones among the many incorrect clusters. Assuming that all clusters have been identified, click on the Merge Selected & Close button, which will apply the merging to the selected items and take you back into the main OpenRefine window. If you look at the data now, or use a text facet on the Categories field, you will notice that the inconsistencies have disappeared.
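If you are curious what a keying function actually does, the following Python sketch approximates the fingerprint idea (lowercase the value, strip punctuation, sort and deduplicate the tokens) and groups values that collide on the same key. This is an illustration only, not OpenRefine's exact implementation; the lowercase "Clothing and dress" variant is made up for the example.

    import re
    from collections import defaultdict

    def fingerprint(value):
        # Rough approximation of a key collision keying function:
        # lowercase, drop punctuation, then sort and deduplicate the tokens.
        tokens = re.sub(r"[^\w\s]", "", value.lower()).split()
        return " ".join(sorted(set(tokens)))

    def cluster(values):
        groups = defaultdict(set)
        for v in values:
            groups[fingerprint(v)].add(v)
        # Only keys with more than one distinct value form a real cluster.
        return [sorted(g) for g in groups.values() if len(g) > 1]

    categories = ["Agricultural Equipment", "Agricultural equipment",
                  "Clothing and Dress", "Clothing and dress", "Numismatics"]
    print(cluster(categories))
    # [['Agricultural Equipment', 'Agricultural equipment'],
    #  ['Clothing and Dress', 'Clothing and dress']]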
What are clustering methods?

OpenRefine offers two different clustering methods, key collision and nearest neighbor, which fundamentally differ in how they function. With key collision, the idea is that a keying function is used to map a field value to a certain key. Values that are mapped to the same key are placed inside the same cluster. For instance, suppose we have a keying function that removes all spaces; then, A B C, AB C, and ABC will be mapped to the same key: ABC. In practice, the keying functions are constructed in a more sophisticated and helpful way.

Nearest neighbor, on the other hand, is a technique in which each unique value is compared to every other unique value using a distance function. For instance, if we count every modification as one unit, the distance between Boot and Bots is 2: one addition and one deletion. This corresponds to an actual distance function in OpenRefine, namely levenshtein.

In practice, it is hard to predict which combination of method and function is the best for a given field. Therefore, it is best to try out the various options, each time carefully inspecting whether the clustered values actually belong together. The OpenRefine interface helps you by putting the various options in the order in which they are most likely to help: for instance, trying key collision before nearest neighbor.

Summary

In this article we learned how to handle multi-valued cells and how to cluster similar cells in OpenRefine. Multi-valued cells are a common problem in many tables, and this article showed what to do if multiple values apply to a single cell. Since OpenRefine is an automated piece of software, it needs to be informed whether a field is multi-valued before it can perform sensible operations on it, and this article showed an example of how to go about it. It also shed light on clustering methods. OpenRefine offers two different clustering methods, key collision and nearest neighbor, which fundamentally differ in how they function. With key collision, the idea is that a keying function is used to map a field value to a certain key; values that are mapped to the same key are placed inside the same cluster.


Highlights of Greenplum

Packt
30 Oct 2013
10 min read
Big Data analytics – platform requirements

Organizations are striving to become more data driven and to leverage data to gain a competitive advantage. It is inevitable that any current business intelligence infrastructure needs to be upgraded to include Big Data technologies, and analytics needs to be embedded into every core business process. The following diagram depicts a matrix that connects requirements from low storage/cost to high storage/cost information management systems and analytics applications.

The following section lists the capabilities that an integrated platform for Big Data analytics should have:

- A data integration platform that can integrate data from any source, of any type, and highly voluminous in nature. This includes efficient data extraction, data cleansing, transformation, and loading capabilities.
- A data storage platform that can hold structured, unstructured, and semi-structured data, with a capability to slice and dice data to any degree regardless of the format. In short, while we store data, we should be able to use the best suited platform for a given data format (for example, structured data in a relational store, semi-structured data in a NoSQL store, and unstructured data in a file store) and still be able to join data across platforms to run analytics.
- Support for running standard analytics functions and standard analytical tools on data that has the characteristics described previously.
- Modular and elastically scalable hardware that wouldn't force changes to the architecture/design with growing needs to handle bigger data and more complex processing requirements.
- A centralized management and monitoring system.
- A highly available and fault tolerant platform that can repair itself seamlessly in the event of any hardware failure.
- Support for advanced visualizations to communicate insights in an effective way.
- A collaboration platform that can help end users perform the functions of loading, exploring, and visualizing data, and other workflow aspects, as an end-to-end process.

Core components

The following figure depicts the core software components of Greenplum UAP. In this section, we will take a brief look at what each component is and take a deep dive into their functions in the sections to follow.

Greenplum Database

Greenplum Database is a shared nothing, massively parallel processing solution built to support next generation data warehousing and Big Data analytics processing. It stores and analyzes voluminous structured data. It comes in a software-only version that works on commodity servers (this being its unique selling point) and is additionally available as an appliance (DCA) that can take advantage of large clusters of powerful servers, storage, and switches. GPDB (Greenplum Database) comes with a parallel query optimizer that uses a cost-based algorithm to evaluate and select optimal query plans. Its high-speed interconnect supports continuous pipelining for data processing. In its new distribution under Pivotal, Greenplum Database is called Pivotal (Greenplum) Database.

Shared nothing, massive parallel processing (MPP) systems, and elastic scalability

Until now, our applications have been benchmarked for a certain performance, and the core hardware and its architecture determined their readiness for further scalability, which came at a cost, be it in terms of changes to the design or hardware augmentation. With growing data volumes, scalability and total cost of ownership are becoming a big challenge, and the need for elastic scalability has become prime. This section compares shared disk, shared memory, and shared nothing data architectures and introduces the concept of massive parallel processing. The Greenplum Database and HD components implement a shared nothing data architecture with a master/worker paradigm, demonstrating massive parallel processing capabilities.

Shared disk data architecture

Have a look at the following figure, which gives an idea of shared disk data architecture. Shared disk data architecture refers to an architecture where there is a data disk that holds all the data and each node in the cluster accesses this data for processing. Any data operation can be performed by any node at a given point in time, and if two nodes attempt to persist/write a tuple at the same time, a disk-based lock or intended lock communication is passed on to ensure consistency, thus affecting performance. Further, with an increase in the number of nodes, contention at the database level increases. These architectures are write limited, as there is a need to handle the locks across the nodes in the cluster. Even in the case of reads, partitioning should be implemented effectively to avoid complete table scans.

Shared memory data architecture

Have a look at the following figure, which gives an idea of shared memory data architecture. In-memory data grids come under the shared memory data architecture category. In this architecture paradigm, data is held in memory that is accessible to all the nodes within the cluster. The major advantage of this architecture is that there is no disk I/O involved and data access is very quick. This advantage comes with an additional need for loading and synchronizing data in memory with the underlying data store. The memory layer seen in the figure can be distributed and local to the compute nodes, or can exist as a data node.

Shared nothing data architecture

Though an old paradigm, shared nothing data architecture is gaining traction in the context of Big Data. Here the data is distributed across the nodes in the cluster and every processor operates on the data local to itself. The location where data resides is referred to as the data node, and where the processing logic resides is called the compute node. It can happen that both nodes, compute and data, are physically one. These nodes within the cluster are connected using high-speed interconnects. The following figure depicts two aspects of the architecture: the one on the left represents data and compute processes decoupled, and the one on the right represents data and compute processes co-located.

One of the most important aspects of shared nothing data architecture is the fact that there will not be any contention or locks that would need to be addressed. Data is distributed across the nodes within the cluster using a distribution plan that is defined as a part of the schema definition. Additionally, for higher query efficiency, partitioning can be done at the node level. Any requirement for a distributed lock would bring in complexity, so an efficient distribution and partitioning strategy is a key success factor. Reads are usually the most efficient relative to shared disk databases.
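To make the distribution idea concrete before going further, here is a toy Python sketch of hash distribution. It is an illustration only, not Greenplum's actual hashing, and the table and column names are made up.

    # Toy illustration of hash distribution across segments.
    NUM_SEGMENTS = 4

    def segment_for(distribution_key):
        # Each row lands on the segment chosen by hashing its distribution key.
        return hash(distribution_key) % NUM_SEGMENTS

    # orders is distributed by order_id, customers by customer_id (hypothetical tables).
    orders = [(1, "cust_a"), (2, "cust_b"), (3, "cust_a")]
    customers = [("cust_a", "Alice"), ("cust_b", "Bob")]

    order_placement = {row: segment_for(row[0]) for row in orders}
    customer_placement = {row: segment_for(row[0]) for row in customers}

    # A join on customer_id cannot run purely segment-locally here, because orders
    # are hashed on order_id; matching rows may sit on different segments, so the
    # orders would first have to be redistributed by customer_id.
    orders_redistributed = {row: segment_for(row[1]) for row in orders}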
Again, the efficiency is determined by the distribution policy: if a query needs to join data across the nodes in the cluster, users would see a temporary redistribution step that brings the required data elements together onto another node before the query result is returned. Shared nothing data architecture thus supports massive parallel processing capabilities. Some of the features of shared nothing data architecture are as follows:

- It can scale extremely well on general purpose systems.
- It provides automatic parallelization in loading and querying any database.
- It has optimized I/O and can scan and process nodes in parallel.
- It supports linear scalability, also referred to as elastic scalability: by adding a new node to the cluster, additional storage and processing capability is gained, both in terms of load performance and query performance.

The Greenplum high-availability architecture

In addition to the primary Greenplum system components, we can also optionally deploy redundant components for high availability and to avoid a single point of failure. The following components need to be implemented for data redundancy:

- Mirror segment instances: A mirror segment always resides on a different host than its primary segment. Mirroring provides you with a replica of the database contained in a segment. This may be useful in the event of disk/hardware failure. The metadata regarding the replica is stored on the master server in system catalog tables.
- Standby master host: For a fully redundant Greenplum Database system, a mirror of the Greenplum master can be deployed. A backup Greenplum master host serves as a warm standby in case the primary master host becomes unavailable. The standby master host is synchronized periodically and kept up to date using a transaction replication log process that runs on the standby master to keep the master host and the standby in sync. In the event of master host failure, the standby master is activated and constructed using the transaction logs.
- Dual interconnect switches: A highly available interconnect can be achieved by deploying redundant network interfaces on all Greenplum hosts and a dual Gigabit Ethernet. The default configuration is to have one network interface per primary segment instance on a segment host (both interconnects are 10Gig by default in the DCA).

External tables

External tables in Greenplum refer to those database tables that help Greenplum Database access data from a source that is outside of the database. We can have different external tables for different formats. Greenplum supports fast, parallel, as well as nonparallel data loading and unloading. The external tables act as an interfacing point to the external data source and give the impression of a local data source to the accessing function. File-based data sources are supported by external tables. The following file formats can be loaded onto external tables:

- Regular file-based source (supports Text, CSV, and XML data formats): file:// or gpfdist:// protocol
- Web-based file source (supports Text, CSV, OS commands, and scripts): http:// protocol
- Hadoop-based file source (supports Text and custom/user-defined formats): gphdfs:// protocol

Following is the syntax for the creation and deletion of readable and writable external tables.

To create a read-only external table:

    CREATE EXTERNAL (WEB) TABLE
    LOCATION (<<file paths>>) | EXECUTE '<<query>>'
    FORMAT '<<Format name for example: 'TEXT'>>' (DELIMITER, '<<name the delimiter>>');

To create a writable external table:

    CREATE WRITABLE EXTERNAL (WEB) TABLE
    LOCATION (<<file paths>>) | EXECUTE '<<query>>'
    FORMAT '<<Format name for example: 'TEXT'>>' (DELIMITER, '<<name the delimiter>>');

To drop an external table:

    DROP EXTERNAL (WEB) TABLE;

Following are examples of using the file:// and gphdfs:// protocols:

    CREATE EXTERNAL TABLE test_load_file
    ( id int, name text, date date, description text )
    LOCATION (
        'file://filehost:6781/data/folder1/*',
        'file://filehost:6781/data/folder2/*',
        'file://filehost:6781/data/folder3/*.csv'
    )
    FORMAT 'CSV' (HEADER);

In the preceding example, data is loaded from three different file server locations; also, as you can see, the wildcard notation for each of the locations can be different. In the case where the files are located on HDFS, the following notation needs to be used (in this example, the file is '|' delimited):

    CREATE EXTERNAL TABLE test_load_file
    ( id int, name text, date date, description text )
    LOCATION ( 'gphdfs://hdfshost:8081/data/filename.txt' )
    FORMAT 'TEXT' (DELIMITER '|');

Summary

In this article, we learned about Greenplum UAP and the Greenplum Database, and covered the core components of Greenplum UAP.


Getting started with Haskell

Packt
30 Oct 2013
9 min read
So what is Haskell? It is a fast, type-safe, purely functional programming language with powerful type inference. Having said that, let us try to understand what it gives us.

First, a purely functional programming language means that, in general, functions in Haskell don't have side effects. There is a special type for impure functions, or functions with side effects. Then, Haskell has a strong, static type system with automatic and robust type inference. In practice, this means that you do not usually need to specify the types of functions, and the type checker does not allow passing incompatible types. In strongly typed languages, types are considered to be a specification, due to the Curry-Howard correspondence, the direct relationship between programs and mathematical proofs. Under this great simplification, the theorem states that if a value of the type exists (or is inhabited), then the corresponding mathematical proof is correct. Or, jokingly: if a program compiles, then there is a 99 percent probability that it works according to specification. The question of whether the types conform to the specification in natural language is still open; Haskell won't help you with that.

The Haskell platform

The glorious Glasgow Haskell Compilation System, or simply Glasgow Haskell Compiler (GHC), is the most widely used Haskell compiler. It is the current de facto standard. The compiler is packaged into the Haskell platform, which follows Python's principle, "Batteries included". The platform is updated twice a year with new compilers and libraries. It usually includes a compiler, an interactive Read-Evaluate-Print Loop (REPL) interpreter, the Haskell 98/2010 libraries (the so-called Prelude) that include most of the common definitions and functions, and a set of commonly used libraries. If you are on Windows or Mac OS X, it is strongly recommended to use the prepackaged installers of the Haskell platform at http://www.haskell.org/platform/.

Quick tour of Haskell

To start with development, first we should be familiar with a few basic features of Haskell. We really need to know about laziness, data types, pattern matching, type classes, and the basic notion of monads to start with Haskell.

Laziness

Haskell is a language with lazy evaluation. From a programmer's point of view, that means that a value is evaluated if and only if it is really needed. Imperative languages usually have strict evaluation, that is, function arguments are evaluated before function application. To see the difference, let's take a look at a simple expression in Haskell:

    let x = 2 + 3

In a strict or eager language, the 2 + 3 expression would be immediately evaluated to 5, whereas in Haskell, only a promise to do this evaluation will be created and the value will be evaluated only when it is needed. In other words, this statement just introduces a definition of x which might be used afterwards, unlike in a strict language, where it is an operator that assigns the computed value, 5, to a memory cell named x. Also, this strategy allows the sharing of evaluations, because laziness assumes that computations can be executed whenever needed and, therefore, the result can be memoized. This can reduce the running time by an exponential factor over strict evaluation.

Laziness also allows us to manipulate infinite data structures. For instance, we can construct an infinite list of natural numbers as follows:

    let ns = [1..]

Moreover, we can manipulate it as if it were a normal list, even though some caution is needed, as you can get an infinite loop. We can take the first five elements of this infinite list by means of the built-in function take:

    take 5 ns

By running this example in GHCi you will get [1,2,3,4,5].

Functions as first-class citizens

The notion of a function is one of the core ideas in functional languages, and Haskell is no exception. The definition of a function includes a body and an optional type declaration. For instance, the take function is defined in Prelude as follows:

    take :: Int -> [a] -> [a]
    take = ...

Here, the type declaration says that the function takes an integer as its argument and a list of objects of type a, and returns a new list of the same type. Haskell also allows partial application of a function. For example, you can construct a function that takes the first five elements of a list as follows:

    take5 :: [a] -> [a]
    take5 = take 5

Functions are themselves objects, that is, you may pass a function as an argument to another function. Prelude defines the map function as a function of a function:

    map :: (a -> b) -> [a] -> [b]

map takes a function and applies it to each element of a list. Thus, functions are first-class citizens in the language and it is possible to manipulate them as if they were normal objects.

Data types

Data types are at the core of a strongly typed language such as Haskell. The distinctive feature of Haskell data types is that they are all immutable; that is, once an object is constructed, it cannot be changed. This might seem weird at first sight, but in the long run it has a few positive consequences. First, it enables the parallelization of computations. Second, all data is referentially transparent, that is, there is no difference between a reference to an object and the object itself. These two properties allow the compiler to reason about code optimization at a higher level than a C/C++ compiler can. Let us consider the following data type definitions in Haskell:

    type TimeStamp = Integer
    newtype Price = Price Double
    data Quote = AskQuote TimeStamp Price
               | BidQuote TimeStamp Price

There are three common ways to define data types in Haskell:

- The first declaration just creates a synonym for an existing data type, and the type checker won't prevent you from using Integer instead of TimeStamp. You can also use a value of type TimeStamp with any function that expects to work with an Integer.
- The second declaration creates a new type for prices, and you are not allowed to use Double instead of Price. The compiler will raise an error in such cases and thus enforce the correct usage.
- The last declaration introduces an Algebraic Data Type (ADT) and says that a value of type Quote might be constructed either by the data constructor AskQuote or by BidQuote, with a time stamp and a price as its parameters.

Quote itself is called a type constructor. A type constructor might be parameterized by type variables. For example, the types Maybe and Either, two frequently used standard types, are defined as follows:

    data Maybe a = Nothing | Just a
    data Either a b = Left a | Right b

The type variables a and b can be substituted by any other type.

Type classes

Type classes in Haskell are not classes as in object-oriented languages. They are more like interfaces with optional implementations. You can find similar features in other languages under the names traits, mix-ins, and so on, but unlike those, this feature in Haskell enables ad hoc polymorphism, that is, a function can be applied to arguments of different types. It is also known as function overloading or operator overloading. A polymorphic function can specify different implementations for different types. In principle, a type class consists of function declarations over some objects. The Eq type class is a standard type class that specifies how to compare two objects of the same type:

    class Eq a where
        (==) :: a -> a -> Bool
        (/=) :: a -> a -> Bool
        (/=) x y = not $ (==) x y

Here, Eq is the name of the type class, a is a type variable, and == and /= are the operations defined in the type class. This definition means that some type a is of class Eq if it has the operations == and /= defined on it. Moreover, the definition provides a default implementation of the operation /=. If you decide to implement this class for some data type, you only need to provide an implementation of a single operation.

There are a number of useful type classes defined in Prelude. For example, Eq is used to define equality between two objects; Ord specifies total ordering; Show and Read introduce a String representation of an object; and the Enum type class describes enumerations, that is, data types with nullary constructors. It might be quite boring to implement these trivial but useful type classes for each data type, so Haskell supports automatic derivation of most of the standard type classes:

    newtype Price = Price Double deriving (Eq, Show, Read, Ord)

Now, objects of type Price can be converted to and from String by means of the show and read functions, and compared using the equality operators. Those who want to know all the details can look them up in the Haskell language report.

IO monad

Being a purely functional language, Haskell requires a marker for impure functions, and the IO monad is that marker. The function main is the entry point for any Haskell program. It cannot be a pure function because it changes the state of the "World", at least by creating a new process in the OS. Let us take a look at its type:

    main :: IO ()
    main = do putStrLn "Hello, World!"

The IO () type signals that there is a computation that performs some I/O operation and returns an empty result. There is really only one way to perform I/O in Haskell: within the main procedure of your program. It is also not possible to execute an I/O action from an arbitrary function, unless that function is in the IO monad and called from main, directly or indirectly.

Summary

In this article we walked through the main unique features of the Haskell language and learned a bit of its history.


IBM SPSS Modeler – Pushing the Limits

Packt
30 Oct 2013
16 min read
Using the Feature Selection node creatively to remove or decapitate perfect predictors

In this recipe, we will identify perfect or near-perfect predictors in order to ensure that they do not contaminate our model. Perfect predictors earn their name by being correct 100 percent of the time, usually indicating circular logic and not a prediction of value. It is a common and serious problem. When this occurs, we have accidentally allowed information into the model that could not possibly be known at the time of the prediction. Everyone 30 days late on their mortgage receives a late letter, but receiving a late letter is not a good predictor of lateness, because their lateness caused the letter, not the other way around.

The rather colorful term decapitate is borrowed from the data miner Dorian Pyle. It is a reference to the fact that perfect predictors will be found at the top of any list of key drivers ("caput" means head in Latin). Therefore, to decapitate is to remove the variable at the top. Their status at the top of the list will be capitalized upon in this recipe.

The following table shows the three time periods: the past, the present, and the future. It is important to remember that, when we are making predictions, we can use information from the past to predict the present or the future, but we cannot use information from the future to predict the future. This seems obvious, but it is common to see analysts use information that was gathered after the date for which predictions are made. As an example, if a company sends out a notice after a customer has churned, you cannot say that the notice is predictive of churning.

    Customer        Contract Start (Past)   Expiration (Now)    Outcome (Future)   Renewal Date (Future)
    Joe             January 1, 2010         January 1, 2012     Renewed            January 2, 2012
    Ann             February 15, 2010       February 15, 2012   Out of Contract    Null
    Bill            March 21, 2010          March 21, 2012      Churn              NA
    Jack            April 5, 2010           April 5, 2012       Renewed            April 9, 2012
    New Customer    24 Months Ago           Today               ???                ???

Getting ready

We will start with a blank stream, and will be using the cup98lrn reduced vars2.txt data set.

How to do it...

To identify perfect or near-perfect predictors in order to ensure that they do not contaminate our model:

1. Build a stream with a Source node, a Type node, and a Table node, then force instantiation by running the Table node.
2. Force TARGET_B to be flag and make it the target.
3. Add a Feature Selection Modeling node and run it.
4. Edit the resulting generated model and examine the results. In particular, focus on the top of the list.
5. Review what you know about the top variables, and check to see if any could be related to the target by definition or could possibly be based on information that actually postdates the information in the target.
6. Add a CHAID Modeling node, set it to run in Interactive mode, and run it.
7. Examine the first branch, looking for any child node that might be perfectly predicted; that is, look for child nodes whose members are all found in one category.
8. Continue steps 6 and 7 for the first several variables.
9. Variables that are problematic (steps 5 and/or 7) need to be set to None in the Type node.

How it works...

Which variables need decapitation? The problem is information that, although it was known at the time that you extracted it, was not known at the time of decision. In this case, the time of decision is the decision that the potential donor made to donate or not to donate. Was the amount, TARGET_D, known before the decision was made to donate? Clearly not.
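The intuition that a leaking variable betrays itself by predicting the target almost perfectly can also be sketched outside Modeler. The following Python snippet is an illustration only, with made-up records and field names; it flags any field in which every value maps to a single target outcome at least 99 percent of the time.

    from collections import defaultdict

    def near_perfect_predictors(rows, target, threshold=0.99):
        # Flag fields whose every value predicts one target outcome
        # at least `threshold` of the time.
        suspects = []
        fields = [f for f in rows[0] if f != target]
        for field in fields:
            counts = defaultdict(lambda: defaultdict(int))
            for row in rows:
                counts[row[field]][row[target]] += 1
            if all(max(c.values()) / sum(c.values()) >= threshold
                   for c in counts.values()):
                suspects.append(field)
        return suspects

    # Hypothetical records: LATE_LETTER is generated after the outcome is known,
    # so it tracks the target perfectly and should be removed.
    rows = [
        {"LATE_LETTER": "Y", "INCOME": "high", "TARGET_B": 0},
        {"LATE_LETTER": "N", "INCOME": "high", "TARGET_B": 1},
        {"LATE_LETTER": "N", "INCOME": "low",  "TARGET_B": 1},
        {"LATE_LETTER": "Y", "INCOME": "low",  "TARGET_B": 0},
    ]
    print(near_perfect_predictors(rows, "TARGET_B"))   # ['LATE_LETTER']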
No information that dates after the information in the target variable can ever be used in a predictive model. This recipe is built on the following foundation: variables with this problem will float up to the top of the Feature Selection results. They may not always be perfect predictors, but perfect predictors always must go. For example, you might find that, if a customer initially rejects or postpones a purchase, there should be a follow-up sales call in 90 days. They are recorded as having rejected the offer in the campaign, and as a result most of them had a follow-up call 90 days after the campaign. Since a couple of the follow-up calls might not have happened, it won't be a perfect predictor, but it still must go.

Note that variables such as RFA_2 and RFA_2A are both very recent information and highly predictive. Are they a problem? You can't be absolutely certain without knowing the data. Here, the information recorded in these variables is calculated just prior to the campaign. If the calculation were made just after, they would have to go. The CHAID tree almost certainly would have shown evidence of perfect prediction in this case.

There's more...

Sometimes a model has to have a lot of lead time; predicting today's weather is a different challenge than next year's prediction in the farmer's almanac. When more lead time is desired, you could consider dropping all of the _2 series variables. What would the advantage be? What if you were buying advertising space and there was a 45-day delay for the advertisement to appear? If the _2 variables occur between your advertising deadline and your campaign, you might have to use information attained in the _3 campaign.

Next-Best-Offer for large datasets

Association models have been the basis for next-best-offer recommendation engines for a long time. Recommendation engines are widely used for presenting customers with cross-sell offers. For example, if a customer purchases a shirt, pants, and a belt, which shoes would he also likely buy? This type of analysis is often called market-basket analysis, as we are trying to understand which items customers purchase in the same basket/transaction.

Recommendations must be very granular (for example, at the product level) to be usable at the check-out register, website, and so on. For example, knowing that female customers purchase a wallet 63.9 percent of the time when they buy a purse is not directly actionable. However, knowing that customers that purchase a specific purse (for example, SKU 25343) also purchase a specific wallet (for example, SKU 98343) 51.8 percent of the time can be the basis for future recommendations. Product level recommendations require the analysis of massive data sets (that is, millions of rows). Usually, this data is in the form of sales transactions where each line item (that is, row of data) represents a single product. The line items are tied together by a single transaction ID.

IBM SPSS Modeler association models support both tabular and transactional data. The tabular format requires each product to be represented as a column. As most product level recommendations would contain thousands of products, this format is not practical. The transactional format uses the transactional data directly and requires only two inputs: the transaction ID and the product/item.

Getting ready

This example uses the files stransactions.sav and scoring.csv.

How to do it...

To recommend the next best offer for large datasets:

1. Start with a new stream by navigating to File | New Stream.
2. Go to File | Stream Properties from the IBM SPSS Modeler menu bar. On the Options tab, change the Maximum members for nominal fields to 50000. Click on OK.
3. Add a Statistics File source node to the upper left of the stream. Set the file field by navigating to transactions.sav. On the Types tab, change the Product_Code field to Nominal and click on the Read Values button. Click on OK.
4. Add a CARMA Modeling node connected to the Statistics File source node in step 3. On the Fields tab, click on Use custom settings and check the Use transactional format check box. Select Transaction_ID as the ID field and Product_Code as the Content field.
5. On the Model tab of the CARMA Modeling node, change the Minimum rule support (%) to 0.0 and the Minimum rule confidence (%) to 5.0. Click on the Run button to build the model. Double-click the generated model to ensure that you have approximately 40,000 rules.
6. Add a Var File source node to the middle left of the stream. Set the file field by navigating to scoring.csv. On the Types tab, click on the Read Values button. Click on the Preview button to preview the data. Click on OK to dismiss all dialogs.
7. Add a Sort node connected to the Var File node in step 6. Choose Transaction_ID and Line_Number (with Ascending sort) by clicking the down arrow on the right of the dialog. Click on OK.
8. Connect the Sort node in step 7 to the generated model (replacing the current link).
9. Add an Aggregate node connected to the generated model.
10. Add a Merge node connected to the generated model. Connect the Aggregate node in step 9 to the Merge node. On the Merge tab, choose Keys as the Merge Method, select Transaction_ID, and click on the right arrow. Click on OK.
11. Add a Select node connected to the Merge node in step 10. Set the condition to Record_Count = Line_Number. Click on OK. At this point, the stream should look as shown in the screenshot.
12. Add a Table node connected to the Select node in step 11. Right-click on the Table node and click on Run to see the next-best-offer for the input data.

How it works...

In steps 1-5, we set up the CARMA model to use the transactional data (without needing to restructure the data). CARMA was selected over A Priori for its improved performance and stability with large data sets. For recommendation engines, the settings on the Model tab are somewhat arbitrary and are driven by the practical limitations of the number of rules generated. Lowering the thresholds for confidence and rule support generates more rules. Having more rules can have a negative impact on scoring performance but will result in more (albeit weaker) recommendations.

Rule support: how many transactions contain the entire rule, that is, both the antecedents ("if" products) and the consequents ("then" products).
Confidence: if a transaction contains all the antecedents ("if" products), what percentage of the time does it contain the consequents ("then" products).

In step 5, when we examine the model, we see the generated Association Rules with the corresponding rule supports and confidences. In the remaining steps (7-12), we score a new transaction and generate 3 next-best-offers based on the model containing the Association Rules. Since the model was built with transactional data, the scoring data must also be transactional. This means that each row is scored using the current row and the prior rows with the same transaction ID. The only row we generally care about is the last row for each transaction, where all the data has been presented to the model.
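To make the two measures above concrete, the following Python sketch computes rule support and confidence for a single rule over a handful of toy baskets. Apart from the purse and wallet SKUs mentioned earlier, the item names are hypothetical, and the recipe itself relies on CARMA to do this at scale.

    # Toy market baskets (hypothetical data, not the recipe's transactions).
    baskets = [
        {"purse_25343", "wallet_98343"},
        {"purse_25343", "wallet_98343", "belt_11200"},
        {"purse_25343", "scarf_77001"},
        {"shirt_55410", "belt_11200"},
    ]

    antecedents = {"purse_25343"}      # the "if" products
    consequents = {"wallet_98343"}     # the "then" products

    with_antecedents = [b for b in baskets if antecedents <= b]
    with_whole_rule = [b for b in with_antecedents if consequents <= b]

    rule_support = len(with_whole_rule) / len(baskets)           # 2 / 4 = 50.0%
    confidence = len(with_whole_rule) / len(with_antecedents)    # 2 / 3 = 66.7%
    print(f"support={rule_support:.1%} confidence={confidence:.1%}")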
To accomplish this, we count the number of rows for each transaction and select the line number that equals the total row count (that is, the last row for each transaction). Notice that the model returns 3 recommended products, each with a confidence, in order of decreasing confidence. A next-best-offer engine would present the customer with the best option first (or potentially all three options ordered by decreasing confidence). Note that, if there is no rule that applies to the transaction, nulls will be returned in some or all of the corresponding columns. There's more... In this recipe, you'll notice that we generate recommendations across the entire transactional data set. By using all transactions, we are creating generalized next-best-offer recommendations; however, we know that we can probably segment (that is, cluster) our customers into different behavioral groups (for example, fashion conscience, value shoppers, and so on.). Partitioning the transactions by behavioral segment and generating separate models for each segment will result in rules that are more accurate and actionable for each group. The biggest challenge with this approach is that you will have to identify the customer segment for each customer before making recommendations (that is, scoring). A unified approach would be to use the general recommendations for a customer until a customer segment can be assigned then use segmented models. Correcting a confusion matrix for an imbalanced target variable by incorporating priors Classification models generate probabilities and a classification predicted class value. When there is a significant imbalance in the proportion of True values in the target variable, the confusion matrix as seen in the Analysis node output will show that the model has all predicted class values equal to the False value, leading an analyst to conclude the model is not effective and needs to be retrained. Most often, the conventional wisdom is to use a Balance node to balance the proportion of True and False values in the target variable, thus eliminating the problem in the confusion matrix. However, in many cases, the classifier is working fine without the Balance node; it is the interpretation of the model that is biased. Each model generates a probability that the record belongs to the True class and the predicted class is derived from this value by applying a threshold of 0.5. Often, no record has a propensity that high, resulting in every predicted class value being assigned False. In this recipe we learn how to adjust the predicted class for classification problems with imbalanced data by incorporating the prior probability of the target variable. Getting ready This recipe uses the datafile cup98lrn_reduced_vars3.sav and the stream Recipe – correct with priors.str. How to do it... To incorporate prior probabilities when there is an imbalanced target variable: Open the stream Recipe – correct with priors.str by navigating to File | Open Stream. Make sure the datafile points to the correct path to the datafile cup98lrn_reduced_vars3.sav. Open the generated model TARGET_B, and open the Settings tab. Note that compute Raw Propensity is checked. Close the generated model. Duplicate the generated model by copying and pasting the node in the stream. Connect the duplicated model to the original generated model. Add a Type node to the stream and connect it to the generated model. Open the Type node and scroll to the bottom of the list. 
Note that the fields related to the two models have not yet been instantiated. Click on Read Values so that they are fully instantiated. Insert a Filler node and connect it to the Type node. Open the Filler node and, in the variable list, select $N1-TARGET_B. Inside the Condition section, type $RP1-TARGET_B' >= TARGET_B_Mean, Click on OK to dismiss the Filler node (after exiting the Expression Builder). Insert an Analysis node to the stream. Open the Analysis node and click on the check box for Coincidence Matrices. Click on OK. Run the stream to the Analysis node. Notice that the coincidence matrix (confusion matrix) for $N-TARGET_B has no predictions with value = 1, but the coincidence matrix for the second model, the one adjusted by step 7 ($N1-TARGET_B), has more than 30 percent of the records labeled as value = 1. How it works... Classification algorithms do not generate categorical predictions; they generate probabilities, likelihoods, or confidences. For this data set, the target variable, TARGET_B, has two values: 1 and 0. The classifier output from any classification algorithm will be a number between 0 and 1. To convert the probability to a 1 or 0 label, the probability is thresholded, and the default in Modeler (and all predictive analytics software) is the threshold at 0.5. This recipe changes that default threshold to the prior probability. The proportion of TARGET_B = 1 values in the data is 5.1 percent, and therefore this is the classic imbalanced target variable problem. One solution to this problem is to resample the data so that the proportion of 1s and 0s are equal, normally achieved through use of the Balance node in Modeler. Moreover, one can create the Balance node from running a Distribution node for TARGET_B, and using the Generate | Balance node (reduce) option. The justification for balancing the sample is that, if one doesn't do it, all the records will be classified with value = 0. The reason for all the classification decisions having value 0 is not because the Neural Network isn't working properly. Consider the histogram of predictions from the Neural Network shown in the following screenshot. Notice that the maximum value of the predictions is less than 0.4, but the center of density is about 0.05. The actual shape of the histogram and the maximum predicted value depend on the Neural Network; some may have maximum values slightly above 0.5. If the threshold for the classification decision is set to 0.5, since no neural network predicted confidence is greater than 0.5, all of the classification labels will be 0. However, if one sets the threshold to the TARGET_B prior probability, 0.051, many of the predictions will exceed that value and be labeled as 1. We can see the result of the new threshold by color-coding the histogram of the previous figure with the new class label, in the following screenshot. This recipe used a Filler node to modify the existing predicted target value. The categorical prediction from the Neural Network whose prediction is being changed is $N1-TARGET_B. The $ variables are special field names that are used automatically in the Analysis node and Evaluation node. It's possible to construct one's own $ fields with a Derive node, but it is safer to modify the one that's already in the data. There's more... This same procedure defined in this recipe works for other modeling algorithms as well, including logistic regression. Decision trees are a different matter. Consider the following screenshot. 
This result, showing that the C5 tree didn't split at all, is a consequence of the imbalanced target variable. Rather than balancing the sample, there are other ways to get a tree built. For C&RT or Quest trees, go to the Build Options, select the Costs & Priors item, and choose Equal for all classes for the priors (equal priors). This option forces C&RT to treat the two classes mathematically as if their counts were equal. It is equivalent to running a Balance node to boost the sample so that there are equal numbers of 0s and 1s, but it is done without adding additional records to the data and slowing down training; equal priors is purely a mathematical reweighting. The C5 tree doesn't have an option for setting priors. An alternative, one that works not only with C5 but also with C&RT, CHAID, and Quest trees, is to change the misclassification costs so that the cost of classifying a 1 as a 0 is 20, approximately the ratio of the 95 percent 0s to 5 percent 1s.
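Outside Modeler, the core idea of this recipe, thresholding the predicted probability at the prior rather than at the default 0.5, can be sketched in a few lines of Python. This is only an illustration: the scores and labels below are made up to mimic the histogram described above, and the 0.051 prior is the one quoted in the recipe.

```python
from collections import Counter

def classify(probabilities, threshold):
    """Turn predicted probabilities into 1/0 labels at a given threshold."""
    return [1 if p >= threshold else 0 for p in probabilities]

def confusion(actual, predicted):
    """Counts of (actual, predicted) pairs, i.e., a tiny confusion matrix."""
    return Counter(zip(actual, predicted))

# Toy propensities standing in for '$RP1-TARGET_B': maximum below 0.4,
# most of the mass near 0.05, as in the histogram described in the recipe.
actual = [0, 0, 0, 0, 0, 0, 0, 1, 0, 0]
scores = [0.01, 0.02, 0.08, 0.03, 0.12, 0.04, 0.02, 0.35, 0.05, 0.06]
prior = 0.051

print(confusion(actual, classify(scores, 0.5)))    # every record predicted 0
print(confusion(actual, classify(scores, prior)))  # several records now predicted 1
```

With the 0.5 threshold every prediction is 0, exactly the "useless" confusion matrix the Analysis node shows; with the prior as the threshold, some records are labeled 1 and the matrix becomes interpretable.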

Cloudera Hadoop and HP Vertica

Packt
29 Oct 2013
9 min read
(For more resources related to this topic, see here.) Cloudera Hadoop Hadoop is one of the names we think about when it comes to Big Data. I'm not going into details about it since there is plenty of information out there; moreover, like somebody once said, "If you decided to use Hadoop for your data warehouse, then you probably have a good reason for it". Let's not forget: it is primarily a distributed filesystem, not a relational database. That said, there are many cases when we may need to use this technology for number crunching, for example, together with MicroStrategy for analysis and reporting. There are mainly two ways to leverage Hadoop data from MicroStrategy: the first is Hive and the second is Impala. They both work as SQL bridges to the underlying Hadoop structures, converting standard SELECT statements into jobs. The connection is handled by a proprietary 32-bit ODBC driver available for free from the Cloudera website. In my tests, Impala resulted largely faster than Hive, so I will show you how to use it from our MicroStrategy virtual machine. Please note that I am using Version 9.3.0 for consistency with the rest of the book. If you're serious about Big Data and Hadoop, I strongly recommend upgrading to 9.3.1 for enhanced performance and easier setup. See MicroStrategy knowledge base document TN43588 : Post-Certification of Cloudera Impala 1.0 with MicroStrategy 9.3.1 . The ODBC driver is the same for both Hive and Impala, only the driver settings change. Connecting to a Hadoop database To show how we can connect to a Hadoop database, I will use two virtual machines: one with MicroStrategy Suite and the second with Cloudera Hadoop distribution, specifically, a virtual appliance that is available for download from their website. The configuration of the Hadoop cluster is out of scope; moreover, I am not a Hadoop expert. I'll simply give some hints, feel free to use any other configuration/vendor, the procedure and ODBC parameters should be similar. Getting ready Start by going to http://at5.us/AppAU1 The Cloudera VM download is almost 3 GB (cloudera-quickstart-vm-4.3.0-vmware.tar.gz) and features the CH4 version. After unpacking the archive, you'll find a cloudera-quickstart-vm-4.3.0-vmware.ovf file that can be opened with VMware, see screen capture: Accept the defaults and click on Import to generate the cloudera-quickstart-vm-4.3.0-vmware virtual machine. Before starting the Cloudera appliance, change the network card settings from NAT to Bridged since we need to access the database from another VM: Leave the rest of the parameters, as per the default, and start the machine. After a while, you'll be presented with a graphical interface of Centos Linux. If the network has started correctly, the machine should have received an IP address from your network DHCP. We need a fixed rather than dynamic address in the Hadoop VM, so: Open the System | Preferences | Network Connections menu. Select the name of your card (should be something like Auto eth1 ) and click on Edit… . Move to the IPv4 Settings tab and change the Method from Automatic (DHCP) to Manual . Click on the Add button to create a new address. Ask your network administrator for details here and fill Address , Netmask , and Gateway . Click on Apply… and when prompted type the root password cloudera and click on Authenticate . Then click on Close . 
Check if the change was successful by opening a Terminal window (Applications | System Tools | Terminal ) and issue the ifconfig command, the answer should include the address that you typed in step 4. From the MicroStrategy Suite virtual machine, test if you can ping the Cloudera VM. When we first start Hadoop, there are no tables in the database, so we create the samples: In the Cloudera virtual machine, from the main page in Firefox open Cloudera Manager , click on I Agree in the Information Assurance Policy dialog. Log in with username admin and password admin. Look for a line with a service named oozie1 , notice that it is stopped. Click on the Actions button and select Start… . Confirm with the Start button in the dialog. A Status window will pop up, wait until the Progress is reported as Finished and close it. Now click on the Hue button in the bookmarks toolbar. Sign up with username admin and password admin, you are now in the Hue home page. Click on the first button in the blue Hue toolbar (tool tip: About Hue ) to go to the quick Start Wizard . Click on the Next button to go to Step 2: Examples tab. Click on Beeswax (Hive UI) and wait until a message over the toolbar says Examples refreshed . Now in the Hue toolbar, click on the seventh button from the left (tool tip: Metastore Manager ), you will see the default database with two tables: sample_07 and sample_08 . Enable the checkbox of sample_08 and click on the Browse Data button. After a while the Results tab shows a grid with data. So far so good. We now go back to the Cloudera Manager to start the Impala service. Click on the Cloudera Manager bookmark button. In the Impala1 row, open the Actions menu and choose Start… , then confirm Start . Wait until the Progress says Finished , then click on Close in the command details window. Go back to Hue and click on the fourth button on the toolbar (tool tip: Cloudera Impala (TM) Query UI ). In the Query Editor text area, type select * from sample_08 and click on Execute to see the table content. Next, we open the MicroStrategy virtual machine and download the 32-bit Cloudera ODBC Driver for Apache Hive, Version 2.0 from http://at5.us/AppAU2. Download the ClouderaHiveODBCSetup_v2_00.exe file and save it in C:install. How to do it... We install the ODBC driver: Run C:installClouderaHiveODBCSetup_v2_00.exe and click on the Next button until you reach Finish at the end of the setup, accepting every default. Go to Start | All Programs | Administrative Tools | Data Sources (ODBC) to open the 32-bit ODBC Data Source Administrator (if you're on 64-bit Windows, it's in the SysWOW64 folder). Click on System DSN and hit the Add… button. Select Cloudera ODBC Driver for Apache Hive and click on Finish . Fill the Hive ODBC DSN Configuration with these case-sensitive parameters (change the Host IP according to the address used in step 4 of the Getting ready section): Data Source Name : Cloudera VM Host : 192.168.1.40 Port : 21050 Database : default Type : HS2NoSasl Click on OK and then on OK again to close the ODBC Data Source Administrator. Now open the MicroStrategy Desktop application and log in with administrator and the corresponding password. Right-click on MicroStrategy Analytics Modules and select Create New Project… . Click on the Create project button and name it HADOOP, uncheck Enable Change Journal for this project and click on OK . When the wizard finishes creating the project click on Select tables from the Warehouse Catalog and hit the button labeled New… . 
Click on Next and type Cloudera VM in the Name textbox of the Database Instance Definition window. In this same window, open the Database type combobox and scroll down until you find Generic DBMS . Click on Next . In Local system ODBC data sources , pick Cloudera VM and type admin in both Database login and Password textboxes. Click on Next , then on Finish , and then on OK . When a Warehouse Catalog Browser error appears, click on Yes . In the Warehouse Catalog Options window, click on Edit… on the right below Cloudera VM . Select the Advanced tab and enable the radio button labeled Use 2.0 ODBC calls in the ODBC Version group. Click on OK . Now select the category Catalog | Read Settings in the left tree and enable the first radio button labeled Use standard ODBC calls to obtain the database catalog . Click on OK to close this window. When the Warehouse Catalog window appears, click on the lightning button (tool tip: Read the Warehouse Catalog ) to refresh the list of available tables. Pick sample_08 and move it to the right of the shopping cart. Then right-click on it and choose Import Prefix . Click on Save and Close and then on OK twice to close the Project Creation Assistant . You can now open the project and update the schema. From here, the procedure to create objects is the same as in any other project: Go to the Schema Objects | Attributes folder, and create a new Job attribute with these columns: ID : Table: sample_08 Column: code DESC : Table: sample_08 Column: description Go to the Fact folder and create a new Salary fact with salary column. Update the schema. Go to the Public Objects | Metrics folder and create a new Salary metric based on the Salary fact with Sum as aggregation function. Go to My Personal Objects | My Reports and create a new report with the Job attribute and the Salary metric: There you go; you just created your first Hadoop report. How it works... Executing Hadoop reports is no different from running any other standard DBMS reports. The ODBC driver handles the communication with Cloudera machine and Impala manages the creation of jobs to retrieve data. From MicroStrategy perspective, it is just another SELECT query that returns a dataset. There's more... Impala and Hive do not support the whole set of ANSI SQL syntax, so in some cases you may receive an error if a specific feature is not implemented: See the Cloudera documentation for details. HP Vertica Vertica Analytic Database is grid-based, column-oriented, and designed to manage large, fast-growing volumes of data while providing rapid query performance. It features a storage organization that favors SELECT statements over UPDATE and DELETE plus a high compression that stores columns of homogeneous datatype together. The Community (free) Edition allows up to three hosts and 1 TB of data, which is fairly sufficient for small to medium BI projects with MicroStrategy. There are several clients available for different operating systems, including 32-bit and 64-bit ODBC drivers for Windows.
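Before (or after) wiring the DSN into MicroStrategy, it can be worth smoke-testing it outside the BI stack. The following is a minimal sketch using Python's pyodbc package, which is an assumption on my part and not part of the recipe; the DSN name, credentials, and the sample_08 columns come from the steps above, and a Vertica DSN could be exercised the same way. Remember that a 32-bit DSN can only be loaded from a 32-bit process.

```python
# Minimal smoke test of the "Cloudera VM" System DSN (hypothetical helper script).
# Run it from a 32-bit Python, since the ODBC driver installed above is 32-bit.
import pyodbc

conn = pyodbc.connect("DSN=Cloudera VM;UID=admin;PWD=admin", autocommit=True)
cursor = conn.cursor()
cursor.execute("SELECT code, description, salary FROM sample_08 LIMIT 5")
for row in cursor.fetchall():
    print(row.code, row.description, row.salary)
conn.close()
```

If this prints five rows, the driver, the DSN, and the Impala service are all working, and any remaining problems are on the MicroStrategy side.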

Low-Level Index Control

Packt
28 Oct 2013
12 min read
(For more resources related to this topic, see here.) Altering Apache Lucene scoring With the release of Apache Lucene 4.0 in 2012, all the users of this great, full text search library, were given the opportunity to alter the default TF/IDF based algorithm. Lucene API was changed to allow easier modification and extension of the scoring formula. However, that was not the only change that was made to Lucene when it comes to documents score calculation. Lucene 4.0 was shipped with additional similarity models, which basically allows us to use different scoring formula for our documents. In this section we will take a deeper look at what Lucene 4.0 brings and how those features were incorporated into ElasticSearch. Setting per-field similarity Since ElasticSearch 0.90, we are allowed to set a different similarity for each of the fields we have in our mappings. For example, let's assume that we have the following simple mapping that we use, in order to index blog posts (stored in the posts_no_similarity.json file): { "mappings" : { "post" : { "properties" : { "id" : { "type" : "long", "store" : "yes", "precision_step" : "0" }, "name" : { "type" : "string", "store" : "yes", "index" : "analyzed" }, "contents" : { "type" : "string", "store" : "no", "index" : "analyzed" } } } } } What we would like to do is, use the BM25 similarity model for the name field and the contents field. In order to do that, we need to extend our field definitions and add the similarity property with the value of the chosen similarity name. Our changed mappings (stored in the posts_similarity.json file) would appear as shown in the following code: { "mappings" : { "post" : { "properties" : { "id" : { "type" : "long", "store" : "yes", "precision_step" : "0" }, "name" : { "type" : "string", "store" : "yes", "index" : "analyzed", "similarity" : "BM25" }, "contents" : { "type" : "string", "store" : "no", "index" : "analyzed", "similarity" : "BM25" } } } } } And that's all, nothing more is needed. After the preceding change, Apache Lucene will use the BM25 similarity to calculate the score factor for the name and contents fields. In case of the Divergence from randomness and Information based similarity model, we need to configure some additional properties to specify the behavior of those similarities. How to do that is covered in the next part of the current section. Default codec properties When using the default codec we are allowed to configure the following properties: min_block_size: It specifies the minimum block size Lucene term dictionary uses to encode blocks. It defaults to 25. max_block_size: It specifies the maximum block size Lucene term dictionary uses to encode blocks. It defaults to 48. Direct codec properties The direct codec allows us to configure the following properties: min_skip_count: It specifies the minimum number of terms with a shared prefix to allow writing of a skip pointer. It defaults to 8. low_freq_cutoff: The codec will use a single array object to hold postings and positions that have document frequency lower than this value. It defaults to 32. Memory codec properties By using the memory codec we are allowed to alter the following properties: pack_fst: It is a Boolean option that defaults to false and specifies if the memory structure that holds the postings should be packed into the FST. Packing into FST will reduce the memory needed to hold the data. acceptable_overhead_ratio: It is a compression ratio of the internal structure specified as a float value which defaults to 0.2. 
When using the 0 value, there will be no additional memory overhead, but the returned implementation may be slow. When using the 0.5 value, there can be a 50 percent memory overhead, but the implementation will be fast. Values higher than 1 are also possible, but may result in high memory overhead.

Pulsing codec properties

When using the pulsing codec we are allowed to use the same properties as with the default codec and, in addition to them, one more property, which is described as follows: freq_cut_off: It defaults to 1. It is the document frequency at which the postings list will be written into the term dictionary. Documents with a frequency equal to or less than the value of freq_cut_off will be processed.

Bloom filter-based codec properties

If we want to configure a bloom filter based codec, we can use the bloom_filter type and set the following properties: delegate: It specifies the name of the codec we want to wrap with the bloom filter. fpp: It is a value between 0 and 1.0 which specifies the desired false positive probability. We are allowed to set multiple probabilities depending on the number of documents per Lucene segment. For example, the default value of 10k=0.01, 1m=0.03 specifies that the fpp value of 0.01 will be used when the number of documents per segment is larger than 10,000 and the value of 0.03 will be used when the number of documents per segment is larger than one million. For example, we could configure our custom bloom filter based codec to wrap a direct postings format as shown in the following code (stored in the posts_bloom_custom.json file): { "settings" : { "index" : { "codec" : { "postings_format" : { "custom_bloom" : { "type" : "bloom_filter", "delegate" : "direct", "fpp" : "10k=0.03, 1m=0.05" } } } } }, "mappings" : { "post" : { "properties" : { "id" : { "type" : "long", "store" : "yes", "precision_step" : "0" }, "name" : { "type" : "string", "store" : "yes", "index" : "analyzed", "postings_format" : "custom_bloom" }, "contents" : { "type" : "string", "store" : "no", "index" : "analyzed" } } } } }

NRT, flush, refresh, and transaction log

In an ideal search solution, when new data is indexed it is instantly available for searching. At first glance, this is exactly how ElasticSearch works, even in multiserver environments. But this is not the truth (or at least not the whole truth), and we will show you why. Let's index an example document into the newly created index by using the following command: curl -XPOST localhost:9200/test/test/1 -d '{ "title": "test" }' Now, we will replace this document and immediately try to find it. In order to do this, we'll use the following command chain: curl -XPOST localhost:9200/test/test/1 -d '{ "title": "test2" }' ; curl localhost:9200/test/test/_search?pretty The preceding command will probably result in a response very similar to the following: {"ok":true,"_index":"test","_type":"test","_id":"1","_version":2}{ "took" : 1, "timed_out" : false, "_shards" : { "total" : 5, "successful" : 5, "failed" : 0 }, "hits" : { "total" : 1, "max_score" : 1.0, "hits" : [ { "_index" : "test", "_type" : "test", "_id" : "1", "_score" : 1.0, "_source" : { "title": "test" } } ] } } The first line is the response to the indexing command, the first command. As you can see, everything is correct, so the second command, the search query, should return the document with the title field set to test2; however, it returned the first document. What happened?
But before we give you the answer to that question, we should take a step back and discuss how the underlying Apache Lucene library makes newly indexed documents available for searching.

Updating index and committing changes

Segments are independent indices, which means that queries running in parallel with indexing should, from time to time, add newly created segments to the set of segments that are used for searching. Apache Lucene does that by creating subsequent (because of the write-once nature of the index) segments_N files, which list the segments in the index. This process is called committing. Lucene can do this in a secure way: either all the changes hit the index or none of them do. If a failure happens, we can be sure that the index will be in a consistent state. Let's return to our example. The first operation adds the document to the index, but doesn't run the commit command to Lucene. This is exactly how it works. However, a commit alone is not enough for the data to be available for searching. The Lucene library uses an abstraction class called Searcher to access the index. After a commit operation, the Searcher object should be reopened in order to see the newly created segments. This whole process is called refresh. For performance reasons, ElasticSearch tries to postpone costly refreshes: by default, a refresh is not performed after indexing a single document (or a batch of them); instead, the Searcher is refreshed every second. This is already quite frequent, but sometimes applications require the refresh operation to be performed more often than once every second. When this happens, you should re-examine the requirements or consider using another technology. If required, it is possible to force a refresh by using the ElasticSearch API. For example, in our example we can add the following command: curl -XGET localhost:9200/test/_refresh If we add the preceding command before the search, ElasticSearch responds as we had expected.

Changing the default refresh time

The time between automatic Searcher refreshes can be changed by using the index.refresh_interval parameter, either in the ElasticSearch configuration file or by using the update settings API. For example: curl -XPUT localhost:9200/test/_settings -d '{ "index" : { "refresh_interval" : "5m" } }' The preceding command changes the automatic refresh to be done every 5 minutes. Please remember that data indexed between refreshes won't be visible to queries. As we said, the refresh operation is costly when it comes to resources. The longer the refresh period, the faster your indexing will be. If you are planning a very heavy indexing procedure and you don't need your data to be visible until the indexing ends, you can consider disabling the refresh operation by setting the index.refresh_interval parameter to -1 and setting it back to its original value after the indexing is done.

The transaction log

Apache Lucene can guarantee index consistency and all-or-nothing indexing, which is great. But this cannot ensure that there will be no data loss when a failure happens while writing data to the index (for example, when there isn't enough space on the device, the device is faulty, or there aren't enough file handles available to create new index files). Another problem is that frequent commits are costly in terms of performance (as you recall, a single commit will trigger a new segment creation, and this can trigger the segments to merge).
ElasticSearch solves those issues by implementing a transaction log. The transaction log holds all uncommitted transactions, and from time to time ElasticSearch creates a new log for subsequent changes. When something goes wrong, the transaction log can be replayed to make sure that none of the changes were lost. All of these tasks happen automatically, so the user may not be aware that a commit was triggered at a particular moment. In ElasticSearch, the moment when the information from the transaction log is synchronized with the storage (which is the Apache Lucene index) and the transaction log is cleared is called flushing. Please note the difference between the flush and refresh operations. In most cases, refresh is exactly what you want: it is all about making new data available for searching. The flush operation, on the other hand, is used to make sure that all the data is correctly stored in the index and that the transaction log can be cleared. In addition to automatic flushing, it can be forced manually using the flush API. For example, we can flush all the data stored in the transaction log for all indices by running the following command: curl -XGET localhost:9200/_flush Or we can run the flush command for a particular index, which in our case is the one called library: curl -XGET localhost:9200/library/_flush curl -XGET localhost:9200/library/_refresh In the second example we used it together with the refresh, which opens a new searcher after flushing the data.

The transaction log configuration

If the default behavior of the transaction log is not enough, ElasticSearch allows us to configure how the transaction log is handled. The following parameters can be set in the elasticsearch.yml file, as well as by using the index settings update API, to control transaction log behavior: index.translog.flush_threshold_period: It defaults to 30 minutes (30m). It controls the time after which a flush will be forced automatically, even if no new data was written to the log. In some cases this can cause a lot of I/O operations, so sometimes it's better to flush more often with less data being stored in the log. index.translog.flush_threshold_ops: It specifies the maximum number of operations after which the flush operation will be performed. It defaults to 5000. index.translog.flush_threshold_size: It specifies the maximum size of the transaction log. If the size of the transaction log is equal to or greater than this parameter, the flush operation will be performed. It defaults to 200 MB. index.translog.disable_flush: This option disables automatic flushing. By default flushing is enabled, but sometimes it is handy to disable it temporarily, for example, during the import of a large amount of documents. All of the mentioned parameters are specified for an index of our choice, but they define the behavior of the transaction log for each of the index's shards. Of course, in addition to setting the preceding parameters in the elasticsearch.yml file, they can also be set by using the settings update API. For example: curl -XPUT localhost:9200/test/_settings -d '{ "index" : { "translog.disable_flush" : true } }' The preceding command was run before the import of a large amount of data, which gave us a performance boost for indexing. However, one should remember to turn flushing back on when the import is done.
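That disable-then-re-enable workflow is easy to script so that flushing is never accidentally left switched off. The following is a rough sketch using Python's requests library (an assumption on my part; the curl commands above work just as well), hitting the same endpoints with the index name test and a few placeholder documents:

```python
import json
import requests

BASE = "http://localhost:9200"

def set_disable_flush(index, disabled):
    # Same settings body as the curl example above
    body = {"index": {"translog.disable_flush": disabled}}
    requests.put(f"{BASE}/{index}/_settings", data=json.dumps(body)).raise_for_status()

set_disable_flush("test", True)      # pause automatic flushing for the import
try:
    for i, title in enumerate(["doc one", "doc two", "doc three"], start=1):
        doc = {"title": title}
        requests.post(f"{BASE}/test/test/{i}", data=json.dumps(doc)).raise_for_status()
finally:
    set_disable_flush("test", False)     # always re-enable flushing
    requests.get(f"{BASE}/test/_flush")  # and flush the accumulated transaction log
```

The finally block guarantees that flushing is re-enabled even if the import fails halfway through.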
Near Real Time GET

The transaction log gives us one more feature for free, that is, the real-time GET operation, which makes it possible to return the current version of a document, including versions that have not yet been committed. The real-time GET operation fetches data from the index, but first it checks whether a newer version of that document is available in the transaction log. If a not-yet-flushed version exists, the data from the index is ignored and the newer version of the document is returned, the one from the transaction log. In order to see how it works, you can replace the search operation in our example with the following command: curl -XGET localhost:9200/test/test/1?pretty ElasticSearch should return a result similar to the following: { "_index" : "test", "_type" : "test", "_id" : "1", "_version" : 2, "exists" : true, "_source" : { "title": "test2" } } If you look at the result, you will see that it is just what we expected, and no trick with refresh was required to obtain the newest version of the document.

About Cassandra

Packt
25 Oct 2013
28 min read
(For more resources related to this topic, see here.) So, if Cassandra is so good at everything, why doesn't everyone drop whatever database they are using and jump-start with Cassandra? This is a natural question. Some applications require strong ACID compliance, such as a booking system. If you are a person who goes by statistics, you'd ask how Cassandra fares against other existing data stores. Tilmann Rabl et al., in their paper Solving Big Data Challenges for Enterprise Application Performance Management (http://vldb.org/pvldb/vol5/p1724_tilmannrabl_vldb2012.pdf), report that, "In terms of scalability, there is a clear winner throughout our experiments. Cassandra achieves the highest throughput for the maximum number of nodes in all experiments with a linear increasing throughput from one to 12 nodes. This comes at the price of a high write and read latency. Cassandra's performance is best for high insertion rates." If you go through the paper, Cassandra wins in almost all the criteria. Equipped with proven concepts of distributed computing, made to serve reliably from commodity servers, and simple and easy to maintain, Cassandra is one of the most scalable, fastest, and most robust NoSQL databases. So, the next natural question is: what makes Cassandra so blazingly fast? Let us dive deeper into the Cassandra architecture.

Cassandra architecture

Cassandra is a relative latecomer in the distributed data-store war. It takes advantage of two proven and closely similar data-store mechanisms, namely Google BigTable, a distributed storage system for structured data, 2006 (http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en//archive/bigtable-osdi06.pdf [2006]), and Amazon Dynamo, Amazon's highly available key-value store, 2007 (http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/decandia07dynamo.pdf [2007]).

Figure 2.3: Read throughput shows linear scaling of Cassandra

Like BigTable, it has a tabular data presentation. It is not tabular in the strictest sense; it is rather a dictionary-like structure where each entry holds another sorted dictionary/map. This model is more powerful than the usual key-value store and it is named a column family. Properties such as eventual consistency and decentralization are taken from Dynamo. For now, think of a column family as a giant spreadsheet, such as one in MS Excel. But unlike in a spreadsheet, each row is identified by a row key with a number (token), and each cell can have its own unique name within the row. The columns in a row are sorted by this unique column name. Also, since the number of rows is allowed to be very large (1.7*(10)^38), we distribute the rows uniformly across all the available machines by dividing the rows into equal token groups. These rows make up a keyspace; a keyspace is the set of all the tokens (row IDs).

Ring representation

A Cassandra cluster is represented as a ring. The idea behind this representation is to show the token distribution. Let's take an example. Assume that there is a partitioner that generates tokens from 0 to 127 and you have four Cassandra machines to create a cluster. To allocate an equal load, we need each of the four nodes to bear an equal number of tokens. So, the first machine will be responsible for tokens 1 to 32, the second will hold 33 to 64, the third, 65 to 96, and the fourth, 97 to 127 and 0. If you mark each node with the maximum token number that it can hold, the cluster looks like a ring.
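The token arithmetic in this example is simple enough to sketch. The following Python snippet is purely illustrative (the function is not part of Cassandra; the 0 to 127 ring and the four nodes come from the example above): it computes the range of tokens each node owns, where a node owns everything after the previous node's token up to and including its own.

```python
def token_ranges(initial_tokens, ring_size):
    """Map each node's initial_token to the (start, end) token range it owns."""
    tokens = sorted(initial_tokens)
    ranges = {}
    for i, tok in enumerate(tokens):
        prev = tokens[i - 1]          # for the first token this wraps to the last one
        ranges[tok] = ((prev + 1) % ring_size, tok)
    return ranges

# Four nodes on a 128-token ring, as in the example above
for tok, (start, end) in sorted(token_ranges([0, 32, 64, 96], 128).items()):
    print(f"node with initial_token {tok:3d} owns tokens {start}..{end}")
```

The node whose range prints as 97..0 is the wrap-around case: it owns 97 to 127 and then token 0, matching the fourth machine in the example.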
Partitioner is the hash function that determines the range of possible row keys. Cassandra uses a partitioner to calculate the token equivalent to a row key (row ID).

Figure 2.4: Token ownership and distribution in a balanced Cassandra ring

When you start to configure Cassandra, one thing that you may want to set is the maximum token number that a particular machine can hold. This property can be set in the cassandra.yaml file as initial_token. One thing that may confuse a beginner is that the value of initial_token is the last token that the node owns, not the first. Be aware that nodes can be rebalanced and these tokens can change as new nodes join or old nodes are discarded. It is called the initial token because it is just the initial value, and it may be changed later.
The figure on the left represents the node-local activities on receipt of the write request If FailureDetector detects that there aren't enough live nodes to satisfy ConsistencyLevel, the request fails. If FailureDetector gives a green signal, but writes time-out after the request is sent due to infrastructure problems or due to extreme load, StorageProxy writes a local hint to replay when the failed nodes come back to life. This is called hinted handoff. One might think that hinted handoff may be responsible for Cassandra's eventual consistency. But it's not entirely true. If the coordinator node gets shut down or dies due to hardware failure and hints on this machine cannot be forwarded, eventual consistency will not occur. The Anti-entropy mechanism is responsible for consistency rather than hinted handoff. Anti-entropy makes sure that all replicas are in sync. If the replica nodes are distributed across datacenters, it will be a bad idea to send individual messages to all the replicas in other datacenters. It rather sends the message to one replica in each datacenter with a header instructing it to forward the request to other replica nodes in that datacenter. Now, the data is received by the node that should actually store that data. The data first gets appended to CommitLog, and pushed to a MemTable for the appropriate column family in the memory. When MemTable gets full, it gets flushed to the disk in a sorted structure named SSTable. With lots of flushes, the disk gets plenty of SSTables. To manage SSTables, a compaction process runs. This process merges data from smaller SSTables to one big sorted file. Read in action Similar to a write case, when StorageProxy of the node that a client is connected to gets the request, it gets a list of nodes containing this key based on Replication Strategy. StorageProxy then sorts the nodes based on their proximity to itself. The proximity is determined by the Snitch function that is set up for this cluster. Basically, there are the following types of Snitch: SimpleSnitch: A closer node is the one that comes first when moving clockwise in the ring. (A ring is when all the machines in the cluster are placed in a circular fashion with each having a token number. When you walk clockwise, the token value increases. At the end, it snaps back to the first node.) AbstractNetworkTopologySnitch: Implementation of the Snitch function works like this: nodes on the same rack are closest. The nodes in the same datacenter but in different rack are closer than those in other datacenters, but farther than the nodes in the same rack. Nodes in different datacenters are the farthest. To a node, the nearest node will be the one on the same rack. If there is no node on the same rack, the nearest node will be the one that lives in the same datacenter, but on a different rack. If there is no node in the datacenter, any nearest neighbor will be the one in the other datacenter. DynamicSnitch: This Snitch determines closeness based on recent performance delivered by a node. So, a quick-responding node is perceived closer than a slower one, irrespective of their location closeness or closeness in the ring. This is done to avoid overloading a slow-performing node. Now that we have the list of nodes that have desired row keys, it's time to pull data from them. The coordinator node (the one that the client is connected to) sends a command to the closest node to perform read (we'll discuss local read in a minute) and return the data. 
Now, based on ConsistencyLevel, other nodes will send a command to perform a read operation and send just the digest of the result. If we have Read Repair (discussed later) enabled, the remaining replica nodes will be sent a message to compute the digest of the command response. Let's take an example: say you have five nodes containing a row key K (that is, replication factor (RF) equals 5). Your read ConsistencyLevel is three. Then the closest of the five nodes will be asked for the data. And the second and third closest nodes will be asked to return the digest. We still have two left to be queried. If read Repair is not enabled, they will not be touched for this request. Otherwise, these two will be asked to compute digest. The request to the last two nodes is done in the background, after returning the result. This updates all the nodes with the most recent value, making all replicas consistent. So, basically, in all scenarios, you will have a maximum one wrong response. But with correct read and write consistency levels, we can guarantee an up-to-date response all the time. Let's see what goes within a node. Take a simple case of a read request looking for a single column within a single row. First, the attempt is made to read from MemTable, which is rapid-fast since there exists only one copy of data. This is the fastest retrieval. If the data is not found there, Cassandra looks into SSTable. Now, remember from our earlier discussion that we flush MemTables to disk as SSTables and later when compaction mechanism wakes up, it merges those SSTables. So, our data can be in multiple SSTables. Figure 2.7: A simplified representation of the read mechanism. The bottom image shows processing on the read node. Numbers in circles shows the order of the event. BF stands for Bloom Filter Each SSTable is associated with its Bloom Filter built on the row keys in the SSTable. Bloom Filters are kept in memory, and used to detect if an SSTable may contain (false positive) the row data. Now, we have the SSTables that may contain the row key. The SSTables get sorted in reverse chronological order (latest first). Apart from Bloom Filter for row keys, there exists one Bloom Filter for each row in the SSTable. This secondary Bloom Filter is created to detect whether the requested column names exist in the SSTable. Now, Cassandra will take SSTables one by one from younger to older. And use the index file to locate the offset for each column value for that row key and the Bloom filter associated with the row (built on the column name). On Bloom filter being positive for the requested column, it looks into the SSTable file to read the column value. Note that we may have a column value in other yet-to-be-read SSTables, but that does not matter, because we are reading the most recent SSTables first, and any value that was written earlier to it does not matter. So, the value gets returned as soon as the first column in the most recent SSTable is allocated. Components of Cassandra We have gone through how read and write takes place in highly distributed Cassandra clusters. It's time to look into individual components of it a little deeper. Messaging service Messaging service is the mechanism that manages internode socket communication in a ring. Communications, for example, gossip, read, read digest, write, and so on, processed via a messaging service, can be assumed as a gateway messaging server running at each node. To communicate, each node creates two socket connections per node. 
This implies that if you have 101 nodes, there will be 200 open sockets on each node to handle communication with other nodes. The messages contain a verb handler within them that basically tells the receiving node a couple of things: how to deserialize the payload message and what handler to execute for this particular message. The execution is done by the verb handlers (sort of an event handler). The singleton that orchestrates the messaging service mechanism is org.apache.cassandra.net.MessagingService. Gossip Cassandra uses the gossip protocol for internode communication. As the name suggests, the protocol spreads information in the same way an office rumor does. It can also be compared to a virus spread. There is no central broadcaster, but the information (virus) gets transferred to the whole population. It's a way for nodes to build the global map of the system with a small number of local interactions. Cassandra uses gossip to find out the state and location of other nodes in the ring (cluster). The gossip process runs every second and exchanges information with at the most three other nodes in the cluster. Nodes exchange information about themselves and other nodes that they come to know about via some other gossip session. This causes all the nodes to eventually know about all the other nodes. Like everything else in Cassandra, gossip messages have a version number associated with it. So, whenever two nodes gossip, the older information about a node gets overwritten with a newer one. Cassandra uses an Anti-entropy version of gossip protocol that utilizes Merkle trees (discussed later) to repair unread data. Implementation-wise the gossip task is handled by the org.apache.cassandra.gms.Gossiper class. Gossiper maintains a list of live and dead endpoints (the unreachable endpoints). At every one-second interval, this module starts a gossip round with a randomly chosen node. A full round of gossip consists of three messages. A node X sends a syn message to a node Y to initiate gossip. Y, on receipt of this syn message, sends an ack message back to X. To reply to this ack message, X sends an ack2 message to Y completing a full message round. Figure 2.8: Two nodes gossiping Failure detection Failure detection is one of the fundamental features of any robust and distributed system. A good failure detection mechanism implementation makes a fault-tolerant system such as Cassandra. The failure detector that Cassandra uses is a variation of The ϕ accrual failure detector (2004) by Xavier Défago et al. (The phi accrual detector research paper is available at http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.106.3350.) The idea behind a FailureDetector is to detect a communication failure and take appropriate actions based on the state of the remote node. Unlike traditional failure detectors, phi accrual failure detector does not emit a Boolean alive or dead (true or false, trust or suspect) value. Instead, it gives a continuous value to the application and the application is left to decide the level of severity and act accordingly. This continuous suspect value is called phi (ϕ). Partitioner Cassandra is a distributed database management system. This means it takes a single logical database and distributes it over one or more machines in the database cluster. So, when you insert some data in Cassandra, it assigns each row to a row key; and based on that row key, Cassandra assigns that row to one of the nodes that's responsible for managing it. Let's try to understand this. 
Cassandra inherits the data model from Google's BigTable (BigTable research paper can be found at http://research.google.com/archive/bigtable.html.). This means we can roughly assume that the data is stored in some sort of a table that has an unlimited number of columns (not unlimited, Cassandra limits the maximum number of columns to be two billion) with rows binded with a unique key, namely, row key. Now, your terabytes of data on one machine will be restrictive from multiple points of views. One is disk space, another being limited parallel processing, and if not duplicated, a source of single point of failure. What Cassandra does is, it defines some rules to slice data across rows and assigns which node in the cluster is responsible for holding which slice. This task is done by a partitioner. There are several types of partitioners to choose from. In short, Cassandra (as of Version 1.2) offers three partitioners as follows: RandomPartitioner: It uses MD5 hashing to distribute data across the cluster. Cassandra 1.1.x and precursors have this as the default partitioner. Murmur3Partitioner: It uses Murmur hash to distribute the data. It performs better than RandomPartitioner. It is the default partitioner from Cassandra Version 1.2 onwards. ByteOrderPartitioner: Keeps keys distributed across the cluster by key bytes. This is an ordered distribution, so the rows are stored in lexical order. This distribution is commonly discouraged because it may cause a hotspot. Replication Cassandra runs on commodity hardware, and works reliably in network partitions. However, this comes with a cost: replication. To avoid data inaccessibility in case a node goes down or becomes unavailable, one must replicate data to more than one node. Replication brings features such as fault tolerance and no single point of failure to the system. Cassandra provides more than one strategy to replicate the data, and one can configure the replication factor while creating keyspace. Log Structured Merge tree Cassandra (also, HBase) is heavily influenced by Log Structured Merge (LSM). It uses an LSM tree-like mechanism to store data on a disk. The writes are sequential (in append fashion) and the data storage is contiguous. This makes writes in Cassandra superfast, because there is no seek involved. Contrast this with an RBDMS system that is based on the B+ Tree (http://en.wikipedia.org/wiki/B%2B_tree) implementation. LSM tree advocates the following mechanism to store data: note down the arriving modification into a log file (CommitLog), push the modification/new data into memory (MemTable) for faster lookup, and when the system has gathered enough updates in memory or after a certain threshold time, flush this data to a disk in a structured store file (SSTable). The logs corresponding to the updates that are flushed can now be discarded. Figure 2.12: Log Structured Merge (LSM) Trees CommitLog One of the promises that Cassandra makes to the end users is durability. In conventional terms (or in ACID terminology), durability guarantees that a successful transaction (write, update) will survive permanently. This means once Cassandra says write successful that means the data is persisted and will survive system failures. It is done the same way as in any DBMS that guarantees durability: by writing the replayable information to a file before responding to a successful write. This log is called the CommitLog in the Cassandra realm. This is what happens. 
Any write to a node gets tracked by org.apache.cassandra.db.commitlog.CommitLog, which writes the data with certain metadata into the CommitLog file in such a manner that replaying this will recreate the data. The purpose of this exercise is to ensure there is no data loss. If due to some reason the data could not make it into MemTable or SSTable, the system can replay the CommitLog to recreate the data. MemTable MemTable is an in-memory representation of column family. It can be thought of as a cached data. MemTable is sorted by key. Data in MemTable is sorted by row key. Unlike CommitLog, which is append-only, MemTable does not contain duplicates. A new write with a key that already exists in the MemTable overwrites the older record. This being in memory is both fast and efficient. The following is an example: Write 1: {k1: [{c1, v1}, {c2, v2}, {c3, v3}]} In CommitLog (new entry, append): {k1: [{c1, v1},{c2, v2}, {c3, v3}]} In MemTable (new entry, append): {k1: [{c1, v1}, {c2, v2}, {c3, v3}]} Write 2: {k2: [{c4, v4}]} In CommitLog (new entry, append): {k1: [{c1, v1}, {c2, v2}, {c3, v3}]} {k2: [{c4, v4}]} In MemTable (new entry, append): {k1: [{c1, v1}, {c2, v2}, {c3, v3}]} {k2: [{c4, v4}]} Write 3: {k1: [{c1, v5}, {c6, v6}]} In CommitLog (old entry, append): {k1: [{c1, v1}, {c2, v2}, {c3, v3}]} {k2: [{c4, v4}]} {k1: [{c1, v5}, {c6, v6}]} In MemTable (old entry, update): {k1: [{c1, v5}, {c2, v2}, {c3, v3}, {c6, v6}]} {k2: [{c4, v4}]} Cassandra Version 1.1.1 uses SnapTree (https://github.com/nbronson/snaptree) for MemTable representation, which claims it to be "... a drop-in replacement for ConcurrentSkipListMap, with the additional guarantee that clone() is atomic and iteration has snapshot isolation." See also copy-on-write and compare-and-swap (http://en.wikipedia.org/wiki/Copy-on-write, http://en.wikipedia.org/wiki/Compare-and-swap). Any write gets written first to CommitLog and then to MemTable. SSTable SSTable is a disk representation of the data. MemTables gets flushed to disk to immutable SSTables. All the writes are sequential, which makes this process fast. So, the faster the disk speed, the quicker the flush operation. The SSTables eventually get merged in the compaction process and the data gets organized properly into one file. This extra work in compaction pays off during reads. SSTables have three components: Bloom filter, index files, and datafiles. Bloom filter Bloom filter is a litmus test for the availability of certain data in storage (collection). But unlike a litmus test, a Bloom filter may result in false positives: that is, it says that a data exists in the collection associated with the Bloom filter, when it actually does not. A Bloom filter never results in a false negative. That is, it never states that a data is not there while it is. The reason to use Bloom filter, even with its false-positive defect, is because it is superfast and its implementation is really simple. Cassandra uses Bloom filters to determine whether an SSTable has the data for a particular row key. Bloom filters are unused for range scans, but they are good candidates for index scans. This saves a lot of disk I/O that might take in a full SSTable scan, which is a slow process. That's why it is used in Cassandra, to avoid reading many, many SSTables, which can become a bottleneck. Index files Index files are companion files of SSTables. The same as Bloom filter, there exists one index file per SSTable. 
It contains all the row keys in the SSTable and, for each key, the offset at which the row starts in the datafile. At startup, Cassandra reads every 128th key (configurable) into memory (the sampled index). When the index is searched for a row key (after the Bloom filter has hinted that the row key might be in this SSTable), Cassandra performs a binary search on the sampled index in memory. Following a positive result from the binary search, Cassandra will have to read a block of the index file from the disk, starting from the nearest sampled value lower than the value that we are looking for.

Datafiles

Datafiles hold the actual data. They contain row keys, metadata, and columns (partial or full). Reading data from a datafile is just one disk seek followed by a sequential read, as the offset to the row key has already been obtained from the associated index file.

Compaction

As mentioned earlier, a read may require Cassandra to look across multiple SSTables to get a result. This is wasteful, costs multiple disk seeks, and may require conflict resolution, especially if too many SSTables have been created. To handle this problem, Cassandra has a process in place, namely compaction. Compaction merges multiple SSTable files into one. Off the shelf, Cassandra offers two types of compaction mechanisms: Size Tiered Compaction Strategy and Leveled Compaction Strategy. The compaction process starts when the number of SSTables on disk reaches a certain threshold (N: configurable). Although the merge process is a little I/O intensive, it pays off in the long term with a lower number of disk seeks during reads. Apart from this, there are a few other benefits of compaction: removal of expired tombstones (Cassandra v0.8+), merging of row fragments, and rebuilding of primary and secondary indexes.

Tombstones

Cassandra is a complex system with its data distributed among CommitLogs, MemTables, and SSTables on a node. The same data is then replicated over replica nodes. So, like everything else in Cassandra, deletion is going to be eventful. Deletion, to an extent, follows an update pattern, except that Cassandra tags the deleted data with a special value and marks it as a tombstone. This marker helps future queries, compaction, and conflict resolution. Let's step further down and see what happens when a column from a column family is deleted. A client connected to a node (the coordinator node, which may not be the one holding the data that we are going to mutate) issues a delete command for a column C in a column family CF. If the consistency level is satisfied, the delete command gets processed. When a node containing the row key receives a delete request, it updates or inserts the column in the MemTable with a special value, namely a tombstone. The tombstone basically has the same column name as the previous one; the value is set to the UNIX epoch. The timestamp is set to what the client has passed. When a MemTable is flushed to an SSTable, all tombstones go into it like any regular column. On the read side, when the data is read locally on the node and multiple versions of it exist in different SSTables, they are compared and the latest value is taken as the result of reconciliation. If a tombstone turns out to be the result of reconciliation, it is made a part of the result that this node returns. So, at this level, if a query touches a deleted column, the tombstone exists in the result. But the tombstones will eventually be filtered out of the result before it is returned to the client. So, a client can never see a value that is a tombstone.
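The timestamp-based reconciliation just described can be sketched in a few lines of Python. This is only an illustration (the Column type and its fields are invented for the example, not Cassandra's internal classes); it shows how a tombstone with the newest timestamp wins over older values that still sit in other SSTables:

```python
from typing import NamedTuple, Optional

class Column(NamedTuple):
    value: Optional[str]      # None stands in for the empty value a tombstone carries
    timestamp: int            # client-supplied write timestamp
    tombstone: bool = False

def reconcile(versions):
    """Pick the winning version of a column: the one with the latest timestamp."""
    return max(versions, key=lambda c: c.timestamp)

versions = [
    Column("alice@example.com", timestamp=100),       # from an old SSTable
    Column("alice@new.example.com", timestamp=200),   # from a newer SSTable
    Column(None, timestamp=300, tombstone=True),      # delete marker in the MemTable
]
winner = reconcile(versions)
print("deleted" if winner.tombstone else winner.value)   # -> deleted
```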
Hinted handoff

When we last talked about durability, we observed that Cassandra provides a CommitLog to ensure write durability. This is good. But what if the node that the write is destined for is itself dead? No amount of communication will get anything new written to that node. Cassandra, inspired by Dynamo, has a feature called hinted handoff. In short, it is the same as taking a quick note locally: X cannot be contacted; here is the mutation M (an operation that modifies data, such as an insert, delete, or update) that will need to be replayed when X comes back. The coordinator node (the node which the client is connected to), on receipt of a mutation/write request, forwards it to the appropriate replicas that are alive. If this fulfills the expected consistency level, the write is assumed successful. A write request to a node that does not respond, or that is known (via gossip) to be dead, is stored locally in the system.hints table. This hint contains the mutation. When a node comes to know via gossip that a failed node has recovered, it replays all the hints it has in store for that node. Also, every 10 minutes, it checks for any pending hinted handoffs to be written. Why worry about hinted handoff when you have already written enough replicas to satisfy the consistency level? Wouldn't it eventually get repaired? Yes, that's right. Also, hinted handoff may not be the most reliable way to repair a missed write: what if the node that holds the hint dies? This is a reason why we do not count on hinted handoff as a mechanism to provide a consistency guarantee (except in the case of the consistency level ANY); it's a single point of failure. The purpose of hinted handoff is, one, to make restored nodes quickly consistent with the other live ones; and two, to provide extreme write availability when consistency is not required.

Read repair and Anti-entropy

Cassandra promises eventual consistency, and read repair is the process that does that part. Read repair, as the name suggests, is the process of fixing inconsistencies among the replicas at the time of a read. What does that mean? Let's say we have three replica nodes A, B, and C that contain data X. During an update, X is updated to X1 in replicas A and B, but the update fails in replica C for some reason. On a read request for data X, the coordinator node asks for a full read from the nearest node (based on the configured Snitch) and digests of data X from enough other nodes to satisfy the consistency level. The coordinator node compares these values (something like digest(full_X) == digest_from_node_C). If the digests are the same as the digest of the full read, the system is consistent and the value is returned to the client. On the other hand, if there is a mismatch, the full data is retrieved, reconciliation is done, and the client is sent the reconciled value. After this, in the background, all the replicas are updated with the reconciled value so that every node has a consistent view of the data, as shown in the following figure:

Figure 2.16: Read repair dynamics. 1. The client queries for data X from a node C (coordinator). 2. C gets the data from replicas R1, R2, and R3 and reconciles it. 3. It sends the reconciled data to the client. 4. If there is a mismatch across replicas, repair is invoked.

Merkle tree

A Merkle tree (A Digital Signature Based on a Conventional Encryption Function by Merkle, R. (1988), available at http://www.cse.msstate.edu/~ramkumar/merkle2.pdf)
Figure 2.17: Merkle tree to determine mismatch in hash values at parent nodes due to the difference in underlying data

Summary

By now, you are familiar with all the nuts and bolts of Cassandra. It is understandable that it may be a lot to take in for someone new to NoSQL systems. It is okay if you do not have complete clarity at this point. As you start working with Cassandra, tweaking it, experimenting with it, and going through the Cassandra mailing list discussions or talks, you will come across things that you have read in this article, and they will start to make sense; perhaps you may want to come back and refer to this article to improve clarity. It is not required to understand this article fully to be able to write queries, set up clusters, maintain clusters, or do anything else related to Cassandra. A general sense of this article will take you far enough to work extremely well with Cassandra-based projects. Resources for Article: Further resources on this subject: Apache Cassandra: Libraries and Applications [Article] Apache Felix Gogo [Article] Migration from Apache to Lighttpd [Article]
Designing Sample Applications and Creating Reports

Packt
25 Oct 2013
6 min read
(For more resources related to this topic, see here.) Creating a new application We will now start using Microsoft Visual Studio, which we installed in the previous article: From the Start menu, select All Programs, then select the Microsoft Visual Studio 2012 folder, and then select Visual Studio 2012. Since we are running Visual Studio for the first time, we will see the following screenshot. We can select the default environment setting. Also, we can change this setting in our application. We will use C# as the programming language, so we will select Visual C# Development Settings and click on the Start Visual Studio button. Now we will see the Start Page of Microsoft Visual Studio 2012. Select New Project or open it by navigating to File | New | Project. This is shown in the following screenshot: As shown in the following screenshot, we will select Windows as the application template. We will then select Windows Forms Application as the application type. Let us name our application Monitor. Choose where you want to save the application and click on OK. Now we will see the following screenshot. Let's start to design our application interface. Start by right-clicking on the form and navigating to Properties, as shown in the following screenshot: The Properties explorer will open as shown in the following screenshot. Change Text property from Form1 to monitor, change the Name property to MainForm, and then change Size to 322, 563 because we will add many controls to our form. After I finished the design process, I found that this was the suitable form size; you can change form size and all controls sizes as you wish in order to better suit your application interface. Adding controls to the form In this step, we will start designing the interface of our application by adding controls to the Main Form. We will divide the Main Form into three main parts as follows: Employees Products and orders Employees and products Employees In this section, we will see the different ways in which we can search for our employees and display the result in a report. Beginning your design process Let's begin to design the Employees section of our application using the following steps: From the Toolbox explorer, drag-and-drop GroupBox into the Main Form. Change the GroupBox Size property to 286, 181, the Location property to 12,12, and Text property to Employees. Add Label to the Main Form. Change the Text property to ID and the Location property to 12,25. Add Textbox to the Main Form and change the Name property to txtEmpId. Change the Location property to 70, 20. Add GroupBox to the Main Form. Change the Groupbox Size property to 250,103, the Location property to 6, 40, and the Text property to filters. Add Label to the Main Form. Change the Text property to Title and the Location property to 6,21. Add ComboBox to Main Form. Change the Name property to cbEmpTitle and the Location property to 56,17. Add Label to Main Form. Change the Text property to City and the Location property to 6,50. Add ComboBox to Main Form. Change the Name property to cbEmpCity and the Location property to 56,46. Add Label to Main Form. Change the Text property to Country and the Location property to 6,76. Add ComboBox to Main Form. Change the Name property to cbEmpCountry and the Location property to 56,72. Add Button to Main Form. Change the Name property to btnEmpAll, the Location property to 6,152, and the Text property to All. Add Button to Main Form. 
Change the Name property to btnEmpById, the Location property to 99,152, and the Text property to By Id. Add Button to Main Form. Change the Name property to btnEmpByFilters, the Location property to 196,152 and the Text property to By filters. The final design will look like the following screenshot: Products and orders In this part, we will see the relationship between products and orders in order to demonstrate, which products have higher orders. Because we have a lot of product categories and orders from different countries, we filter our report by products category and countries. Beginning your design process After we finished the design of employees section, let's start designing the products and orders section. From Toolbox explorer, drag-and-drop GroupBox to the Main Form. Change the GroupBox Size property to 284,136, the Location property to 12,204 and the Text property to Products – Orders. Add GroupBox to the Main Form. Change the GroupBox Size property to 264,77, the Location property to 6,20, and the Text property to filters. Add Label to the Main Form, change the Text property to Category, and the Location property to 6,25. Add ComboBox to the Main Form. Change the Name Property to cbProductsCategory, and the Location property to 61,20. Add Label to the Main Form. Change the Text property to Country and the Location property to 6,51. Add ComboBox to the Main Form. Change the Name property to cbOrdersCountry and the Location property to 61,47. Add Button to the Main Form. Change the Name property to btnProductsOrdersByFilters, the Location property to 108,103, and the Text property to By filters. The final design looks like the following screenshot: Employees and orders In this part we will see the relationship between employees and orders in order to evaluate the performance of each employee. In this section we will filter data by a period of time. Beginning your design process In this section we will start to design the last section in our application employees and orders. From Toolbox explorer, drag -and-drop GroupBox to the Main Form. Change the Size property of GroupBox to 284,175, the Location property to 14,346, and the Text property to Employees - Orders. Add GroupBox to the Main Form. Change the GroupBox Size property to 264,77, the Location property to 6,25, and the Text property to filters. Add Label to the Main Form. Change the Text property to From and the Location property to 7,29. Add DateTimePicker to the Main Form and change the Name property to dtpFrom. Change the Location property to 41,25, the Format property to Short, and the Value property to 1/1/1996. Add Label to the Main Form. Change the Text property to To and the Location property to 7,55. Add DateTimePicker to the Main Form. Change the Name property to dtpTo, the Location property to 41,51, the Format property to Short, and the Value property to 12/30/1996. Add Button to the Main Form. Change the Name property to btnBarChart, the Location property to 15,117, and the Text property to Bar chart. Add Button to the Main Form and change the Name property to btnPieChart. Change the Location property to 99,117 and the Text property to Pie chart. Add Button to the Main Form. Change the Name Property to btnGaugeChart, the Location property to 186, 117, and the Text property to Gauge chart. Add Button to the Main Form. Change the Name property to btnAllInOne, the Location property to 63,144, and the Text property to All In One. 
The final design looks like the following screenshot: You can change Location, Size, and Text properties as you wish; but don't change the Name property because we will use it in code in the next articles.
Visualizing my Social Graph with d3.js

Packt
24 Oct 2013
7 min read
(For more resources related to this topic, see here.)

The Social Networks Analysis

Social Networks Analysis (SNA) is not new; sociologists have been using it for a long time to study human relationships (sociometry), to find communities, and to simulate how information or a disease spreads in a population. With the rise of social networking sites such as Facebook, Twitter, LinkedIn, and so on, the acquisition of large amounts of social network data has become easier. We can use SNA to get insight into customer behavior or unknown communities. It is important to say that this is not a trivial task, and we will come across sparse data and a lot of noise (meaningless data). We need to understand how to distinguish between false correlation and causation. A good start is getting to know our graph through visualization and statistical analysis. Social networking sites bring us the opportunity to ask questions that are otherwise too hard to approach, because polling enough people is time-consuming and expensive. In this article, we will obtain our social network's graph from the Facebook (FB) website in order to visualize the relationships between our friends. Finally, we will create an interactive visualization of our graph using D3.js.

Getting ready

The easiest method to get our friends list is by using a third-party application. Netvizz is a Facebook app developed by Bernhard Rieder, which allows exporting social graph data to gdf and tab formats. Netvizz may export information about our friends such as gender, age, locale, posts, and likes. In order to get our social graph from Netvizz, we need to access the link below and give the app access to our Facebook profile. https://apps.facebook.com/netvizz/ As is shown in the following screenshot, we will create a gdf file from our personal friend network by clicking on the link named here in Step 2. Then we will download the GDF (Graph Modeling Language) file. Netvizz will give us the number of nodes and edges (links); finally, we will click on the gdf file link, as we can see in the following screenshot: The output file myFacebookNet.gdf will look like this:

nodedef>name VARCHAR,label VARCHAR,gender VARCHAR,locale VARCHAR,agerank INT
23917067,Jorge,male,en_US,106
23931909,Haruna,female,en_US,105
35702006,Joseph,male,en_US,104
503839109,Damian,male,en_US,103
532735006,Isaac,male,es_LA,102
. . .
edgedef>node1 VARCHAR,node2 VARCHAR
23917067,35702006
23917067,629395837
23917067,747343482
23917067,755605075
23917067,1186286815
. . .

In the following screenshot we can see the visualization of the graph (106 nodes and 279 links). The nodes represent my friends and the links represent how they are connected to each other.

Transforming GDF to JSON

In order to work with the graph on the web with d3.js, we need to transform our gdf file to the json format. Firstly, we need to import the libraries numpy and json.

import numpy as np
import json

The numpy function genfromtxt will obtain only the ID and name from the nodes.csv file, using the usecols attribute in the 'object' format.

nodes = np.genfromtxt("nodes.csv",dtype='object',delimiter=',',skip_header=1,usecols=(0,1))

Then, the numpy function genfromtxt will obtain the links, with the source node and target node, from the links.csv file, using the usecols attribute in the 'object' format.
links = np.genfromtxt("links.csv",dtype='object',delimiter=',',skip_header=1,usecols=(0,1))

The JSON format used in the D3.js Force Layout graph implemented in this article requires transforming the ID (for example, 100001448673085) into a numerical position in the list of nodes. Then, we need to look for each appearance of the ID in the links and replace it by its position in the list of nodes.

for n in range(len(nodes)):
    for ls in range(len(links)):
        if nodes[n][0] == links[ls][0]:
            links[ls][0] = n
        if nodes[n][0] == links[ls][1]:
            links[ls][1] = n

Now, we need to create a dictionary "data" to store the JSON file.

data = {}

Next, we need to create a list of nodes with the names of the friends in the format "nodes": [{"name": "X"},{"name": "Y"},. . .] and add it to the data dictionary.

lst = []
for x in nodes:
    d = {}
    d["name"] = str(x[1]).replace("b'","").replace("'","")
    lst.append(d)
data["nodes"] = lst

Now, we need to create a list of links with the source and target in the format "links": [{"source": 0, "target": 2},{"source": 1, "target":2},. . .] and add it to the data dictionary.

lnks = []
for ls in links:
    d = {}
    d["source"] = ls[0]
    d["target"] = ls[1]
    lnks.append(d)
data["links"] = lnks

Finally, we need to create the file newJson.json and write the data dictionary into it with the dumps function of the json library.

with open("newJson.json","w") as f:
    f.write(json.dumps(data))

The file newJson.json will look as follows:

{"nodes": [{"name": "Jorge"},{"name": "Haruna"},{"name": "Joseph"},{"name": "Damian"},{"name": "Isaac"},. . .],
"links": [{"source": 0, "target": 2},{"source": 0, "target": 12},{"source": 0, "target": 20},{"source": 0, "target": 23},{"source": 0, "target": 31},. . .]}

Graph visualization with D3.js

D3.js provides us with the d3.layout.force() function, which implements a force-directed layout algorithm and helps us to visualize our graph. First, we need to define the CSS style for the nodes, links, and node labels.

<style>
.link { fill: none; stroke: #666; stroke-width: 1.5px; }
.node circle { fill: steelblue; stroke: #fff; stroke-width: 1.5px; }
.node text { pointer-events: none; font: 10px sans-serif; }
</style>

Then, we need to reference the d3js library.

<script src="http://d3js.org/d3.v3.min.js"></script>

Then, we need to define the width and height parameters for the svg container and include it in the body tag.

var width = 1100, height = 800;
var svg = d3.select("body").append("svg").attr("width", width).attr("height", height);

Now, we define the properties of the force layout such as gravity, distance, and size.

var force = d3.layout.force().gravity(.05).distance(150).charge(-100).size([width, height]);

Then, we need to acquire the data of the graph using the JSON format. We will configure the parameters for nodes and links.

d3.json("newJson.json", function(error, json) {
    force.nodes(json.nodes).links(json.links).start();

For a complete reference about the d3js Force Layout implementation, visit the link https://github.com/mbostock/d3/wiki/Force-Layout. Then, we define the links as lines from the json data.

var link = svg.selectAll(".link").data(json.links).enter().append("line").attr("class", "link");
var node = svg.selectAll(".node").data(json.nodes).enter().append("g").attr("class", "node").call(force.drag);

Now, we define the node as circles of size 6 and include the labels of each node.
node.append("circle").attr("r", 6);node.append("text").attr("dx", 12).attr("dy", ".35em").text(function(d) { return d.name }); Finally, with the function, tick, run step-by-step the force layout simulation. force.on("tick", function(){link.attr("x1", function(d) { return d.source.x; }).attr("y1", function(d) { return d.source.y; }).attr("x2", function(d) { return d.target.x; }).attr("y2", function(d) { return d.target.y; });node.attr("transform", function(d){return "translate(" + d.x + "," + d.y + ")";})});});</script> In the image below we can see the result of the visualization. In order to run the visualization we just need to open a Command Terminal and run the following Python command or any other web server. >>python –m http.server 8000 Then you just need to open a web browser and type the direction http://localhost:8000/ForceGraph.html. In the HTML page we can see our Facebook graph with a gravity effect and we can interactively drag-and-drop the nodes. All the codes and datasets of this article may be found in the author github repository in the link below.https://github.com/hmcuesta/PDA_Book/tree/master/Chapter10 Summary In this article we developed our own social graph visualization tool with D3js, transforming the data obtained from Netvizz with GDF format into JSON. Resources for Article: Further resources on this subject: GNU Octave: Data Analysis Examples [Article] Securing data at the cell level (Intermediate) [Article] Analyzing network forensic data (Become an expert) [Article]
Interacting with your Visualization

Packt
22 Oct 2013
9 min read
(For more resources related to this topic, see here.) The ultimate goal of visualization design is to optimize applications so that they help us perform cognitive work more efficiently. Ware C. (2012) The goal of data visualization is to help the audience gain information from a large quantity of raw data quickly and efficiently through metaphor, mental model alignment, and cognitive magnification. So far in this article we have introduced various techniques to leverage D3 library implementing many types of visualization. However, we haven't touched a crucial aspect of visualization: human interaction. Various researches have concluded the unique value of human interaction in information visualization. Visualization combined with computational steering allows faster analyses of more sophisticated scenarios...This case study adequately demonstrate that the interaction of a complex model with steering and interactive visualization can extend the applicability of the modelling beyond research Barrass I. & Leng J (2011) In this article we will focus on D3 human visualization interaction support, or as mentioned earlier learn how to add computational steering capability to your visualization. Interacting with mouse events The mouse is the most common and popular human-computer interaction control found on most desktop and laptop computers. Even today, with multi-touch devices rising to dominance, touch events are typically still emulated into mouse events; therefore making application designed to interact via mouse usable through touches. In this recipe we will learn how to handle standard mouse events in D3. Getting ready Open your local copy of the following file in your web browser: https://github.com/NickQiZhu/d3-cookbook/blob/master/src/chapter10/mouse.html How to do it... In the following code example we will explore techniques of registering and handling mouse events in D3. Although, in this particular example we are only handling click and mousemove, the techniques utilized here can be applied easily to all other standard mouse events supported by modern browsers: <script type="text/javascript"> var r = 400; var svg = d3.select("body") .append("svg"); var positionLabel = svg.append("text") .attr("x", 10) .attr("y", 30); svg.on("mousemove", function () { //<-A printPosition(); }); function printPosition() { //<-B var position = d3.mouse(svg.node()); //<-C positionLabel.text(position); } svg.on("click", function () { //<-D for (var i = 1; i < 5; ++i) { var position = d3.mouse(svg.node()); var circle = svg.append("circle") .attr("cx", position[0]) .attr("cy", position[1]) .attr("r", 0) .style("stroke-width", 5 / (i)) .transition() .delay(Math.pow(i, 2.5) * 50) .duration(2000) .ease('quad-in') .attr("r", r) .style("stroke-opacity", 0) .each("end", function () { d3.select(this).remove(); }); } }); </script> This recipe generates the following interactive visualization: Mouse Interaction How it works... In D3, to register an event listener, we need to invoke the on function on a particular selection. The given event listener will be attached to all selected elements for the specified event (line A). 
The following code in this recipe attaches a mousemove event listener which displays the current mouse position (line B): svg.on("mousemove", function () { //<-A printPosition(); }); function printPosition() { //<-B var position = d3.mouse(svg.node()); //<-C positionLabel.text(position); } On line C we used d3.mouse function to obtain the current mouse position relative to the given container element. This function returns a two-element array [x, y]. After this we also registered an event listener for mouse click event on line D using the same on function: svg.on("click", function () { //<-D for (var i = 1; i < 5; ++i) { var position = d3.mouse(svg.node()); var circle = svg.append("circle") .attr("cx", position[0]) .attr("cy", position[1]) .attr("r", 0) .style("stroke-width", 5 / (i)) // <-E .transition() .delay(Math.pow(i, 2.5) * 50) // <-F .duration(2000) .ease('quad-in') .attr("r", r) .style("stroke-opacity", 0) .each("end", function () { d3.select(this).remove(); // <-G }); } }); Once again, we retrieved the current mouse position using d3.mouse function and then generated five concentric expanding circles to simulate the ripple effect. The ripple effect was simulated using geometrically increasing delay (line F) with decreasing stroke-width (line E). Finally when the transition effect is over, the circles were removed using transition end listener (line G). There's more... Although, we have only demonstrated listening on the click and mousemove events in this recipe, you can listen on any event that your browser supports through the on function. The following is a list of mouse events that are useful to know when building your interactive visualization: click: Dispatched when user clicks a mouse button dbclick: Dispatched when a mouse button is clicked twice mousedown: Dispatched when a mouse button is pressed mouseenter: Dispatched when mouse is moved onto the boundaries of an element or one of its descendent elements mouseleave: Dispatched when mouse is moved off of the boundaries of an element and all of its descendent elements mousemove: Dispatched when mouse is moved over an element mouseout: Dispatched when mouse is moved off of the boundaries of an element mouseover: Dispatched when mouse is moved onto the boundaries of an element mouseup: Dispatched when a mouse button is released over an element Interacting with a multi-touch device Today, with the proliferation of multi-touch devices, any visualization targeting mass consumption needs to worry about its interactability not only through the traditional pointing device, but through multi-touches and gestures as well. In this recipe we will explore touch support offered by D3 to see how it can be leveraged to generate some pretty interesting interaction with multi-touch capable devices. Getting ready Open your local copy of the following file in your web browser: https://github.com/NickQiZhu/d3-cookbook/blob/master/src/chapter10/touch.html. How to do it... In this recipe we will generate a progress-circle around the user's touch and once the progress is completed then a subsequent ripple effect will be triggered around the circle. 
However, if the user prematurely ends his/her touch, then we shall stop the progress-circle without generating the ripples: <script type="text/javascript"> var initR = 100, r = 400, thickness = 20; var svg = d3.select("body") .append("svg"); d3.select("body") .on("touchstart", touch) .on("touchend", touch); function touch() { d3.event.preventDefault(); var arc = d3.svg.arc() .outerRadius(initR) .innerRadius(initR - thickness); var g = svg.selectAll("g.touch") .data(d3.touches(svg.node()), function (d) { return d.identifier; }); g.enter() .append("g") .attr("class", "touch") .attr("transform", function (d) { return "translate(" + d[0] + "," + d[1] + ")"; }) .append("path") .attr("class", "arc") .transition().duration(2000) .attrTween("d", function (d) { var interpolate = d3.interpolate( {startAngle: 0, endAngle: 0}, {startAngle: 0, endAngle: 2 * Math.PI} ); return function (t) { return arc(interpolate(t)); }; }) .each("end", function (d) { if (complete(g)) ripples(d); g.remove(); }); g.exit().remove().each(function () { this.__stopped__ = true; }); } function complete(g) { return g.node().__stopped__ != true; } function ripples(position) { for (var i = 1; i < 5; ++i) { var circle = svg.append("circle") .attr("cx", position[0]) .attr("cy", position[1]) .attr("r", initR - (thickness / 2)) .style("stroke-width", thickness / (i)) .transition().delay(Math.pow(i, 2.5) * 50) .duration(2000).ease('quad-in') .attr("r", r) .style("stroke-opacity", 0) .each("end", function () { d3.select(this).remove(); }); } } </script> This recipe generates the following interactive visualization on a touch enabled device: Touch Interaction How it works... Event listener for touch events are registered through D3 selection's on function similar to what we have done with mouse events in the previous recipe: d3.select("body") .on("touchstart", touch) .on("touchend", touch); One crucial difference here is that we have registered our touch event listener on the body element instead of the svg element since with many OS and browsers there are default touch behaviors defined and we would like to override it with our custom implementation. This is done through the following function call: d3.event.preventDefault(); Once the touch event is triggered we retrieve multiple touch point data using the d3.touches function as illustrated by the following code snippet: var g = svg.selectAll("g.touch") .data(d3.touches(svg.node()), function (d) { return d.identifier; }); Instead of returning a two-element array as what d3.mouse function does, d3.touches returns an array of two-element arrays since there could be multiple touch points for each touch event. Each touch position array has data structure that looks like the following: Touch Position Array Other than the [x, y] position of the touch point each position array also carries an identifier to help you differentiate each touch point. We used this identifier here in this recipe to establish object constancy. 
Once the touch data is bound to the selection the progress circle was generated for each touch around the user's finger: g.enter() .append("g") .attr("class", "touch") .attr("transform", function (d) { return "translate(" + d[0] + "," + d[1] + ")"; }) .append("path") .attr("class", "arc") .transition().duration(2000).ease('linear') .attrTween("d", function (d) { // <-A var interpolate = d3.interpolate( {startAngle: 0, endAngle: 0}, {startAngle: 0, endAngle: 2 * Math.PI} ); return function (t) { return arc(interpolate(t)); }; }) .each("end", function (d) { // <-B if (complete(g)) ripples(d); g.remove(); }); This is done through a standard arc transition with attribute tweening (line A). Once the transition is over if the progress-circle has not yet been canceled by the user then a ripple effect similar to what we have done in the previous recipe was generated on line B. Since we have registered the same event listener touch function on both touchstart and touchend events, we can use the following lines to remove progress-circle and also set a flag to indicate that this progress circle has been stopped prematurely: g.exit().remove().each(function () { this.__stopped__ = true; }); We need to set this stateful flag since there is no way to cancel a transition once it is started; hence, even after removing the progress-circle element from the DOM tree the transition will still complete and trigger line B. There's more... We have demonstrated touch interaction through the touchstart and touchend events; however, you can use the same pattern to handle any other touch events supported by your browser. The following list contains the proposed touch event types recommended by W3C: touchstart: Dispatched when the user places a touch point on the touch surface touchend: Dispatched when the user removes a touch point from the touch surface touchmove: Dispatched when the user moves a touch point along the touch surface touchcancel: Dispatched when a touch point has been disrupted in an implementation-specific manner
Installing MariaDB on Windows and Mac OS X

Packt
22 Oct 2013
5 min read
(For more resources related to this topic, see here.) Installing MariaDB on Windows There are two types of MariaDB downloads for Windows: ZIP files and MSI packages. As mentioned previously, the ZIP files are similar to the Linux binary .tar.gz files and they are only recommended for experts who know they want it. If we are starting out with MariaDB on Windows, it is recommended to use the MSI packages. Here are the steps to do just that: Download the MSI package from https://downloads.mariadb.org/. First click on the series we want (stable, most likely), then locate the Windows 64-bit or Windows 32-bit MSI package. For most computers, the 64-bit MSI package is probably the one that we want, especially if we have more than 4 Gigabytes of RAM. If you're unsure, the 32-bit package will work on both 32-bit and 64-bit computers. Once the download has finished, launch the MSI installer by double-clicking on it. Depending on our settings we may be prompted to launch it automatically. The installer will walk us through installing MariaDB. If we are installing MariaDB for the first time, we must be sure to set the root user password when prompted. Unless we need to, don't enable access from remote machines for the root user or create an anonymous account. The Install as service box is checked by default, and it's recommended to keep it that way so that MariaDB starts up when the computer is booted. The Service Name textbox has the default value MySQL for compatibility reasons, but we can rename it if we like. Check the Enable networking option, if you need to access the databases from a different computer. If we don't it's best to uncheck this box. As with the service name, there is a default TCP port number (3306) which you can change if you want to, but it is usually best to stick with the default unless there is a specific reason not to. The Optimize for transactions checkbox is checked by default. This setting can be left as is. There are other settings that we can make through the installer. All of them can be changed later by editing the my.ini file, so we don't have to worry about setting them right away. If our version of Windows has User Account Control enabled, there will be a pop-up during the installation asking if we want to allow the installer to install MariaDB. For obvious reasons, click on Yes. After the installation completes, there will be a MariaDB folder added to the start menu. Under this will be various links, including one to the MySQL Client. If we already have an older version of MariaDB or MySQL running on our machine, we will be prompted to upgrade the data files for the version we are installing, it is highly recommended that we do so. Eventually we will be presented with a dialog box with an installation complete message and a Finish button. If you got this far, congratulations! MariaDB is now installed and running on your Windows-based computer. Click on Finish to quit the installer. Installing MariaDB on Mac OS X One of the easiest ways to install MariaDB on Mac OS X is to use Homebrew, which is an Open Source package manager for that platform. Before you can install it, however, you need to prepare your system. The first thing you need to do is install Xcode; Apple's integrated development environment. It's available for free in the Mac App Store. Once Xcode is installed you can install brew. 
Full instructions are available on the Brew Project website at http://mxcl.github.io/homebrew/ but the basic procedure is to open a terminal and run the following command: ruby -e "$(curl -fsSL https://raw.github.com/mxcl/homebrew/go)" This command downloads the installer and runs it. Once the initial installation is completed, we run the following command to make sure everything is set up properly: brew doctor The output of the doctor command will tell us of any potential issues along with suggestions for how to fix them. Once brew is working properly, you can install MariaDB with the following commands: brew update brew install mariadb Unlike on Linux and Windows, brew does not automatically set up or offer to set up MariaDB to start automatically when your system boots or start MariaDB after installation. To do so, we perform the following command: ln -sfv /usr/local/opt/mariadb/*.plist ~/Library/LaunchAgents launchctl load ~/Library/LaunchAgents/homebrew.mxcl.mariadb.plist To stop MariaDB, we use the unload command as follows: launchctl unload ~/Library/LaunchAgents/homebrew.mxcl.mariadb.plist Summary In this article, we learned how to install MariaDB on Windows and Mac OS X. Resources for Article: Further resources on this subject: Ruby with MongoDB for Web Development [Article] So, what is MongoDB? [Article] Schemas and Models [Article]
Drag

Packt
21 Oct 2013
4 min read
(For more resources related to this topic, see here.) I can't think of a better dragging demonstration than animating with the parallax illusion. The illusion works by having several keyframes rendered in vertical slices and dragging a screen over them to create an animated thingamabob. Drawing the lines by hand would be tedious, so we're using an image Marco Kuiper created in Photoshop. I asked on Twitter and he said we can use the image, if we check out his other work at marcofolio.net. You can also get the image in the examples repository at https://raw.github.com/Swizec/d3.js-book-examples/master/ch4/parallax_base.png. We need somewhere to put the parallax: var width = 1200, height = 450, svg = d3.select('#graph') .append('svg') .attr({width: width, height: height}); We'll use SVG's native support for embedding bitmaps to insert parallax_base.png into the page: svg.append('image') .attr({'xlink:href': 'parallax_base.png', width: width, height: height}); The image element's magic stems from its xlink:href attribute. It understands links and even lets us embed images to create self-contained SVGs. To use that, you would prepend an image MIME type to a base64 encoded representation of the image. For instance, the following line is the smallest embedded version of a spacer GIF. Don't worry if you don't know what a spacer GIF is; they were useful up to about 2005. data:image/gif;base64,R0lGODlhAQABAID/ AMDAwAAAACH5BAEAAAAALAAAAAABAAEAAAICRAEAOw== Anyway, now that we have the animation base, we need a screen that can be dragged. It's going to be a bunch of carefully calibrated vertical lines: var screen_width = 900, lines = d3.range(screen_width/6), x = d3.scale.ordinal().domain(lines).rangeBands([0, screen_width]); We'll base the screen off an array of numbers (lines). Since line thickness and density are very important, we divide screen_width by 6—five pixels for a line and one for spacing. Make sure the value of screen_width is a multiple of 6; otherwise anti-aliasing ruins the effect. The x scale will help us place the lines evenly: svg.append('g') .selectAll('line') .data(lines) .enter() .append('line') .style('shape-rendering', 'crispEdges') .attr({stroke: 'black', 'stroke-width': x.rangeBand()-1, x1: function (d) { return x(d); }, y1: 0, x2: function (d) { return x(d); }, y2: height}); There's nothing particularly interesting here, just stuff you already know. The code goes through the array and draws a new vertical line for each entry. We made absolutely certain there won't be any anti-aliasing by setting shape-rendering to crispEdges. Time to define and activate a dragging behavior for our group of lines: var drag = d3.behavior.drag().origin(Object).on('drag', function () {d3.select(this).attr('transform', 'translate('+d3.event.x+', 0)').datum({x: d3.event.x, y: 0});}); We created the behavior with d3.behavior.drag(), defined a .origin() accessor, and specified what happens on drag. The behavior automatically translates touch and mouse events to the higher-level drag event. How cool is that! We need to give the behavior an origin so it knows how to calculate positions relatively; otherwise, the current position is always set to the mouse cursor and objects jump around. It's terrible. Object is the identity function for elements and assumes a datum with x and y coordinates. The heavy lifting happens inside the drag listener. We get the screen's new position from d3.event.x, move the screen there, and update the attached .datum() method. 
All that's left to do is to call drag and make sure to set the attached datum to the current position: svg.select('g') .datum({x: 0, y: 0}) .call(drag); It looks solid now! Try dragging the screen at different speeds. The parallax effect doesn't work very well on a retina display because the base image gets resized and our screen loses calibration. Summary In this article, we looked into the drag behavior of d3. All this could be done with just click events, but I heartily recommend d3's behaviors module. What makes behaviors so useful is that they automatically create the relevant event listeners and let you work at a higher level of abstraction. Resources for Article: Further resources on this subject: Visualizing Productions Ahead of Time with Celtx [Article] Custom Data Readers in Ext JS [Article] The Login Page using Ext JS [Article]
Using the Spark Shell

Packt
18 Oct 2013
5 min read
(For more resources related to this topic, see here.) Loading a simple text file When running a Spark shell and connecting to an existing cluster, you should see something specifying the app ID like Connected to Spark cluster with app ID app-20130330015119-0001. The app ID will match the application entry as shown in the web UI under running applications (by default, it would be viewable on port 8080). You can start by downloading a dataset to use for some experimentation. There are a number of datasets that are put together for The Elements of Statistical Learning, which are in a very convenient form for use. Grab the spam dataset using the following command: wget http://www-stat.stanford.edu/~tibs/ElemStatLearn/datasets/spam.data Now load it as a text file into Spark with the following command inside your Spark shell: scala> val inFile = sc.textFile("./spam.data") This loads the spam.data file into Spark with each line being a separate entry in the RDD (Resilient Distributed Datasets). Note that if you've connected to a Spark master, it's possible that it will attempt to load the file on one of the different machines in the cluster, so make sure it's available on all the cluster machines. In general, in future you will want to put your data in HDFS, S3, or similar file systems to avoid this problem. In a local mode, you can just load the file directly, for example, sc.textFile([filepah]). To make a file available across all the machines, you can also use the addFile function on the SparkContext by writing the following code: scala> import spark.SparkFiles; scala> val file = sc.addFile("spam.data") scala> val inFile = sc.textFile(SparkFiles.get("spam.data")) Just like most shells, the Spark shell has a command history.You can press the up arrow key to get to the previous commands. Getting tired of typing or not sure what method you want to call on an object? Press Tab, and the Spark shell will autocomplete the line of code as best as it can. For this example, the RDD with each line as an individual string isn't very useful, as our data input is actually represented as space-separated numerical information. Map over the RDD, and quickly convert it to a usable format (note that _.toDouble is the same as x => x.toDouble): scala> val nums = inFile.map(x => x.split(' ').map(_.toDouble)) Verify that this is what we want by inspecting some elements in the nums RDD and comparing them against the original string RDD. Take a look at the first element of each RDD by calling .first() on the RDDs: scala> inFile.first() [...] res2: String = 0 0.64 0.64 0 0.32 0 0 0 0 0 0 0.64 0 0 0 0.32 0 1.29 1.93 0 0.96 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0.778 0 0 3.756 61 278 1 scala> nums.first() [...] res3: Array[Double] = Array(0.0, 0.64, 0.64, 0.0, 0.32, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.64, 0.0, 0.0, 0.0, 0.32, 0.0, 1.29, 1.93, 0.0, 0.96, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.778, 0.0, 0.0, 3.756, 61.0, 278.0, 1.0) Using the Spark shell to run logistic regression When you run a command and have not specified a left-hand side (that is, leaving out the val x of val x = y), the Spark shell will print the value along with res[number]. The res[number] function can be used as if we had written val res[number] = y.Now that you have the data in a more usable format, try to do something cool with it! 
Use Spark to run logistic regression over the dataset as follows: scala> import spark.util.Vectorimport spark.util.Vectorscala> case class DataPoint(x: Vector, y: Double)defined class DataPointscala> def parsePoint(x: Array[Double]): DataPoint = {DataPoint(new Vector(x.slice(0,x.size-2)) , x(x.size-1))}parsePoint: (x: Array[Double])this.DataPointscala> val points = nums.map(parsePoint(_))points: spark.RDD[this.DataPoint] = MappedRDD[3] at map at<console>:24scala> import java.util.Randomimport java.util.Randomscala> val rand = new Random(53)rand: java.util.Random = java.util.Random@3f4c24scala> var w = Vector(nums.first.size-2, _ => rand.nextDouble)13/03/31 00:57:30 INFO spark.SparkContext: Starting job: first at<console>:20...13/03/31 00:57:30 INFO spark.SparkContext: Job finished: first at<console>:20, took 0.01272858 sw: spark.util.Vector = (0.7290865701603526, 0.8009687428076777,0.6136632797111822, 0.9783178194773176, 0.3719683631485643,0.46409291255379836, 0.5340172959927323, 0.04034252433669905,0.3074428389716637, 0.8537414030626244, 0.8415816118493813,0.719935849109521, 0.2431646830671812, 0.17139348575456848,0.5005137792223062, 0.8915164469396641, 0.7679331873447098,0.7887571495335223, 0.7263187438977023, 0.40877063468941244,0.7794519914671199, 0.1651264689613885, 0.1807006937030201,0.3227972103818231, 0.2777324549716147, 0.20466985600105037,0.5823059390134582, 0.4489508737465665, 0.44030858771499415,0.6419366305419459, 0.5191533842209496, 0.43170678028084863,0.9237523536173182, 0.5175019655845213, 0.47999523211827544,0.25862648071479444, 0.020548000101787922, 0.18555332739714137, 0....scala> val iterations = 100iterations: Int = 100scala> import scala.math._scala> for (i <- 1 to iterations) {val gradient = points.map(p =>(1 / (1 + exp(-p.y*(w dot p.x))) - 1) * p.y * p.x).reduce(_ + _)w -= gradient}[....]scala> wres27: spark.util.Vector = (0.2912515190246098, 1.05257972144256,1.1620192443948825, 0.764385365541841, 1.3340446477767611,0.6142105091995632, 0.8561985593740342, 0.7221556020229336,0.40692442223198366, 0.8025693176035453, 0.7013618380649754,0.943828424041885, 0.4009868306348856, 0.6287356973527756,0.3675755379524898, 1.2488466496117185, 0.8557220216380228,0.7633511642942988, 6.389181646047163, 1.43344096405385,1.729216408954399, 0.4079709812689015, 0.3706358251228279,0.8683036382227542, 0.36992902312625897, 0.3918455398419239,0.2840295056632881, 0.7757126171768894, 0.4564171647415838,0.6960856181900357, 0.6556402580635656, 0.060307680034745986,0.31278587054264356, 0.9273189009376189, 0.0538302050535121,0.545536066902774, 0.9298009485403773, 0.922750704590723,0.072339496591 If things went well, you just used Spark to run logistic regression. Awsome! We have just done a number of things: we have defined a class, we have created an RDD, and we have also created a function. As you can see the Spark shell is quite powerful. Much of the power comes from it being based on the Scala REPL (the Scala interactive shell), so it inherits all the power of the Scala REPL (Read-Evaluate-Print Loop). That being said, most of the time you will probably want to work with a more traditionally compiled code rather than working in the REPL environment. Summary In this article, you have learned how to load our data and how to use Spark to run logistic regression. Resources for Article: Further resources on this subject: Python Data Persistence using MySQL Part II: Moving Data Processing to the Data [Article] Configuring Apache and Nginx [Article] Advanced Hadoop MapReduce Administration [Article]
KNIME terminologies

Packt
17 Oct 2013
12 min read
(For more resources related to this topic, see here.) Organizing your work In KNIME, you store your files in a workspace. When KNIME starts, you can specify which workspace you want to use. The workspaces are not just for fi les; they also contain settings and logs. It might be a good idea to set up an empty workspace, and instead of customizing a new one each time, you start a new project; you just copy (extract) it to the place you want to use, and open it with KNIME (or switch to it). The workspace can contain workflow groups (sometimes referred to as workflow set) or workflows. The groups are like folders in a filesystem that can help organize your work flows. Workflows might be your programs and processes that describe the steps which should be applied to load, analyze, visualize, or transform the data you have, something like an execution plan. Work flows contain the executable parts, which can be edited using the workflow editor, which in turn is similar to a canvas. Both the groups and the workflows might have metadata associated with them, such as the creation date, author, or comments (even the workspace can contain such information). Workflows might contain nodes, meta nodes, connections, work flow variables (or just flow variables), work flow credentials, and annotations besides the previously introduced metadata. Workflow credentials is the place where you can store your login name and password for different connections. These are kept safe, but you can access them easily. It is safe to share a work flow if you use only the work flow credentials for sensitive information (although the user name will be saved). Nodes Each node has a type, which identifies the algorithm associated with the node. You can think of the type as a template; it specifies how to execute for different inputs and parameters, and what should be the result. The nodes are similar to functions (or operators) in programs. The node types are organized according to the following general types, which specify the color and the shape of the node for easier understanding of work flows. The general types are shown in the following image: Example representation of different general types of nodes The nodes are organized in categories; this way, it is easier to find them. Each node has a node documentation that describes what can be achieved using that type of node, possibly use cases or tips. It also contains information about parameters and possible input ports and output ports. (Sometimes the last two are called inports and outports, or even in-ports and out-ports.) Parameters are usually single values (for example, filename, column name, text, number, date, and so on) associated with an identifier; although, having an array of texts is also possible. These are the settings that influence the execution of a node. There are other things that can modify the results, such as work flow variables or any other state observable from KNIME. Node lifecycle Nodes can have any of the following states: Misconfigured (also called IDLE) Configured Queued for execution Running Executed There are possible warnings in most of the states, which might be important; you can read them by moving the mouse pointer over the triangle sign. Meta nodes Meta nodes look like normal nodes at first sight, although they contain other nodes (or meta nodes) inside them. The associated context of the node might give options for special execution. Usually they help to keep your work flow organized and less scary at first sight. 
A user-defined meta node Ports The ports are where data in some form flows through from one node to another. The most common port type is the data table. These are represented by white triangles. The input ports (where data is expected to get into) are on the left-hand side of the nodes, but the output ports (where the created data comes out) are on the right-hand side of the nodes. You cannot mix and match the different kinds of ports. It is also not allowed to connect a node's output to its input or create circles in the graph of nodes; you have to create a loop if you want to achieve something similar to that. Currently, all ports in the standard KNIME distribution are presenting the results only when they are ready; although the infrastructure already allows other strategies, such as streaming, where you can view partial results too. The ports might contain information about the data even if their nodes are not yet executed. Data tables These are the most common form of port types. It is similar to an Excel sheet or a data table in the database. Sometimes these are named example set or data frame. Each data table has a name, a structure (or schema, a table specification), and possibly properties. The structure describes the data present in the table by storing some properties about the columns. In other contexts, columns may be called attributes, variables, or features. A column can only contain data of a single type (but the types form a hierarchy from the top and can be of any type). Each column has a type , a name, and a position within the table. Besides these, they might also contain further information, for example, statistics about the contained values or color/shape information for visual representation. There is always something in the data tables that looks like a column, even if it is not really a column. This is where the identifiers for the rows are held, that is, the row keys. There can be multiple rows in the table, just like in most of the other data handling software (similar to observations or records). The row keys are unique (textual) identifiers within the table. They have multiple roles besides that; for example, usually row keys are the labels when showing the data, so always try to find user-friendly identifiers for the rows. At the intersection of rows and columns are the (data) cells , similar to the data found in Excel sheets or in database tables (whereas in other contexts, it might refer to the data similar to values or fi elds). There is a special cell that represents the missing values. The missing value is usually represented as a question mark (?). If you have to represent more information about the missing data, you should consider adding a new column for each column, where this requirement is present, and add that information; however, in the original column, you just declare it as missing. There are multiple cell types in KNIME, and the following table contains the most important ones: Cell type Symbol Remarks Int cell I This represents integral numbers in the range from -231 to 231-1 (approximately 2E9). Long cell L This represents larger integral numbers, and their range is from -263 to 263-1 (approximately 9E18). Double cell D This represents real numbers with double (64 bit) floating point precision. String cell S This represents unstructured textual information. Date and time cell  calendar & clock With these cells, you can store either date or time. 
Boolean cell B This represents logical values from the Boolean algebra (true or false); note that you cannot exclude the missing value. Xml cell XML This cell is ideal for structured data. Set cell {...} This cell can contain multiple cells (so a collection cell type) of the same type (no duplication or order of values are preserved). List cell {...} This is also a collection cell type, but this keeps the order and does not filter out the duplicates. Unknown type cell ? When you have different type of cells in a column (or in a collection cell), this is the generic cell type used. There are other cell types, for example, the ones for chemical data structures (SMILES, CDK, and so on), for images (SVG cell, PNG cell, and so on), or for documents. This is extensible, so the other extension can defi ne custom data cell types. Note that any data cell type can contain the missing value. Port view The port view allows you to get information about the content of the port. Complete content is available only after the node is executed, but usually some information is available even before that. This is very handy when you are constructing the workflow. You can check the structure of the data even if you will usually use node view in the later stages of data exploration during work flow construction. Flow variables Workflows can contain flow variables, which can act as a loop counter, a column name, or even an expression for a node parameter. These are not constants, but you can introduce them to the workspace level as well. This is a powerful feature; once you master it, you can create workflows you thought were impossible to create using KNIME. A typical use case for them is to assign roles to different columns (by assigning the column names to the role name as a flow variable) and use this information for node configurations. If your work flow has some important parameters that should be adjusted or set before each execution (for example a filename), this is an ideal option to provide these to the user; use the flow variables instead of a preset value that is hard to find. As the automatic generation of figures gets more support, the flow variables will find use there too. Iterating a range of values or files in a folder should also be done using flow variables. Node views Nodes can also have node views associated with them. These help to visualize your data or a model, show the node's internal state, or select a subset of the data using the HiLite feature. An important feature exists that a node's views can be opened multiple times. This allows us to compare different options of visualization without taking screenshots or having to remember what was it like, and how you reached that state. You can export these views to image fi les. HiLite The HiLite feature of KNIME is quite unique. Its purpose is to help identify a group of data that is important or interesting for some reason. This is related to the node views, as this selection is only visible in nodes with node views (for example, it is not available in port views). Support for data high lighting is optional, because not all views support this feature. The HiLite selection data is based on row keys, and this information can be lost when the row keys change. For this reason, some of the nonview nodes also have an option to keep this information propagated to the adjacent nodes. On the other hand, when the row keys remain the same, the marks in different views point to the same data rows. 
It is very important that the HiLite selection is only visible in a well-connected subgraph of work flow. It can also be available for non-executed nodes (for example, the HiLite Collector node). The HiLite information is not saved in the workflow, so you should use the HiLite filter node once you are satisfied with your selection to save that state, and you can reset that HiLite later. Eclipse concepts Because KNIME is based on the Eclipse platform (http://eclipse.org), it inherits some of its features too. One of them is the workspace model with projects (work flows in case of KNIME), and another important one is modularity. You can extend KNIME's functionality using plugins and features; sometimes these are named KNIME extensions. The extensions are distributed through update sites , which allow you to install updates or install new software from a local folder, a zip fi le, or an Internet location. The help system, the update mechanism (with proxy settings), or the fi le search feature are also provided by Eclipse. Eclipse's main window is the workbench. The most typical features are the perspectives and the views. Perspectives are about how the parts of the UI are arranged, while these independently configurable parts are the views. These views have nothing to do with node views or port views. The Eclipse/KNIME views can be detached, closed, moved around, minimized, or maximized within the window. Usually each view can have at most one instance visible (the Console view is an exception). KNIME does not support alternative perspectives (arrangements of views), so it is not important for you; however, you can still reset it to its original state. It might be important to know that Eclipse keeps the contents of fi les and folders in a special form. If you generate files, you should refresh the content to load it from the filesystem. You can do this from the context menu, but it can also be automated if you prefer that option. Preferences The preferences are associated with the workspace you use. This is where most of the Eclipse and KNIME settings are to be specified. The node parameters are stored in the workflows (which are also within the workspace), and these parameters are not considered to be preferences. Logging KNIME has something to tell you about almost every action. Usually, you do not care to read these logs, you do not need to do so. For this reason, KNIME dispatches these messages using different channels. There is a file in the workplace that collects all the messages by default with considerable details. There is even a KNIME/Eclipse view named Console, which contains only the most important details initially. Summary In this article we went through the most important terminologies and concepts you will use when using KNIME. Resources for Article : Further resources on this subject: Visualizing Productions Ahead of Time with Celtx [Article] N-Way Replication in Oracle 11g Streams: Part 1 [Article] Sage: 3D Data Plotting [Article]