
How-To Tutorials - Data


KNIME terminologies

Packt
17 Oct 2013
12 min read
Organizing your work

In KNIME, you store your files in a workspace. When KNIME starts, you can specify which workspace you want to use. Workspaces are not just for files; they also contain settings and logs. It might be a good idea to set up an empty workspace and, instead of customizing a new one each time you start a new project, simply copy (extract) it to the place you want to use and open it with KNIME (or switch to it).

The workspace can contain workflow groups (sometimes referred to as workflow sets) or workflows. Groups are like folders in a filesystem that help you organize your workflows. Workflows are your programs and processes that describe the steps to be applied to load, analyze, visualize, or transform your data, something like an execution plan. Workflows contain the executable parts, which can be edited using the workflow editor, which in turn is similar to a canvas. Both groups and workflows can have metadata associated with them, such as the creation date, author, or comments (even the workspace can contain such information).

Besides the previously introduced metadata, workflows can contain nodes, meta nodes, connections, workflow variables (or just flow variables), workflow credentials, and annotations. Workflow credentials are where you can store your login name and password for different connections. These are kept safe, but you can access them easily. It is safe to share a workflow if you use only the workflow credentials for sensitive information (although the user name will be saved).

Nodes

Each node has a type, which identifies the algorithm associated with the node. You can think of the type as a template; it specifies how to execute for different inputs and parameters, and what the result should be. Nodes are similar to functions (or operators) in programs. The node types are organized according to the following general types, which determine the color and the shape of the node for easier understanding of workflows. The general types are shown in the following image:

Example representation of the different general types of nodes

Nodes are organized in categories, which makes them easier to find. Each node has node documentation that describes what can be achieved using that type of node, possibly with use cases or tips. It also contains information about the parameters and the possible input ports and output ports. (Sometimes the last two are called inports and outports, or even in-ports and out-ports.)

Parameters are usually single values (for example, a filename, column name, text, number, or date) associated with an identifier, although having an array of texts is also possible. These are the settings that influence the execution of a node. There are other things that can modify the results, such as workflow variables or any other state observable from KNIME.

Node lifecycle

Nodes can be in any of the following states:

- Misconfigured (also called IDLE)
- Configured
- Queued for execution
- Running
- Executed

Warnings are possible in most of these states and might be important; you can read them by moving the mouse pointer over the triangle sign.

Meta nodes

Meta nodes look like normal nodes at first sight, although they contain other nodes (or meta nodes) inside them. The associated context of the node might give options for special execution. Usually they help keep your workflow organized and less scary at first sight.
A user-defined meta node

Ports

The ports are where data, in some form, flows from one node to another. The most common port type is the data table, represented by white triangles. The input ports (where data is expected to come in) are on the left-hand side of a node, while the output ports (where the created data comes out) are on the right-hand side. You cannot mix and match the different kinds of ports. It is also not allowed to connect a node's output to its own input or to create circles in the graph of nodes; you have to create a loop if you want to achieve something similar. Currently, all ports in the standard KNIME distribution present their results only when they are ready, although the infrastructure already allows other strategies, such as streaming, where you can view partial results too. The ports might contain information about the data even if their nodes have not yet been executed.

Data tables

These are the most common port type. A data table is similar to an Excel sheet or a table in a database. Sometimes these are named example set or data frame. Each data table has a name, a structure (or schema, a table specification), and possibly properties. The structure describes the data present in the table by storing some properties about the columns. In other contexts, columns may be called attributes, variables, or features. A column can only contain data of a single type (but the types form a hierarchy from the top and can be of any type). Each column has a type, a name, and a position within the table. Besides these, columns might also contain further information, for example, statistics about the contained values or color/shape information for visual representation.

There is always something in a data table that looks like a column, even if it is not really a column. This is where the identifiers for the rows are held, that is, the row keys. There can be multiple rows in the table, just like in most other data handling software (rows are similar to observations or records). The row keys are unique (textual) identifiers within the table. They have multiple roles besides that; for example, row keys are usually the labels when showing the data, so always try to find user-friendly identifiers for the rows.

At the intersection of rows and columns are the (data) cells, similar to the data found in Excel sheets or in database tables (in other contexts, this might refer to data similar to values or fields). There is a special cell that represents missing values, usually shown as a question mark (?). If you have to represent more information about the missing data, you should consider adding a new column for each column where this requirement is present and add that information there; in the original column, you just declare the value as missing.

There are multiple cell types in KNIME; the following list contains the most important ones (cell type, symbol, remarks):

- Int cell (I): Represents integral numbers in the range from -2^31 to 2^31-1 (approximately 2E9).
- Long cell (L): Represents larger integral numbers, in the range from -2^63 to 2^63-1 (approximately 9E18).
- Double cell (D): Represents real numbers with double (64-bit) floating point precision.
- String cell (S): Represents unstructured textual information.
- Date and time cell (calendar and clock symbol): With these cells, you can store either a date or a time.
- Boolean cell (B): Represents logical values from Boolean algebra (true or false); note that you cannot exclude the missing value.
- Xml cell (XML): This cell is ideal for structured data.
- Set cell ({...}): This cell can contain multiple cells of the same type (so it is a collection cell type); no duplication or order of values is preserved.
- List cell ({...}): This is also a collection cell type, but it keeps the order and does not filter out duplicates.
- Unknown type cell (?): When you have different types of cells in a column (or in a collection cell), this generic cell type is used.

There are other cell types, for example, the ones for chemical data structures (SMILES, CDK, and so on), for images (SVG cell, PNG cell, and so on), or for documents. This is extensible, so other extensions can define custom data cell types. Note that any data cell type can contain the missing value.

Port view

The port view allows you to get information about the content of a port. The complete content is available only after the node is executed, but usually some information is available even before that. This is very handy when you are constructing the workflow. You can check the structure of the data, even though you will usually use node views in the later stages of data exploration during workflow construction.

Flow variables

Workflows can contain flow variables, which can act as a loop counter, a column name, or even an expression for a node parameter. These are not constants, and you can introduce them at the workspace level as well. This is a powerful feature; once you master it, you can create workflows you thought were impossible to create using KNIME. A typical use case for them is to assign roles to different columns (by assigning the column names to the role name as a flow variable) and use this information for node configurations. If your workflow has some important parameters that should be adjusted or set before each execution (for example, a filename), this is an ideal option to provide them to the user; use flow variables instead of a preset value that is hard to find. As the automatic generation of figures gets more support, flow variables will find use there too. Iterating over a range of values or over the files in a folder should also be done using flow variables.

Node views

Nodes can also have node views associated with them. These help you visualize your data or a model, show the node's internal state, or select a subset of the data using the HiLite feature. An important feature is that a node's views can be opened multiple times. This allows us to compare different visualization options without taking screenshots or having to remember what it was like and how you reached that state. You can export these views to image files.

HiLite

The HiLite feature of KNIME is quite unique. Its purpose is to help identify a group of data that is important or interesting for some reason. This is related to the node views, as this selection is only visible in nodes with node views (for example, it is not available in port views). Support for data highlighting is optional, because not all views support this feature. The HiLite selection is based on row keys, and this information can be lost when the row keys change. For this reason, some of the non-view nodes also have an option to keep this information propagated to the adjacent nodes. On the other hand, when the row keys remain the same, the marks in different views point to the same data rows.
It is very important to note that the HiLite selection is only visible within a well-connected subgraph of the workflow. It can also be available for non-executed nodes (for example, the HiLite Collector node). The HiLite information is not saved in the workflow, so you should use the HiLite filter node once you are satisfied with your selection to save that state; you can reset the HiLite later.

Eclipse concepts

Because KNIME is based on the Eclipse platform (http://eclipse.org), it inherits some of its features too. One of them is the workspace model with projects (workflows in the case of KNIME), and another important one is modularity. You can extend KNIME's functionality using plugins and features; sometimes these are named KNIME extensions. The extensions are distributed through update sites, which allow you to install updates or new software from a local folder, a zip file, or an Internet location. The help system, the update mechanism (with proxy settings), and the file search feature are also provided by Eclipse.

Eclipse's main window is the workbench. The most typical features are the perspectives and the views. Perspectives describe how the parts of the UI are arranged, while these independently configurable parts are the views. These views have nothing to do with node views or port views. The Eclipse/KNIME views can be detached, closed, moved around, minimized, or maximized within the window. Usually each view can have at most one instance visible (the Console view is an exception). KNIME does not support alternative perspectives (arrangements of views), so this is not something you need to worry about; however, you can still reset the perspective to its original state.

It might be important to know that Eclipse keeps the contents of files and folders in a special form. If you generate files, you should refresh the content to load it from the filesystem. You can do this from the context menu, but it can also be automated if you prefer that option.

Preferences

The preferences are associated with the workspace you use. This is where most of the Eclipse and KNIME settings are specified. The node parameters are stored in the workflows (which are also within the workspace), and these parameters are not considered to be preferences.

Logging

KNIME has something to tell you about almost every action. Usually you do not care to read these logs, and you do not need to. For this reason, KNIME dispatches these messages using different channels. There is a file in the workspace that collects all the messages by default with considerable detail. There is also a KNIME/Eclipse view named Console, which initially contains only the most important messages.

Summary

In this article we went through the most important terminologies and concepts you will use when working with KNIME.

Resources for Article:

Further resources on this subject:
- Visualizing Productions Ahead of Time with Celtx
- N-Way Replication in Oracle 11g Streams: Part 1
- Sage: 3D Data Plotting


Administrating Solr

Packt
11 Oct 2013
10 min read
Query nesting

You might come across situations wherein you need to nest a query within another query in order to search for a specific keyword or phrase. Let us imagine that you want to run a query using the standard request handler, but you need to embed a query that is parsed by the dismax query parser inside it. Isn't that interesting? We will show you how to do it. Our example data looks like this:

    <add>
      <doc>
        <field name="id">1</field>
        <field name="title">Reviewed solrcook book</field>
      </doc>
      <doc>
        <field name="id">2</field>
        <field name="title">Some book reviewed</field>
      </doc>
      <doc>
        <field name="id">3</field>
        <field name="title">Another reviewed little book</field>
      </doc>
    </add>

Here, we are going to use the standard query parser to support the Lucene query syntax, but we would like to boost phrases using the dismax query parser. At first it seems impossible to achieve, but don't worry, we will handle it. Let us suppose that we want to find books having the words reviewed and book in their title field, and we would like to boost the reviewed book phrase by 10. Here we go with the query:

    http://localhost:8080/solr/select?q=reviewed+AND+book+AND+_query_:"{!dismax qf=title pf=title^10 v=$qq}"&qq=reviewed+book

The results of the preceding query should look like:

    <?xml version="1.0" encoding="UTF-8"?>
    <response>
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">2</int>
        <lst name="params">
          <str name="fl">*,score</str>
          <str name="qq">book reviewed</str>
          <str name="q">book AND reviewed AND _query_:"{!dismax qf=title pf=title^10 v=$qq}"</str>
        </lst>
      </lst>
      <result name="response" numFound="3" start="0" maxScore="0.77966106">
        <doc>
          <float name="score">0.77966106</float>
          <str name="id">2</str>
          <str name="title">Some book reviewed</str>
        </doc>
        <doc>
          <float name="score">0.07087828</float>
          <str name="id">1</str>
          <str name="title">Reviewed solrcook book</str>
        </doc>
        <doc>
          <float name="score">0.07087828</float>
          <str name="id">3</str>
          <str name="title">Another reviewed little book</str>
        </doc>
      </result>
    </response>

Let us focus on the query. The q parameter is built of two parts connected with the AND operator. The first one, reviewed+AND+book, is just a usual query with the logical operator AND defined. The second part of the query starts with a strange-looking expression, _query_. This expression tells Solr that another query should be made that will affect the results list. We then see the expression stating that Solr should use the dismax query parser (the !dismax part) along with the parameters that will be passed to the parser (qf and pf). The v parameter is an abbreviation for value, and it is used to pass the value of the q parameter (in our case, reviewed+book is passed to the dismax query parser). And that's it! We get the search results we expected.
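The same request can also be issued programmatically. The following is a minimal sketch, not part of the original article, assuming a SolrJ client of roughly the same era (SolrJ 4.x, where the HTTP client class is HttpSolrServer; older 3.x releases call it CommonsHttpSolrServer). The base URL and field names simply mirror the example above.

    // Sketch: running the nested dismax query through SolrJ instead of a raw URL.
    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.common.SolrDocument;

    public class NestedQueryExample {
        public static void main(String[] args) throws Exception {
            HttpSolrServer server = new HttpSolrServer("http://localhost:8080/solr");

            SolrQuery query = new SolrQuery();
            // The outer query, parsed by the standard (lucene) query parser.
            query.setQuery("reviewed AND book AND _query_:\"{!dismax qf=title pf=title^10 v=$qq}\"");
            // The phrase that the embedded dismax parser will boost.
            query.set("qq", "reviewed book");
            query.set("fl", "*,score");

            QueryResponse response = server.query(query);
            for (SolrDocument doc : response.getResults()) {
                System.out.println(doc.getFieldValue("id") + " : " + doc.getFieldValue("title"));
            }
        }
    }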
Stats.jsp

From the admin interface, when you click on the Statistics link, you receive a web page of information about the specific index. This information is actually served to the browser as XML linked to an embedded XSL stylesheet, which is then transformed into HTML in the browser. This means that if you perform a GET request on stats.jsp, you will get the XML back:

    curl http://localhost:8080/solr/mbartists/admin/stats.jsp

If you open the downloaded file, you will see all the data as XML. The following code is an extract of the available statistics for the cache that stores individual documents and for the standard request handler, with the metrics you might wish to monitor:

    <entry>
      <name>documentCache</name>
      <class>org.apache.solr.search.LRUCache</class>
      <version>1.0</version>
      <description>LRU Cache(maxSize=512, initialSize=512)</description>
      <stats>
        <stat name="lookups">3251</stat>
        <stat name="hits">3101</stat>
        <stat name="hitratio">0.95</stat>
        <stat name="inserts">160</stat>
        <stat name="evictions">0</stat>
        <stat name="size">160</stat>
        <stat name="warmupTime">0</stat>
        <stat name="cumulative_lookups">3251</stat>
        <stat name="cumulative_hits">3101</stat>
        <stat name="cumulative_hitratio">0.95</stat>
        <stat name="cumulative_inserts">150</stat>
        <stat name="cumulative_evictions">0</stat>
      </stats>
    </entry>
    <entry>
      <name>standard</name>
      <class>org.apache.solr.handler.component.SearchHandler</class>
      <version>$Revision: 1052938 $</version>
      <description>Search using components:
        org.apache.solr.handler.component.QueryComponent,
        org.apache.solr.handler.component.FacetComponent</description>
      <stats>
        <stat name="handlerStart">1298759020886</stat>
        <stat name="requests">359</stat>
        <stat name="errors">0</stat>
        <stat name="timeouts">0</stat>
        <stat name="totalTime">9122</stat>
        <stat name="avgTimePerRequest">25.409472</stat>
        <stat name="avgRequestsPerSecond">0.446995</stat>
      </stats>
    </entry>

The method of integrating with a monitoring system varies from system to system. As an example, you may explore ./examples/8/check_solr.rb for a simple Ruby script that queries the core and checks whether the average hit ratio and the average time per request are above a defined threshold:

    ./check_solr.rb -w 13 -c 20 -imtracks
    CRITICAL - Average Time per request more than 20 milliseconds old: 39.5

In the previous example, we defined 20 milliseconds as the threshold, and the average time to serve a request is 39.5 milliseconds, which is far greater than the threshold we had set.
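The same kind of check can be scripted in any language. As a rough illustration only, and not part of the original article, here is a minimal Java sketch that fetches stats.jsp, pulls out avgTimePerRequest with a regular expression, and compares it against a threshold. The core name and threshold are assumptions taken from the example above.

    // Sketch: poll stats.jsp and warn when avgTimePerRequest exceeds a threshold.
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class CheckSolrStats {
        public static void main(String[] args) throws Exception {
            URL statsUrl = new URL("http://localhost:8080/solr/mbartists/admin/stats.jsp");
            StringBuilder xml = new StringBuilder();
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(statsUrl.openStream(), "UTF-8"))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    xml.append(line).append('\n');
                }
            }

            // Crude extraction; a real check would parse the XML properly.
            Pattern p = Pattern.compile("<stat name=\"avgTimePerRequest\">\\s*([0-9.]+)\\s*</stat>");
            Matcher m = p.matcher(xml);
            double threshold = 20.0; // milliseconds, as in the check_solr.rb example
            if (m.find()) {
                double avg = Double.parseDouble(m.group(1));
                System.out.println(avg > threshold
                    ? "CRITICAL - average time per request is " + avg + " ms"
                    : "OK - average time per request is " + avg + " ms");
            } else {
                System.out.println("UNKNOWN - avgTimePerRequest not found in stats output");
            }
        }
    }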
Ping status

The ping status is defined as the outcome of the PingRequestHandler, which is primarily used for reporting SolrCore health to a load balancer; that is, this handler has been designed to be used as the endpoint for an HTTP load balancer to check the "health" or "up status" of a Solr server. In simpler terms, the ping status denotes the availability of your Solr server (uptime and downtime) for the defined duration. Additionally, it should be configured with some defaults indicating a request that should be executed. If the request succeeds, the PingRequestHandler responds with a simple OK status. If the request fails, the PingRequestHandler responds with the corresponding HTTP error code. Clients (such as load balancers) can be configured to poll the PingRequestHandler, monitoring for these types of responses (or for a simple connection failure), to know if there is a problem with the Solr server. The PingRequestHandler can be configured to look something like the following:

    <requestHandler name="/admin/ping" class="solr.PingRequestHandler">
      <lst name="invariants">
        <str name="qt">/search</str><!-- handler to delegate to -->
        <str name="q">some test query</str>
      </lst>
    </requestHandler>

You may also try a more advanced option, which is to configure the handler with a healthcheckFile that can be used to enable/disable the PingRequestHandler. It would look something like the following:

    <requestHandler name="/admin/ping" class="solr.PingRequestHandler">
      <!-- relative paths are resolved against the data dir -->
      <str name="healthcheckFile">server-enabled.txt</str>
      <lst name="invariants">
        <str name="qt">/search</str><!-- handler to delegate to -->
        <str name="q">some test query</str>
      </lst>
    </requestHandler>

A couple of points you should know when selecting the healthcheckFile option:

- If the health check file exists, the handler will execute the query and return the status as described previously.
- If the health check file does not exist, the handler will throw an HTTP error even though the server is working fine and the query would have succeeded.

This health check file feature can be used as a way to indicate to some load balancers that the server should be "removed from rotation" for maintenance, upgrades, or whatever reason you may wish.

Business rules

You might come across situations wherein your customer, who runs an e-store consisting of different types of products such as jewelry, electronic gadgets, automotive products, and so on, defines a business need that must be flexible enough to cope with changes in the search results based on the search keyword. For instance, imagine a requirement wherein you need to add facets such as Brand, Model, Lens, Zoom, Flash, Dimension, Display, Battery, Price, and so on whenever a user searches for the keyword "Camera". So far the requirement is easy and can be achieved in a simple way. Now let us add some complexity to the requirement, wherein facets such as Year, Make, Model, VIN, Mileage, Price, and so on should get added automatically when the user searches for the keyword "Bike". Worried about how to handle such a complex requirement? This is where business rules come into play. There are any number of rule engines (both proprietary and open source) on the market, such as Drools, JRules, and so on, which can be plugged into your Solr installation.

Drools

Now let us understand how Drools functions. It injects the rules into working memory and then evaluates which custom rules should be triggered based on the conditions stated in the working memory. It is based on if-then clauses, which enable the rule coder to define what condition must be true (using the if or when clause) and what action/event should be triggered when the defined condition is met (using the then clause). Drools conditions are nothing but Java objects that the application wishes to inject as input. A business rule is more or less in the following format:

    rule "ruleName"
    when
        // CONDITION
    then
        // ACTION
    end

We will now show you how to write an example rule in Drools:

    rule "WelcomeLucidWorks"
        no-loop
    when
        $respBuilder : ResponseBuilder();
    then
        $respBuilder.rsp.add("welcome", "lucidworks");
    end

This rule checks for a ResponseBuilder object (one of the prime objects that help in processing search requests in a SearchComponent) in the working memory and then adds a key-value pair to that ResponseBuilder (in our case, welcome and lucidworks).

Summary

In this article, we saw how to nest a query within another query, learned about stats.jsp and the ping status, and looked at what business rules are, how and when they prove to be important, and how to write a custom rule using Drools.

Resources for Article:

Further resources on this subject:
- Getting Started with Apache Solr
- Making Big Data Work for Hadoop and Solr
- Apache Solr Configuration


Creating Network Graphs with Gephi

Packt
08 Oct 2013
10 min read
Basic network graph terminology

Network graphs are essentially based on the constructs of nodes and edges. Nodes represent points or entities within the data, while edges refer to the connections or lines between nodes. Individual nodes might be students in a school, schools within an educational system, or perhaps agencies within a government structure. Individual nodes may be represented with equal sizes, but they can also be depicted as smaller or larger based on the magnitude of a selected measure. For example, a node with many connections may be portrayed as far larger, and thus more influential, than a sparsely connected node. This approach provides viewers with a visual cue that shows them where the highest (and lowest) levels of activity occur within a graph. Nodes will generally be positioned based on the similarity of their individual connections, leading to clusters of individual nodes within a larger network. In most network algorithms, nodes with higher levels of connections will also tend to be positioned near the center of the graph, while those with few connections will move toward the perimeter of the display.

Edges are the connections between nodes, and they may be displayed as undirected or directed paths. Undirected relationships indicate a connection that flows in both directions, while a directed relationship moves in a single direction that always originates in one node and moves toward another node. Undirected connections tend to predominate in most cases, such as in social networks where participant activity flows in both directions. On occasion, we will see directed connections, as in the case of some transportation or technology networks where connections flow in a single direction. Edges may also be weighted, to show the strength of the connection between nodes. In the case where University A has performed research with both University B and University C, the strength (width) of the edge shows viewers where the stronger relationship exists. If A and B have combined for three projects, while A and C have collaborated on nine projects, we should weight the A to C connection three times that of the A to B connection.

Another commonly used term is neighbors, which is nothing more than a node that is directly connected to a second node. Neighbors can be said to be one degree apart. Degrees is the term used to refer to the number of connections flowing into (or away from) a node (also known as degree centrality), as well as to the number of connections required to reach another node via the shortest possible path. In complex graphs, you may find nodes that are four, five, or even more degrees away from a distant node, and in some cases two nodes may not be connected at all.

Now that you have a very basic understanding of network graph theory, let's learn about some of the common network graph algorithms.

Common network graph algorithms

Before we introduce you to some specific graph algorithms, we'll briefly discuss some of the theory behind network graphs and introduce you to a few of the terms you will frequently encounter. Network graphs are drawn by positioning nodes and their respective connections relative to one another. In the case of a graph with 8 or 10 nodes, this is a rather simple exercise, and it could probably be drawn rather accurately without the help of complex methodologies.
However, in the typical case where we have hundreds of nodes with thousands of edges, the task becomes far more complex. Some of the more prominent graph classes in Gephi include the following:

- Force-directed algorithms refer to a category of graphs that position elements based on the principles of attraction, repulsion, and gravity
- Circular algorithms position graph elements around the perimeter of one or more circles, and may allow the user to dictate the order of the elements based on data properties
- Dual circle layouts position a subset of nodes in the interior of the graph with the remaining nodes around the diameter, similar to a single circular graph
- Concentric layouts arrange the graph nodes using an approximately circular graph design, with less connected nodes at the perimeter of the graph and highly connected nodes typically near the center
- Radial axis layouts provide the user with the ability to determine some portion of the graph layout by defining graph attributes

The type of graph you select may well be dictated by the sort of results you seek. If you wish to feature certain groups within your dataset, one of the layouts that allows you to segment the graph by groups will provide a potentially quick solution to your needs. In this instance, one of the circular or radial axis graphs may prove ideal. On the other hand, if you are attempting to discover relationships in a complex new dataset, one of the several available force-directed layouts is likely a better choice. These algorithms rely on the values in your dataset to determine the positioning within the graph. When choosing one of these approaches, please note that there will often be an extensive runtime to calculate the graph layout, especially as the data becomes more complex. Even on a powerful computer, examples may run for minutes or hours in an attempt to fully optimize the graph. Fortunately, you have the ability in Gephi to stop these algorithms at any given point, and you will still have a very reasonable, albeit imperfect, graph. In the next section, we'll look at a few of the standard layouts that are part of the Gephi base package.

Standard network graph layouts

Now that you are somewhat familiar with the types of layout algorithms, we'll take a look at what Gephi offers within the Layout tab. We'll begin with some common force-directed approaches, and then examine some of the other choices. One of the best known force algorithms is Force Atlas, which in Gephi provides users with multiple options for drawing the graph. Foremost among these are the Repulsion, Attraction, and Gravity settings. Repulsion strength adjustments will make individual nodes either more or less sensitive to other nodes they differ from; a higher repulsion level, for example, will push these nodes further apart. Conversely, setting the Attraction strength higher will force related nodes closer together. Finally, the Gravity setting will draw nodes closer to the center of the graph if it is set to a high level, and disperse them toward the edges if a very low value is set. Force Atlas 2 is another layout option that employs slightly different parameters than the original Force Atlas method. You may wish to compare these methods and determine which one gives you better results. Fruchterman Reingold is one more force method, albeit one that provides you with just three parameters: Area, Gravity, and Speed.
While the general approach is not unlike the Force Atlas algorithms, your results will appear different in a visual sense. Finally, Gephi provides three Yifan Hu methods. Each of these models (Yifan Hu, Yifan Hu Proportional, and Yifan Hu Multilevel) is likely to run much more rapidly than the methods discussed earlier, while providing generally similar results.

Gephi also provides a variety of methods that do not employ the force approach. Some of these models, as we noted earlier in this article, provide you with more control over the final result. This may be the result of selecting how to order the nodes, or of choosing which attributes to use in grouping nodes together, either through color or location. In the section above, I referenced several layout options, but in the interest of space we'll take a closer look at two of them: the Circular and Radial Axis layouts.

Circular layouts are well suited to relatively small networks, given the limited flexibility of their fixed layout. We can adjust this to some degree by specifying the diameter of the graph, but anything more than a few dozen well-connected nodes often becomes difficult to manage. However, with smaller networks, these layouts can be intriguing, providing us with the ability to see patterns within and between specific groups more easily than we might see them in some other layouts. While this article will not cover any filtering options, those too can be used to help us better utilize the circular layouts, by providing us with the ability to highlight specific groups and their resulting connections. Think of the circle as resembling a giant spider web filled with connections, and the filters as tools that help us see specific threads within the web.

Our final notes are on Radial Axis layouts, which can provide us with fascinating looks at our data, especially if there are natural groups within the network. Think of a school with several classrooms full of students, for example. Each of these classrooms can be easily identified and grouped, perhaps by color. In a complex force-directed graph we may be able to spot each of these groups, but it may become difficult due to the interactions with other classes. In a Radial Axis layout we can dictate the group definitions, forcing each group to be bound together, apart from any other groups. There are pros and cons to this approach, of course, as there are with any of the other methods. If we wish to understand how a specific group interacts with another group, this method can prove beneficial, as it isolates these groups visually, making it easier to see connections between them. On the negative side, it is often quite difficult to see connections between members within the group, due to the nature of the layout. As with any layout, it is critical to look at the results and see how they apply to our original need. Always test your data using multiple layout algorithms, so that you wind up with the best possible approach.

Summary

Gephi is an ideal tool for users new to network graph analysis and visualization, as it provides a rich set of tools to create and customize network graphs. The user interface makes it easy to understand basic concepts such as nodes and edges, as well as descriptive terminology such as neighbors, degrees, repulsion, and attraction. New users can move as slowly or as rapidly as they wish, given Gephi's gentle learning curve.
Gephi can also help you see and understand patterns within your data through a variety of sophisticated graph methods that will appeal to novices as well as seasoned users. The variety of sophisticated layout algorithms provides you the opportunity to experiment with multiple layouts as you search for the best approach to display your data. In short, Gephi provides everything needed to produce first-rate network visualizations.

Resources for Article:

Further resources on this subject:
- OpenSceneGraph: Advanced Scene Graph Components
- Cacti: Using Graphs to Monitor Networks and Devices
- OpenSceneGraph: Managing Scene Graph


CQL for client applications

Packt
04 Oct 2013
7 min read
Using the Thrift API

The Thrift library is based on the Thrift RPC protocol. High-level clients built over it have been a standard way of building applications for a long time. In this section, we'll explain how to write a client application using CQL as the query language and Thrift as the Java API. When we start Cassandra, by default it listens to Thrift clients (the start_rpc: true property in the CASSANDRA_HOME/conf/cassandra.yaml file enables this). Let's build a small program that connects to Cassandra using the Thrift API and runs CQL 3 queries for reading/writing data in the UserProfiles table we created for the facebook application. The program can be built by performing the following steps:

Downloading the Thrift library: You need to add apache-cassandra-thrift-1.2.x.jar (found in the CASSANDRA_HOME/lib folder) to your classpath. If your Java project is mavenized, you need to insert the following entry in pom.xml under the dependency section (the version will vary depending on your Cassandra server installation):

    <dependency>
      <groupId>org.apache.cassandra</groupId>
      <artifactId>cassandra-thrift</artifactId>
      <version>1.2.5</version>
    </dependency>

Connecting to Cassandra: To connect to the Cassandra server on a given host and port, you need to open an org.apache.thrift.transport.TTransport to the Cassandra node and create an instance of org.apache.cassandra.thrift.Cassandra.Client as follows:

    TTransport transport = new TFramedTransport(new TSocket("localhost", 9160));
    TProtocol protocol = new TBinaryProtocol(transport);
    Cassandra.Client client = new Cassandra.Client(protocol);
    transport.open();
    client.set_cql_version("3.0.0");

The default CQL version for Thrift is 2.0.0. You must set it to 3.0.0 if you are writing CQL 3 queries and don't want to see any version-related errors. After you are done with the transport, close it gracefully (usually at the end of the read/write operations) as follows:

    transport.close();

Creating a schema: The executeQuery() utility method accepts a CQL 3 query as a String and runs it:

    CqlResult executeQuery(String query) throws Exception {
      return client.execute_cql3_query(ByteBuffer.wrap(query.getBytes("UTF-8")),
          Compression.NONE, ConsistencyLevel.ONE);
    }

Now, create the keyspace and the table by directly executing CQL 3 queries:

    // Create keyspace
    executeQuery("CREATE KEYSPACE facebook WITH replication = " +
        "{'class':'SimpleStrategy','replication_factor':3};");
    executeQuery("USE facebook;");
    // Create table
    executeQuery("CREATE TABLE UserProfiles(" +
        "email_id text," +
        "password text," +
        "name text," +
        "age int," +
        "profile_picture blob," +
        "PRIMARY KEY(email_id)" +
        ");");

Reading/writing data: A couple of records can be inserted as follows:

    executeQuery("USE facebook;");
    executeQuery("INSERT INTO UserProfiles(email_id, password, name, age, profile_picture) " +
        "VALUES('john.smith@example.com','p4ssw0rd','John Smith',32,0x8e37);");
    executeQuery("INSERT INTO UserProfiles(email_id, password, name, age, profile_picture) " +
        "VALUES('david.bergin@example.com','guess1t','David Bergin',42,0xc9f1);");

Executing a SELECT query returns a CqlResult, over which we can iterate to fetch records:

    CqlResult result = executeQuery("SELECT * FROM facebook.UserProfiles " +
        "WHERE email_id = 'john.smith@example.com';");
    for (CqlRow row : result.getRows()) {
      System.out.println(row.getKey());
    }
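The loop above only prints the row key. To get at the returned column values you also have to walk each row's columns. The following fragment is a sketch, not part of the original article; it reuses the executeQuery() helper defined above and assumes the Thrift-generated accessors getColumns(), getName(), and getValue(), which expose column names and values as raw byte arrays. How a value is decoded depends on its CQL type.

    // Sketch (assumed accessors): decoding the columns of each row returned over Thrift.
    // Requires java.nio.ByteBuffer and org.apache.cassandra.thrift.Column on the import list.
    CqlResult result = executeQuery("SELECT email_id, name, age FROM facebook.UserProfiles;");
    for (CqlRow row : result.getRows()) {
      for (Column column : row.getColumns()) {
        String name = new String(column.getName(), "UTF-8");
        byte[] value = column.getValue();
        if ("age".equals(name)) {
          // CQL int values are serialized as 4-byte big-endian integers.
          System.out.println(name + " = " + ByteBuffer.wrap(value).getInt());
        } else {
          // text columns can be decoded as UTF-8 strings.
          System.out.println(name + " = " + new String(value, "UTF-8"));
        }
      }
    }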
Using the Datastax Java driver

The Datastax Java driver is based on the Cassandra binary protocol that was introduced in Cassandra 1.2, and it works only with CQL 3. The Cassandra binary protocol is made specifically for Cassandra, in contrast to Thrift, which is a generic framework with many limitations. Now, we are going to write a Java program that uses the Datastax Java driver for reading/writing data in Cassandra, by performing the following steps:

Downloading the driver library: This driver library JAR file must be in your classpath in order to build an application using it. If you have a Maven-based Java project, you need to insert the following entry into the pom.xml file under the dependency section:

    <dependency>
      <groupId>com.datastax.cassandra</groupId>
      <artifactId>cassandra-driver-core</artifactId>
      <version>1.0.1</version>
    </dependency>

The driver project is hosted on GitHub (https://github.com/datastax/java-driver). It makes sense to check and download the latest version.

Configuring Cassandra to listen to native clients: In newer versions of Cassandra this is enabled by default and Cassandra will listen to clients using the binary protocol, but earlier Cassandra installations may require enabling it. All you have to do is check and enable the start_native_transport property in the CASSANDRA_HOME/conf/cassandra.yaml file by inserting/uncommenting the following line:

    start_native_transport: true

The port that Cassandra uses for listening to native clients is determined by the native_transport_port property. It is possible for Cassandra to listen to both Thrift and native clients simultaneously. If you want to disable Thrift, just set the start_rpc property to false in CASSANDRA_HOME/conf/cassandra.yaml.

Connecting to Cassandra: The com.datastax.driver.core.Cluster class is the entry point for clients to connect to the Cassandra cluster:

    Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();

After you are done using it (usually when the application shuts down), close it gracefully:

    cluster.shutdown();

Creating a session: An object of com.datastax.driver.core.Session allows you to execute CQL 3 statements. The following line creates a Session instance:

    Session session = cluster.connect();

Creating a schema: Before reading/writing data, let's create a keyspace and a table similar to UserProfiles in the facebook application we built earlier:

    // Create keyspace
    session.execute("CREATE KEYSPACE facebook WITH replication = " +
        "{'class':'SimpleStrategy','replication_factor':1};");
    session.execute("USE facebook");
    // Create table
    session.execute("CREATE TABLE UserProfiles(" +
        "email_id text," +
        "password text," +
        "name text," +
        "age int," +
        "profile_picture blob," +
        "PRIMARY KEY(email_id)" +
        ");");

Reading/writing data: We can insert a couple of records as follows:

    session.execute("USE facebook");
    session.execute("INSERT INTO UserProfiles(email_id, password, name, age, profile_picture) " +
        "VALUES('john.smith@example.com','p4ssw0rd','John Smith',32,0x8e37);");
    session.execute("INSERT INTO UserProfiles(email_id, password, name, age, profile_picture) " +
        "VALUES('david.bergin@example.com','guess1t','David Bergin',42,0xc9f1);");
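For statements that are executed repeatedly, such as the inserts above, the driver also supports prepared statements. The following fragment is a sketch rather than part of the original article; it reuses the Session created above, and the bound values are made-up placeholders.

    // Sketch: the same kind of insert written as a prepared statement. The driver parses
    // the statement once; each execution then only sends the bound values.
    // Requires com.datastax.driver.core.PreparedStatement on the import list.
    PreparedStatement insert = session.prepare(
        "INSERT INTO facebook.UserProfiles(email_id, password, name, age) " +
        "VALUES (?, ?, ?, ?);");
    session.execute(insert.bind("jane.doe@example.com", "s3cr3t", "Jane Doe", 29));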
Finding and printing records: A SELECT query returns an instance of com.datastax.driver.core.ResultSet. You can fetch individual rows by iterating over it, using the com.datastax.driver.core.Row object:

    ResultSet results = session.execute("SELECT * FROM facebook.UserProfiles " +
        "WHERE email_id = 'john.smith@example.com';");
    for (Row row : results) {
      System.out.println("Email: " + row.getString("email_id") +
          "\tName: " + row.getString("name") +
          "\tAge: " + row.getInt("age"));
    }

Deleting records: We can delete a record as follows:

    session.execute("DELETE FROM facebook.UserProfiles WHERE email_id='john.smith@example.com';");

Using high-level clients

In addition to the libraries based on the Thrift and binary protocols, some high-level clients have been built with the purpose of easing development and providing additional services, such as connection pooling, load balancing, failover, secondary indexing, and so on. Some of them are listed here:

- Astyanax (https://github.com/Netflix/astyanax): Astyanax is a high-level Java client for Cassandra. It allows you to run both simple and prepared CQL queries.
- Hector (https://github.com/hector-client/hector): Hector is a high-level client for Cassandra. At the time of writing this book, it supported CQL 2 only (not CQL 3).
- Kundera (https://github.com/impetus-opensource/Kundera): Kundera is a JPA 2.0-based object-datastore mapping library for Cassandra and many other NoSQL datastores. CQL 3 queries are run with Kundera using the native queries described in the JPA specification.

Summary

In this article, we learned how to use CQL from client applications through the three different approaches described above.

Resources for Article:

Further resources on this subject:
- Quick start – Creating your first Java application
- Apache Cassandra: Libraries and Applications
- Getting Started with Apache Cassandra


Simple graphs with d3.js

Packt
03 Oct 2013
8 min read
There are many component vendors selling graphing controls. Frequently, these graph libraries are complicated to work with and expensive. When using them, you also need to consider what would happen if the vendor went out of business or refused to fix an issue. For simple graphs, such as the one shown in the preceding screenshot, d3.js (http://d3js.org/) brings a number of functions and a coding style that makes creating graphs a snap.

Let's create a graph using d3. The first thing to do is introduce an SVG element to the page. In d3, we'll append an SVG element explicitly:

    var graph = d3.select("#visualization")
      .append("svg")
      .attr("width", 500)
      .attr("height", 500);

d3 relies heavily on the use of method chaining. If you're new to this concept, it is quick to pick up. Each call to a method performs some action and then returns an object, and the next method call operates on this object. So, in this case, the select method returns the div with the id of visualization. Calling append on the selected div adds an SVG element and then returns it. Finally, the attr methods set a property inside the object and then return the object. At first, method chaining may seem odd, but as we move on you'll see that it is a very powerful technique and cleans up the code considerably. Without method chaining, we end up with a lot of temporary variables.

Next, we need to find the maximum element in the data array. Previously we might have used a jQuery each loop to find that element. d3 has built-in array functions that make this much cleaner:

    var maximumValue = d3.max(data, function(item){ return item.value; });

There are similar functions for finding minimums and means. None of these functions provide anything you couldn't get by using a JavaScript utility library such as underscore.js (http://underscorejs.org/) or lodash (http://lodash.com/); however, it is convenient to make use of the built-in versions.

In the next piece, we make use of d3's scaling functions:

    var yScale = d3.scale.linear()
      .domain([maximumValue, 0])
      .range([0, 300]);

Scaling functions serve to map from one dataset to another. In our case, we're mapping from the values in our data array to the coordinates in our SVG. We use two different scales here: linear and ordinal. The linear scale is used to map a continuous domain to a continuous range. The mapping is done linearly, so if our domain contained values between 0 and 10, and our range had values between 0 and 100, a value of 6 would map to 60, 3 to 30, and so forth. It seems trivial, but with more complicated domains and ranges, scales are very helpful. Apart from linear scales, there are power and logarithmic scales that may fit your data better.

In our example data, the x values are not continuous; they're not even numeric. For this case we can make use of an ordinal scale:

    var xScale = d3.scale.ordinal()
      .domain(data.map(function(item){ return item.month; }))
      .rangeBands([0, 500], .1);

Ordinal scales map a discrete domain into a continuous range. Here, the domain is the list of months and the range is the width of our SVG. You'll note that instead of using range we use rangeBands. rangeBands splits the range up into chunks (bands), one of which is assigned to each domain item. So, if our domain was {May, June} and the range 0 to 100, May would receive a band from 0 to 49 and June a band from 50 to 100. You'll also note that rangeBands takes an additional parameter, in our case 0.1.
This is a padding value that generates a sort of no man's land between each band. This is ideal for creating a bar or column graph, as we may not want the columns touching each other. The padding parameter can take a value between 0 and 1 as a decimal representation of how much of the range should be reserved for padding; 0.25 would reserve 25% of the range for padding.

There is also a family of built-in scales that deal with providing colors. Selecting colors for your visualization can be challenging, as the colors have to be far enough apart to be discernible. If you're color-challenged like me, the scales category10, category20, category20b, and category20c may be for you. You can declare a color scale in the following manner:

    var colorScale = d3.scale.category10()
      .domain(data.map(function(item){ return item.month; }));

The preceding code will assign a different color to each month out of a set of 10 pre-calculated possible colors. Finally, we need to actually draw our graph:

    var graphData = graph.selectAll(".bar")
      .data(data);

We select all the .bar elements inside the graph using selectAll. Hang on! There aren't any elements inside the graph, let alone elements that match the .bar selector. Typically, selectAll will return a collection of elements matching the selector, just as the $ function does in jQuery. In this case, we're using selectAll as a shorthand method of creating an empty d3 collection that has a data method and can be chained.

We next specify a set of data to union with the data from the existing selection of elements. d3 operates on collections of objects without using looping techniques. This allows for a more declarative style of programming, but it can be difficult to grasp immediately. In effect, we're unioning two sets of data: the currently existing data (found using selectAll) and the new data (provided by the data function). This method of dealing with data allows for easy updates to the data elements, should further elements be added or removed later.

When new data elements are added, you can select just those elements by using enter(). This prevents repeating actions on existing data: you don't need to redraw the entire image, just the portions that have been updated with new data. Equally, if there are elements in the old selection that no longer appear in the new dataset, they can be selected with exit(). Typically, you just want to remove those elements, which can be done by running the following:

    graphData.exit().remove()

When we create elements using the newly generated dataset, the data elements will actually be attached to the newly created DOM elements. Creating the elements involves calling append:

    graphData.enter()
      .append("rect")
      .attr("x", function(item){ return xScale(item.month); })
      .attr("y", function(item){ return yScale(item.value); })
      .attr("height", function(item){ return 300 - yScale(item.value); })
      .attr("width", xScale.rangeBand())
      .attr("fill", function(item, index){ return colorScale(item.month); });

The accompanying diagram in the original article shows how data() works with new and existing datasets.

You can see in this chunk of code how useful method chaining has become. It makes the code much shorter and more readable than assigning a series of temporary variables or passing the results into standalone methods. The scales also come into their own here. The x coordinate is found simply by scaling the month using the ordinal scale. Because that scale takes into account the number of elements as well as the padding, nothing more complicated is needed.
The y coordinate is similarly found using the previously defined yScale. Because the origin in an SVG is in the top-left corner, we have to take the inverse of the scale to calculate the height. Again, this is a place where we wouldn't generally use a constant except for the brevity of our example. The width of the column is found by asking the xScale for the width of the bands. Finally, we set the fill color based on the color scale.

Transitions

Being able to animate your visualization is a powerful technique for adding that "wow" factor. People are far more likely to stop and pay attention to your visualization if it looks cool, so spending time making your visualization look nifty can actually have a payback other than getting mad cred from other developers. d3 makes transitions simple by doing most of the heavy lifting for you. Transitions work by creating gradual changes in the values of properties:

    graph.selectAll(".bar")
      .attr("height", 0)
      .transition()
      .attr("height", 50)

This code will gradually change the height of a .bar from 0 to 50 px. By default, the transition will take 250 ms, but this can be changed by chaining a call to duration, and delayed with a call to delay:

    graph.selectAll(".bar")
      .attr("height", 0)
      .transition()
      .duration(400)
      .delay(100)
      .attr("height", 50)

This transition will wait 100 ms and then grow the bar height over the course of 400 ms. Transitions can also be used to change non-numeric attributes such as color. I like to use a few transitions during loading to get attention, but they can also be useful when changing the state of the visualization. Even such things as adding new elements using .data can have a transition attached to them. If there is a problem with transitions, it is that they are too easy to use. Be careful that you don't overuse them and overwhelm the user. You should also pay attention to the duration of the transition: if it takes too long to execute, users will lose interest.

Summary

This article shed light on the features of d3.js that can be used to create simple graphs. It also covered how to add that "wow" factor to your visualizations using transitions.

Resources for Article:

Further resources on this subject:
- Introducing QlikView elements
- Data sources for the Charts
- HTML5 Presentations - creating our initial presentation


What is New in 12c

Packt
30 Sep 2013
23 min read
Oracle Database 12c has introduced many new features and enhancements for backup and recovery. This article will introduce you to some of them, and you will have the opportunity to learn in more detail how they can be used in real-life situations. But I cannot start talking about Oracle 12c without first talking about a revolutionary new concept introduced with this version of the database product, called the Multitenant Container Database (CDB), which can contain two or more pluggable databases (PDBs). When a container database contains only one PDB, it is called a Single Tenant Container Database. You can also have your database on Oracle 12c using the same format as before 12c; it is then called a non-CDB database and does not allow the use of PDBs.

Pluggable database

We are now able to have multiple databases sharing a single instance and set of Oracle binaries. Each of the databases is configurable to a degree and allows some parameters to be set specifically for itself (even though they all share the same initialization parameter file), and, what is better, each database is completely isolated from the others, without any of them knowing that the others exist.

A CDB is a single physical database that contains a root container with the main Oracle data dictionary and at least one PDB with specific application data. A PDB is a portable container with its own data dictionary, including metadata and internal links to the system-supplied objects in the root container, and this PDB will appear to an Oracle Net client as a traditional Oracle database. The CDB also contains a PDB called SEED, which is used as a template when an empty PDB needs to be created. The following figure shows an example of a CDB with five PDBs.

When creating a database on Oracle 12c, you can now create a CDB with one or more PDBs, and what is even better is that you can easily clone a PDB, or unplug it and plug it into a different server with a preinstalled CDB if your current server is running out of resources such as CPU or memory.

Many years ago, the introduction of external storage gave us the possibility to store data on external devices and the flexibility to plug and unplug them into any system independently of its OS. For example, you can connect an external device to a system running Windows XP and read your data without any problems; later you can unplug it and connect it to a laptop running Windows 7 and still be able to read your data. Now, with the introduction of Oracle pluggable databases, we are able to do something similar with Oracle when upgrading a PDB, making the process simple and easy. All you need to do to upgrade a PDB, for example, is:

1. Unplug your PDB (step 1 in the following figure) from a CDB running 12.1.0.1.
2. Copy the PDB to the destination location with a CDB that is using a later version such as 12.2.0.1 (step 2 in the following figure).
3. Plug the PDB into the CDB (step 3 in the following figure), and your PDB is now upgraded to 12.2.0.1.

This new concept is a great solution for database consolidation and is very useful for multitenant SaaS (Software as a Service) providers, improving resource utilization, manageability, integration, and service management.
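Because a PDB appears to an Oracle Net client as a traditional database, an application simply connects to the PDB's default service name. The following JDBC sketch is not from the article; the host, port, service name, and credentials are placeholders, and it assumes the Oracle JDBC driver (for example, ojdbc7.jar) is on the classpath.

    // Minimal sketch: connecting to a PDB through its default service name,
    // exactly as you would connect to a non-CDB database. All connection
    // details below are placeholders.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ConnectToPdb {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:oracle:thin:@//dbhost.example.com:1521/pdb1"; // service name = PDB name
            try (Connection conn = DriverManager.getConnection(url, "app_user", "app_password");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                     "SELECT sys_context('USERENV','CON_NAME') FROM dual")) {
                while (rs.next()) {
                    // Prints the name of the container (PDB) this session is connected to.
                    System.out.println("Connected to container: " + rs.getString(1));
                }
            }
        }
    }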
Some key points about pluggable databases are: You can have many PDBs if you want inside a single container (a CDB can contain a maximum of 253 PDBs) A PDB is fully backwards compatible with an ordinary pre-12.1 database in an applications perspective, meaning that an application built for example to run on Oracle 11.1 will have no need to be changed to run on Oracle 12c A system administrator can connect to a CDB as a whole and see a single system image If you are not ready to make use of this new concept, you can still be able to create a database on Oracle 12c as before, called non-CDB (non-Container Database) Each instance in RAC opens the CDB as a whole. A foreground session will see only the single PDB it is connected to and sees it just as a non-CDB The Resource Manager is extended with some new between-PDB capabilities Fully integrated with Oracle Enterprise Manager 12c and SQL Developer Fast provisioning of new databases (empty or as a copy/clone of an existing PDB) On Clone triggers can be used to scrub or mask data during a clone process Fast unplug and plug between CDBs Fast path or upgrade by unplugging a PDB and plugging it into a different CDB already patched or with a later database version Separation of duties between DBA and application administrators Communication between PDBs is allowed via intra-CDB dblinks Every PDB has a default service with its name in one Listener An unplugged PDB carries its lineage, Opatch, encryption key info, and much more All PDBs in a CDB should use the same character set All PDBs share the same control files, SPFILE, redo log files, flashback log files, and undo Flashback PDB is not available on 12.1, it expected to be available with 12.2 Allows multitenancy of Oracle Databases, very useful for centralization, especially if using Exadata Multitenant Container Database is only available for Oracle Enterprise Edition as a payable option, all other editions of the Oracle database can only deploy non-CDB or Single Tenant Pluggable databases. RMAN new features and enhancements Now we can continue and take a fast and closer look at some of the new features and enhancements introduced in this database version for RMAN. Container and pluggable database backup and restore As we saw earlier, the introduction of Oracle 12c and the new pluggable database concept made it possible to easily centralize multiple databases maintaining the individuality of each one when using a single instance. The introduction of this new concept also forced Oracle to introduce some new enhancements to the already existent BACKUP, RESTORE, and RECOVERY commands to enable us to be able to make an efficient backup or restore of the complete CDB. This includes all PDBs or just one of more PDBs, or if you want to be more specific, you can also just backup or restore one or more tablespaces from a PDB. 
Some examples of how to use the RMAN commands when performing a backup on Oracle 12c are:

RMAN> BACKUP DATABASE; (To back up the CDB + all PDBs)
RMAN> BACKUP DATABASE root; (To back up only the root container)
RMAN> BACKUP PLUGGABLE DATABASE pdb1,pdb2; (To back up all specified PDBs)
RMAN> BACKUP TABLESPACE pdb1:example; (To back up a specific tablespace in a PDB)

Some examples when performing RESTORE operations are:

RMAN> RESTORE DATABASE; (To restore an entire CDB, including all PDBs)
RMAN> RESTORE DATABASE root; (To restore only the root container)
RMAN> RESTORE PLUGGABLE DATABASE pdb1; (To restore a specific PDB)
RMAN> RESTORE TABLESPACE pdb1:example; (To restore a tablespace in a PDB)

Finally, some examples of RECOVERY operations are:

RMAN> RECOVER DATABASE; (To recover the root plus all PDBs)

RMAN> RUN {
  SET UNTIL SCN 1428;
  RESTORE DATABASE;
  RECOVER DATABASE;
  ALTER DATABASE OPEN RESETLOGS;
}

RMAN> RUN {
  RESTORE PLUGGABLE DATABASE pdb1 TO RESTORE POINT one;
  RECOVER PLUGGABLE DATABASE pdb1 TO RESTORE POINT one;
  ALTER PLUGGABLE DATABASE pdb1 OPEN RESETLOGS;
}

Enterprise Manager Database Express

The Oracle Enterprise Manager Database Console (or Database Control) that many of us used to manage an entire database is now deprecated and has been replaced by the new Oracle Enterprise Manager Database Express. This new tool uses Flash technology and allows the DBA to easily manage the configuration, storage, security, and performance of a database. Note that RMAN, Data Pump, and Oracle Enterprise Manager Cloud Control are now the only tools able to perform backup and recovery operations in a pluggable database environment; in other words, you cannot use Enterprise Manager Database Express for database backup/recovery operations.

Backup privileges

Oracle Database 12c supports the separation of DBA duties by introducing task-specific and least-privileged administrative privileges for backups that do not require the SYSDBA privilege. The new system privilege introduced with this release is SYSBACKUP. Avoid the use of the SYSDBA privilege for backups unless it is strictly necessary. When connecting to the database using the AS SYSDBA system privilege, you are able to see any object structure and all the data within the object, whereas if you connect using the new system privilege AS SYSBACKUP, you will still be able to see the structure of an object but not the object data. If you try to see any data using the SYSBACKUP privilege, the ORA-01031: insufficient privileges message will be raised. Tighter security policies require a separation of duties. The new SYSBACKUP privilege facilitates the implementation of this separation, allowing backup and recovery operations to be performed without implicit access to the data; if access to the data is required for one specific user, it will need to be granted explicitly to that user.
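The data access restriction is easy to demonstrate. The following is an illustrative SQL*Plus session (the user and table names are made up for the example): a user connected AS SYSBACKUP can describe an object but gets the error mentioned above when querying its rows:

SQL> CONNECT bkpadmin AS SYSBACKUP
Connected.
SQL> DESC scott.emp
-- the table structure is listed as usual
SQL> SELECT * FROM scott.emp;
ERROR at line 1:
ORA-01031: insufficient privileges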
RMAN has introduced some changes when connecting to a database, such as:

TARGET: It will require the user to have the SYSBACKUP administrative privilege to be able to connect to the TARGET database
CATALOG: As in earlier versions, a user needs the RECOVERY_CATALOG_OWNER role to connect to the RMAN catalog; it will now also need the SYSBACKUP privilege to connect to the catalog
AUXILIARY: It will require the SYSBACKUP administrative privilege to connect to the AUXILIARY database

Some important points about the SYSBACKUP administrative privilege are:

It includes permissions for backup and recovery operations
It does not include data access privileges such as SELECT ANY TABLE that the SYSDBA privilege has
It can be granted to the SYSBACKUP user that is created during the database installation process
It is the default privilege when an RMAN connection string is issued and does not contain the AS SYSBACKUP clause: $ RMAN TARGET /

Before connecting as the SYSBACKUP user created during the database creation process, you will need to unlock the account and grant the SYSBACKUP privilege to the user. When you use the GRANT command to give the SYSBACKUP privilege to a user, the username and privilege information will be automatically added to the database password file. The v$pwfile_users view contains all the information regarding users within the database password file and indicates whether a user has been granted any privileged system privilege. Let's take a closer look at this view:

SQL> DESC v$pwfile_users
Name                            Null?    Type
------------------------------- -------- -----------------
USERNAME                                 VARCHAR2(30)
SYSDBA                                   VARCHAR2(5)
SYSOPER                                  VARCHAR2(5)
SYSASM                                   VARCHAR2(5)
SYSBACKUP                                VARCHAR2(5)
SYSDG                                    VARCHAR2(5)
SYSKM                                    VARCHAR2(5)
CON_ID                                   NUMBER

As you can see, this view now contains some new columns, such as:

SYSBACKUP: It indicates whether the user is able to connect using the SYSBACKUP privilege
SYSDG: It indicates whether the user is able to connect using the SYSDG (new for Data Guard) privilege
SYSKM: It indicates whether the user is able to connect using the SYSKM (new for Advanced Security) privilege
CON_ID: It is the ID of the current container. If 0, the entry relates to the entire CDB or to an entire traditional (non-CDB) database; if the value is 1, the user has access only to root; any other value identifies a specific container ID

To help you clearly understand the use of the SYSBACKUP privilege, let's run a few examples. Let's connect to our newly created database as SYSDBA and take a closer look at the SYSBACKUP account:

$ sqlplus / as sysdba
SQL> SET PAGES 999
SQL> SET LINES 99
SQL> COL USERNAME FORMAT A21
SQL> COL ACCOUNT_STATUS FORMAT A20
SQL> COL LAST_LOGIN FORMAT A41
SQL> SELECT username, account_status, last_login
  2  FROM dba_users
  3  WHERE username = 'SYSBACKUP';

USERNAME              ACCOUNT_STATUS       LAST_LOGIN
--------------------- -------------------- -----------------------
SYSBACKUP             EXPIRED & LOCKED

As you can see, the SYSBACKUP account created during the database creation is currently EXPIRED & LOCKED; you will need to unlock this account and grant the SYSBACKUP privilege to it if you want to use this user for any backup and recovery purposes.

For this demo I will use the original SYSBACKUP account, but in a production environment never use the SYSBACKUP account; instead, grant the SYSBACKUP privilege to the user(s) that will be responsible for the backup and recovery operations. 
SQL> ALTER USER sysbackup IDENTIFIED BY "demo" ACCOUNT UNLOCK; User altered. SQL> GRANT sysbackup TO sysbackup; Grant succeeded. SQL> SQL> SELECT username, account_status 2 FROM dba_users 3 WHERE account_status NOT LIKE '%LOCKED'; USERNAME ACCOUNT_STATUS --------------------- -------------------- SYS OPEN SYSTEM OPEN SYSBACKUP OPEN We can also easily identify what system privileges and roles are assigned to SYSBACKUP by executing the following SQLs: SQL> COL grantee FORMAT A20 SQL> SELECT * 2 FROM dba_sys_privs 3 WHERE grantee = 'SYSBACKUP'; GRANTEE PRIVILEGE ADM COM ------------- ----------------------------------- --- --- SYSBACKUP ALTER SYSTEM NO YES SYSBACKUP AUDIT ANY NO YES SYSBACKUP SELECT ANY TRANSACTION NO YES SYSBACKUP SELECT ANY DICTIONARY NO YES SYSBACKUP RESUMABLE NO YES SYSBACKUP CREATE ANY DIRECTORY NO YES SYSBACKUP UNLIMITED TABLESPACE NO YES SYSBACKUP ALTER TABLESPACE NO YES SYSBACKUP ALTER SESSION NO YES SYSBACKUP ALTER DATABASE NO YES SYSBACKUP CREATE ANY TABLE NO YES SYSBACKUP DROP TABLESPACE NO YES SYSBACKUP CREATE ANY CLUSTER NO YES 13 rows selected. SQL> COL granted_role FORMAT A30 SQL> SELECT * 2 FROM dba_role_privs 3 WHERE grantee = 'SYSBACKUP'; GRANTEE GRANTED_ROLE ADM DEF COM -------------- ------------------------------ --- --- --- SYSBACKUP SELECT_CATALOG_ROLE NO YES YES Where the column ADMIN_OPTION refers to if the user has or not, the ADMIN_OPTION privilege, the column DEFAULT_ROLE indicates whether or not ROLE is designated as a default role for the user, and the column COMMON refers to if it's common to all the containers and pluggable databases available. SQL and DESCRIBE As you know well, you are able to execute the SQL commands, and the PL/SQL procedures from the RMAN command line starting with Oracle 12.1, do not require the use of the SQL prefix or quotes for most SQL commands in RMAN. You can now run some simple SQL commands in RMAN such as: RMAN> SELECT TO_CHAR(sysdate,'dd/mm/yy - hh24:mi:ss') 2> FROM dual; TO_CHAR(SYSDATE,'DD) ------------------- 17/09/12 - 02:58:40 RMAN> DESC v$datafile Name Null? Type --------------------------- -------- ------------------- FILE# NUMBER CREATION_CHANGE# NUMBER CREATION_TIME DATE TS# NUMBER RFILE# NUMBER STATUS VARCHAR2(7) ENABLED VARCHAR2(10) CHECKPOINT_CHANGE# NUMBER CHECKPOINT_TIME DATE UNRECOVERABLE_CHANGE# NUMBER UNRECOVERABLE_TIME DATE LAST_CHANGE# NUMBER LAST_TIME DATE OFFLINE_CHANGE# NUMBER ONLINE_CHANGE# NUMBER ONLINE_TIME DATE BYTES NUMBER BLOCKS NUMBER CREATE_BYTES NUMBER BLOCK_SIZE NUMBER NAME VARCHAR2(513) PLUGGED_IN NUMBER BLOCK1_OFFSET NUMBER AUX_NAME VARCHAR2(513) FIRST_NONLOGGED_SCN NUMBER FIRST_NONLOGGED_TIME DATE FOREIGN_DBID NUMBER FOREIGN_CREATION_CHANGE# NUMBER FOREIGN_CREATION_TIME DATE PLUGGED_READONLY VARCHAR2(3) PLUGIN_CHANGE# NUMBER PLUGIN_RESETLOGS_CHANGE# NUMBER PLUGIN_RESETLOGS_TIME DATE CON_ID NUMBER RMAN> ALTER TABLESPACE users 2> ADD DATAFILE '/u01/app/oracle/oradata/cdb1/pdb1/user02.dbf' size 50M; Statement processed Remember that the SYSBACKUP privilege does not grant access to the user tables or views, but the SYSDBA privilege does. Multi-section backups for incremental backups Oracle Database 11g introduced multi-section backups to allow us to backup and restore very large files using backup sets (remember that Oracle datafiles can be up to 128 TB in size). Now with Oracle Database 12c , we are able to make use of image copies when creating multi-section backups as a complement of the previous backup set functionality. 
This helps us to reduce image copy creation time for backups, transporting tablespaces, cloning, and doing a TSPITR (tablespace point-in-time recovery), it also improves backups when using Exadata. The main restrictions to make use of this enhancement are: The COMPATIBLE initialization parameter needs to be set to 12.0 or higher to make use of the new image copy multi-section backup feature This is only available for datafiles and cannot be used to backup control or password files Not to be used with a large number of parallelisms when a file resides on a small number of disks, to avoid each process to compete with each other when accessing the same device Another new feature introduced with multi-section backups is the ability to create multi-section backups for incremental backups. This will allow RMAN to only backup the data that has changed since the last backup, consequently enhancing the performance of multi-section backups due that they are processed independently, either serially or in parallel. Network-based recovery Restoring and recovering files over the network is supported starting with Oracle Database 12c . We can now recover a standby database and synchronize it with its primary database via the network without the need to ship the archive log files. When the RECOVER command is executed, an incremental backup is created on the primary database. It is then transferred over the network to the physical standby database and applied to the standby database to synchronize it within the primary database. RMAN uses the SCN from the standby datafile header and creates the incremental backup starting from this SCN on the primary database, in other words, only bringing the information necessary to the synchronization process. If block change tracking is enabled for the primary database, it will be used while creating the incremental backup making it faster. A network-based recovery can also be used to replace any missing datafiles, control files, SPFILE, or tablespaces on the primary database using the corresponding entity from the physical standby to the recovery operation. You can also use multi-section backup sets, encryption, or even compression within a network-based recovery. Active Duplicate The Active Duplicate feature generates an online backup on the TARGET database and directly transmits it via an inter-instance network connection to the AUXILIARY database for duplication (not written to disk in the source server). Consequently, this reduces the impact on the TARGET database by offloading the data transfer operation to the AUXILIARY database, also reducing the duplication time. This very useful feature has now received some important enhancements. In Oracle 11 g when this feature was initially introduced, it only allowed us to use a push process based on the image copies. Now it allows us to make use of the already known push process or to make use of the newly introduced pull process from the AUXILIARY database that is based on backup sets (the pull process is now the new default and automatically copies across all datafiles, control files, SPFILE and archive log files). Then it performs the restore of all files and uses a memory script to complete the recovery operation and open the AUXILIARY database. RMAN will dynamically determine, based on your DUPLICATE clauses, which process will be used (push or pull). It is very possible that soon Oracle will end deprecating the push process on the future releases of the database. 
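To give a feel for the backup set-based pull method, here is a hedged sketch of an active duplication command; the database name dupdb is a placeholder, and the exact combination of clauses available should be checked against the RMAN reference for your patch level:

RMAN> DUPLICATE TARGET DATABASE TO dupdb
2> FROM ACTIVE DATABASE
3> USING COMPRESSED BACKUPSET
4> SECTION SIZE 500M;

As noted above, RMAN decides between push and pull from the clauses supplied; asking for compressed backup sets (and a section size) steers it toward the pull method driven from the AUXILIARY side.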
You can now choose your choice of compression, section size, and encryption to be used during the Active Duplication process. For example, if you specify the SET ENCRYPTION option before the DUPLICATE command, all the backups sent from the target to the auxiliary database will be encrypted. For an effective use of parallelism, allocate more AUXILIARY channels instead of TARGET channels as in the earlier releases. Finally, another important new enhancement is the possibility to finish the duplication process with the AUXILIARY database in not open state (the default is to open the AUXILIARY database after the duplication is completed). This option is very useful when you are required to: Modify the block change tracking Configure fast incremental backups or flashback database settings Move the location of the database, for example, to ASM Upgrade the AUXILIARY database (due that the database must not be open with reset logs prior to applying the upgrade scripts) Or when you know that the attempt to open the database would produce errors To make it clearer, let's take a closer look at what operations RMAN will perform when a DUPLICATE command is used: Create an SPFILE string for the AUXILIARY instance. Mount the backup control file. Restore the TARGET datafiles on the AUXILIARY database. Perform incomplete recovery using all the available incremental backups and archived redo log files. Shut down and restart the AUXILIARY instance in the NOMOUNT mode. Create a new control file, create and store the new database ID in the datafiles (it will not happen if the FOR STANDBY clause is in use). Mount and opens the duplicate database using the RESETLOGS option, and create the online redo log files by default. If the NOOPEN option is used, the duplicated database will not be opened with RESETLOGS and will remain in the MOUNT state. Here are some examples of how to use the DUPLICATE command with PDBs: RMAN> DUPLICATE TARGET DATABASE TO <CDB1>; RMAN> DUPLICATE TARGET DATABASE TO <CDB1> PLUGGABLE DATABASE <PDB1>, <PDB2>, <PDB3>; Support for the third-party snapshot In the past when using a third-party snapshot technology to make a backup or clone of a database, you were forced to change the database to the backup mode (BEGIN BACKUP) before executing the storage snapshot. This requirement is no longer necessary if the following conditions are met: The database crash is consistent at the point of the snapshot Write ordering is preserved for each file within the snapshot The snapshot stores the time at which the snapshot is completed If a storage vendor cannot guarantee compliance with the conditions discussed, then you must place your database in backup mode before starting with the snapshot. The RECOVER command now has a newly introduced option called SNAPSHOT TIME that allows RMAN to recover a snapshot that was taken without being in backup mode to a consistent point-in-time. Some examples of how to use this new option are: RMAN> RECOVER DATABASE UNTIL TIME '10/12/2012 10:30:00' SNAPSHOT TIME '10/12/2012 10:00:00'; RMAN> RECOVER DATABASE UNTIL CANCEL SNAPSHOT TIME '10/12/2012 10:00:00'; Only trust your backups after you ensure that they are usable for recovery. In other words, always test your backup methodology first, ensuring that it can be used in the future in case of a disaster. 
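One low-risk way of following that advice is RMAN's VALIDATE option, which reads the backups that a restore would use without actually writing any files. A short sketch, to be run on a regular basis rather than only after the first backup:

RMAN> RESTORE DATABASE VALIDATE;
RMAN> RESTORE ARCHIVELOG ALL VALIDATE;

This does not replace a full restore test on separate hardware, but it catches missing or corrupt backup pieces early.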
Cross-platform data transport Starting with Oracle 12c, transporting data across platforms can be done making use of backup sets and also create cross-platform inconsistent tablespace backups (when the tablespace is not in the read-only mode) using image copies and backup sets. When using backup sets, you are able to make use of the compression and multi-section options, reducing downtime for the tablespace and the database platform migrations. RMAN does not catalog backup sets created for cross-platform transport in the control file, and always takes into consideration the endian format of the platforms and the database open mode. Before creating a backup set that will be used for a cross-platform data transport, the following prerequisites should be met: The compatible parameter in the SPFILE string should be 12.0 or greater The source database must be open in read-only mode when transporting an entire database due that the SYS and SYSAUX tablespaces will participate in the transport process If using Data Pump, the database must be open in read-write mode You can easily check the current compatible value and open_mode of your database by running the following SQL commands: SQL> SHOW PARAMETER compatible NAME TYPE VALUE ---------------------- ----------- ---------------------- compatible string 12.0.0.0.0 SQL> SELECT open_mode FROM v$database; OPEN_MODE -------------------- READ WRITE When making use of the FOR TRANSPORT or the TO PLATFORM clauses in the BACKUP command, you cannot make use of the following clauses: CUMULATIVE forRecoveryOfSpec INCREMENTAL LEVEL n keepOption notBackedUpSpec PROXY SECTION SIZE TAG VALIDATE Table recovery In previous versions of Oracle Database, the process to recover a table to a specific point-in-time was never easy. Oracle has now solved this major issue by introducing the possibility to do a point-in-time recovery of a table, group of tables or even table partitions without affecting the remaining database objects using RMAN. This makes the process easier and faster than ever before. Remember that Oracle has previously introduced features such as database point-in-time recovery ( DBPITR ), tablespace point-in-time recovery ( TSPITR ) and Flashback database; this is an evolution of the same technology and principles. The recovery of tables and table partitions is useful in the following situations: To recover a very small set of tables to a particular point-in-time To recover a tablespace that is not self-contained to a particular point-in-time, remember that TSPITR can only be used if the tablespace is self-contained To recover tables that are corrupted or dropped with the PURGE option, so the FLASHBACK DROP functionality is not possible to be used When logging for a Flashback table is enabled but the flashback target time or SCN is beyond the available undo To recover data that was lost after a data definition language ( DDL ) operation that changed the structure of a table To recover tables and table partitions from a RMAN backup, the TARGET database should be (prerequisites): At the READ/WRITE mode In the ARCHIVELOG mode The COMPATIBLE parameter should be set to 12.0 or higher You cannot recover tables or table partitions from the SYS, SYSTEM and SYSAUX schemas, or even from a standby database. Now let's take a closer look at the steps to do a table or table partitions recovery using RMAN: First check if all the prerequisites to do a table recovery are met. Start a RMAN session with the CONNECT TARGET command. 
Use the RECOVER TABLE command with all the required clauses. RMAN will determine which backup contains the data that needs to be recovered, based on the point-in-time specified. RMAN creates an AUXILIARY instance; you can also specify the location of the AUXILIARY instance files using the AUXILIARY DESTINATION or SET NEWNAME clause. RMAN recovers the specified objects into the AUXILIARY instance. RMAN creates a Data Pump export dump file that contains the objects. RMAN imports the recovered objects from the previously created dump file into the TARGET database. If you want to import the objects into the TARGET database manually, you can make use of the NOTABLEIMPORT clause of the RECOVER command to achieve this goal. RMAN optionally offers the possibility to rename the recovered objects in the TARGET database using the REMAP TABLE clause, or to import the recovered objects into a different tablespace using the REMAP TABLESPACE clause. An example of how to use the new RECOVER TABLE command is:

RMAN> RECOVER TABLE SCOTT.test UNTIL SEQUENCE 5481 THREAD 2
2> AUXILIARY DESTINATION '/tmp/recover'
3> REMAP TABLE SCOTT.test:my_test;
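As a hedged variant of the same command, the next sketch combines a time-based target with the REMAP TABLESPACE clause mentioned above; the timestamp and the tablespace names are illustrative assumptions:

RMAN> RECOVER TABLE SCOTT.test
2> UNTIL TIME "TO_DATE('17-09-2013 10:00:00','DD-MM-YYYY HH24:MI:SS')"
3> AUXILIARY DESTINATION '/tmp/recover'
4> REMAP TABLESPACE users:users_rec;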

Hadoop and HDInsight in a Heartbeat

Packt
30 Sep 2013
6 min read
(For more resources related to this topic, see here.) Apache Hadoop is the leading Big Data platform that allows to process large datasets efficiently and at low cost. Other Big Data 0platforms are MongoDB, Cassandra, and CouchDB. This section describes Apache Hadoop core concepts and its ecosystem. Core components The following image shows core Hadoop components: At the core, Hadoop has two key components: Hadoop Distributed File System (HDFS) Hadoop MapReduce (distributed computing for batch jobs) For example, say we need to store a large file of 1 TB in size and we only have some commodity servers each with limited storage. Hadoop Distributed File System can help here. We first install Hadoop, then we import the file, which gets split into several blocks that get distributed across all the nodes. Each block is replicated to ensure that there is redundancy. Now we are able to store and retrieve the 1 TB file. Now that we are able to save the large file, the next obvious need would be to process this large file and get something useful out of it, like a summary report. To process such a large file would be difficult and/or slow if handled sequentially. Hadoop MapReduce was designed to address this exact problem statement and process data in parallel fashion across several machines in a fault-tolerant mode. MapReduce programing models use simple key-value pairs for computation. One distinct feature of Hadoop in comparison to other cluster or grid solutions is that Hadoop relies on the "share nothing" architecture. This means when the MapReduce program runs, it will use the data local to the node, thereby reducing network I/O and improving performance. Another way to look at this is when running MapReduce, we bring the code to the location where the data resides. So the code moves and not the data. HDFS and MapReduce together make a powerful combination, and is the reason why there is so much interest and momentum with the Hadoop project. Hadoop cluster layout Each Hadoop cluster has three special master nodes (also known as servers): NameNode: This is the master for the distributed filesystem and maintains a metadata. This metadata has the listing of all the files and the location of each block of a file, which are stored across the various slaves (worker bees). Without a NameNode HDFS is not accessible. Secondary NameNode: This is an assistant to the NameNode. It communicates only with the NameNode to take snapshots of the HDFS metadata at intervals configured at cluster level. JobTracker: This is the master node for Hadoop MapReduce. It determines the execution plan of the MapReduce program, assigns it to various nodes, monitors all tasks, and ensures that the job is completed by automatically relaunching any task that fails. All other nodes of the Hadoop cluster are slaves and perform the following two functions: DataNode: Each node will host several chunks of files known as blocks. It communicates with the NameNode. TaskTracker: Each node will also serve as a slave to the JobTracker by performing a portion of the map or reduce task, as decided by the JobTracker. The following image shows a typical Apache Hadoop cluster: The Hadoop ecosystem As Hadoop's popularity has increased, several related projects have been created that simplify accessibility and manageability to Hadoop. I have organized them as per the stack, from top to bottom. 
The following image shows the Hadoop ecosystem: Data access The following software are typically used access mechanisms for Hadoop: Hive: It is a data warehouse infrastructure that provides SQL-like access on HDFS. This is suitable for the ad hoc queries that abstract MapReduce. Pig: It is a scripting language such as Python that abstracts MapReduce and is useful for data scientists. Mahout: It is used to build machine learning and recommendation engines. MS Excel 2013: With HDInsight, you can connect Excel to HDFS via Hive queries to analyze your data. Data processing The following are the key programming tools available for processing data in Hadoop: MapReduce: This is the Hadoop core component that allows distributed computation across all the TaskTrackers Oozie: It enables creation of workflow jobs to orchestrate Hive, Pig, and MapReduce tasks The Hadoop data store The following are the common data stores in Hadoop: HBase: It is the distributed and scalable NOSQL (Not only SQL) database that provides a low-latency option that can handle unstructured data HDFS: It is a Hadoop core component, which is the foundational distributed filesystem Management and integration The following are the management and integration software: Zookeeper: It is a high-performance coordination service for distributed applications to ensure high availability Hcatalog: It provides abstraction and interoperability across various data processing software such as Pig, MapReduce, and Hive Flume: Flume is distributed and reliable software for collecting data from various sources for Hadoop Sqoop: It is designed for transferring data between HDFS and any RDBMS Hadoop distributions Apache Hadoop is an open-source software and is repackaged and distributed by vendors offering enterprise support. The following is the listing of popular distributions: Amazon Elastic MapReduce (cloud, http://aws.amazon.com/elasticmapreduce/) Cloudera (http://www.cloudera.com/content/cloudera/en/home.html) EMC PivitolHD (http://gopivotal.com/) Hortonworks HDP (http://hortonworks.com/) MapR (http://mapr.com/) Microsoft HDInsight (cloud, http://www.windowsazure.com/) HDInsight distribution differentiator HDInsight is an enterprise-ready distribution of Hadoop that runs on Windows servers and on Azure HDInsight cloud service. It is 100 percent compatible with Apache Hadoop. HDInsight was developed in partnership with Hortonworks and Microsoft. Enterprises can now harness the power of Hadoop on Windows servers and Windows Azure cloud service. The following are the key differentiators for HDInsight distribution: Enterprise-ready Hadoop: HDInsight is backed by Microsoft support, and runs on standard Windows servers. IT teams can leverage Hadoop with Platform as a Service ( PaaS ) reducing the operations overhead. Analytics using Excel: With Excel integration, your business users can leverage data in Hadoop and analyze using PowerPivot. Integration with Active Directory: HDInsight makes Hadoop reliable and secure with its integration with Windows Active directory services. Integration with .NET and JavaScript: .NET developers can leverage the integration, and write map and reduce code using their familiar tools. Connectors to RDBMS: HDInsight has ODBC drivers to integrate with SQL Server and other relational databases. Scale using cloud offering: Azure HDInsight service enables customers to scale quickly as per the project needs and have seamless interface between HDFS and Azure storage vault. 
JavaScript console: It consists of easy-to-use JavaScript console for configuring, running, and post processing of Hadoop MapReduce jobs. Summary In this article, we reviewed the Apache Hadoop components and the ecosystem of projects that provide a cost-effective way to deal with Big Data problems. We then looked at how Microsoft HDInsight makes the Apache Hadoop solution better by simplified management, integration, development, and reporting. Resources for Article : Further resources on this subject: Making Big Data Work for Hadoop and Solr [Article] Understanding MapReduce [Article] Advanced Hadoop MapReduce Administration [Article]

Testing a Camel application

Packt
30 Sep 2013
5 min read
(For more resources related to this topic, see here.) Let's start testing. Apache Camel comes with the Camel Test Kit: some classes leverage testing framework capabilities and extend with Camel specifics. To test our application, let's add this Camel Test Kit to our list of dependencies in the POM file, as shown in the following code: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-test</artifactId> <version>${camel-version}</version> </dependency> At the same time, if you have any JUnit dependency, the best solution would be to delete it for now so that Maven will resolve the dependency and we will get a JUnit version required by Camel. Let's rewrite our main program a little bit. Change the class App as shown in the following code: public class App { public static void main(String[] args) throws Exception { Main m = new Main(); m.addRouteBuilder( new AppRoute() ); m.run(); } static class AppRoute extends RouteBuilder { @Override public void configure() throws Exception { from("stream:in") .to("file:test"); } } } Instead of having an anonymous class extending RouteBuilder, we made it an inner class. That is, we are not going to test the main program. Instead, we are going to test if our routing works as expected, that is, messages from the system input are routed into the files in the test directory. At the beginning of the test, we will delete the test directory and our assertion will be that we have the directory test after we send the message and that it has exactly one file. To simplify the deleting of the directory test at the beginning of the unit test, we will use FileUtils.deleteDirectory from Apache Commons IO. Let's add it to our list of dependencies: <dependency> <groupId>org.apache.commons</groupId> <artifactId>commons-io</artifactId> <version>1.3.2</version> </dependency> In our project layout, we have a file src/test/java/com/company/AppTest.java. This is a unit test that has been created from the Maven artifact that we used to create our application. Now, let's replace the code inside that file with the following code: package com.company; import org.apache.camel.builder.RouteBuilder; import org.apache.camel.test.junit4.CamelTestSupport; import org.apache.commons.io.FileUtils; import org.junit.After; import org.junit.BeforeClass; import org.junit.Test; import java.io.*; public class AppTest extends CamelTestSupport { static PipedInputStream in; static PipedOutputStream out; static InputStream originalIn; @Test() public void testAppRoute() throws Exception { out.write("This is a test message!n".getBytes()); Thread.sleep(2000); assertTrue(new File("test").listFiles().length == 1); } @BeforeClass() public static void setup() throws IOException { originalIn = System.in; out = new PipedOutputStream(); in = new PipedInputStream(out); System.setIn(in); FileUtils.deleteDirectory(new File("test")); } @After() public void teardown() throws IOException { out.close(); System.setIn(originalIn); } @Override public boolean isCreateCamelContextPerClass() { return false; } @Override protected RouteBuilder createRouteBuilder() throws Exception { return new App.AppRoute(); } } Now we can run mvn compile test from the console and see that the test was run and that it is successful: Some important things to take note of in the code of our unit test are as follows: We have extended the CamelTestSupport class for Junit4 (see the package it is imported from). There are also classes that support TestNG and Junit3. 
We have overridden the method createRouteBuilder() to return a RouteBuilder with our routes. We made our test class create CamelContext for each test method (annotated with @Test) by making isCreateCamelContextPerClass return false. System.in has been substituted with a piped stream in the setup() method and is set back to the original value in the teardown() method. The trick is in doing it before CamelContext is created and started (now you see why we create CamelContext for each test). Also, you may see that after we send the message into the output stream piped to System.in, we make the test thread stop for a couple of seconds to ensure that the message passes through the routes into the file. In short, our test suite overrides System.in with a piped stream so that we can write into System.in from the code, and deletes the directory test before the test class is loaded. After the class is loaded and right before the testAppRoute() method, it creates CamelContext, using routes created by the overridden method createRouteBuilder(). Then it runs the test method, which sends the bytes of the message into the piped stream so that they get into System.in, where they are read by Camel (note the \n terminating the message). Camel then does what is written in the routes, that is, creates a file in the test directory. To be sure this is done before we do our assertions, we make the thread executing the test sleep for 2 seconds. Then, we assert that we do have a file in the test directory at the end of the test. Our test works, but you can see that it already gets quite hairy with piping streams and making calls to Thread.sleep(), and that's just the beginning. We haven't yet started using external systems, such as FTP servers, web services, and JMS queues. Another concern is the integration of our application with other systems. Some of them may not have a test environment. In this case, we can't easily control the side effects of our application, the messages that it sends to and receives from those systems, or how those systems interact with our application. To solve this problem, software developers use mocking, as sketched briefly after the resources below.

Summary

Thus we learned about testing a Camel application in this article.

Resources for Article:

Further resources on this subject: Migration from Apache to Lighttpd [Article] Apache Felix Gogo [Article] Using the OSGi Bundle Repository in OSGi and Apache Felix 3.0 [Article]
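Picking up the mocking idea mentioned just before the summary, the following is a minimal sketch (not taken from the article) of how a route can be verified with Camel's mock component instead of real streams and files; the endpoint URIs and the class name are illustrative:

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.mock.MockEndpoint;
import org.apache.camel.test.junit4.CamelTestSupport;
import org.junit.Test;

public class AppMockTest extends CamelTestSupport {

    @Override
    protected RouteBuilder createRouteBuilder() {
        return new RouteBuilder() {
            @Override
            public void configure() {
                // Same shape as AppRoute, but with endpoints that need no external resources
                from("direct:in").to("mock:out");
            }
        };
    }

    @Test
    public void testMessageReachesEndpoint() throws Exception {
        MockEndpoint out = getMockEndpoint("mock:out");
        out.expectedMessageCount(1);
        out.expectedBodiesReceived("This is a test message!");

        // template is the ProducerTemplate provided by CamelTestSupport
        template.sendBody("direct:in", "This is a test message!");

        assertMockEndpointsSatisfied();
    }
}

Because the mock endpoint records what it receives, the assertions replace both the Thread.sleep() call and the check on the filesystem.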

Introducing QlikView elements

Packt
24 Sep 2013
6 min read
(For more resources related to this topic, see here.) People People are the only active element of data visualization, and as such, they are the most important. We briefly describe the roles of several people that participate in our project, but we mainly focus on the person who is going to analyze and visualize the data. After the meeting, we get together with our colleague, Samantha, who is the analyst that supports the sales and executive teams. She currently manages a series of highly personalized Excels that she creates from standard reports generated within the customer invoice and project management system. Her audience ranges from the CEO down to sales managers. She is not a pushover, but she is open to try new techniques, especially given that the sponsor of this project is the CEO of QDataViz, Inc. As a data discovery user, Samantha possesses the following traits: Ownership She has a stake in the project's success or failure. She, along with the company, stands to grow as a result of this project, and most importantly, she is aware of this opportunity. Driven She is focused on grasping what we teach her and is self-motivated to continue learning after the project is fi nished. The cause of her drive is unimportant as long as she remains honest. Honest She understands that data is a passive element that is open to diverse interpretations by different people. She resists basing her arguments on deceptive visualization techniques or data omission. Flexible She does not endanger her job and company results following every technological fad or whimsical idea. However, she realizes that technology does change and that a new approach can foment breakthroughs. Analytical She loves finding anomalies in the data and being the reason that action is taken to improve QDataViz, Inc. As a means to achieve what she loves, she understands how to apply functions and methods to manipulate data. Knowledgeable She is familiar with the company's data, and she understands the indicators needed to analyze its performance. Additionally, she serves as a data source and gives context to analysis. Team player She respects the roles of her colleagues and holds them accountable. In turn, she demands respect and is also obliged to meet her responsibilities. Data Our next meeting involves Samantha and Ivan, our Information Technology (IT) Director. While Ivan explains the data available in the customer invoice and project management system's well-defined databases, Samantha adds that she has vital data in Microsoft Excel that is missing from those databases. One Excel file contains the sales budget and another contains an additional customer grouping; both files are necessary to present information to the CEO. We take advantage of this discussion to highlight the following characteristics that make data easy to analyze. Reliable Ivan is going to document the origin of the tables and fields, which increases Samantha's confidence in the data. He is also going to perform a basic data cleansing and eliminate duplicate records whose only difference is a period, two transposed letters, or an abbreviation. Once the system is operational, Ivan will consider the impact any change in the customer invoice and project management system may have on the data. He will also verify that the data is continually updated while Samantha helps con firm the data's validity. Detailed Ivan will preserve as much detail as possible. 
If he is unable to handle large volumes of data as a whole, he will segment the detailed data by month and reduce the detail of a year's data in a consistent fashion. Conversely, he is will consider adding detail by prorating payments between the products of paid invoices in order to maintain a consistent level of detail between invoices and payments. Formal An Excel file as a data source is a short-term solution. While Ivan respects its temporary use to allow for a quick, first release of the data visualization project, he takes responsibility to find a more stable medium to long-term solution. In the span of a few months, he will consider modifying the invoice system, investing in additional software, or creating a simple portal to upload Excel files to a database. Flexible Ivan will not prevent progress solely for bureaucratic reasons. Samantha respects that Ivan's goal is to make data more standardized, secure, and recoverable. However, Ivan knows that if he does not move as quickly as business does, he will become irrelevant as Samantha and others create their own black market of company data. Referential Ivan is going to make available manifold perspectives of QDataViz, Inc. He will maintain history, budgets, and forecasts by customers, salespersons, divisions, states, and projects. Additionally, he will support segmenting these dimensions into multiple groups, subgroups, classes, and types. Tools We continue our meeting with Ivan and Samantha, but we now change our focus to what tool we will use to foster great data visualization and analysis. We create the following list of basic features we hope from this tool: Fast and easy implementation We should be able to learn the tool quickly and be able to deliver a first version of our data visualization project within a matter of weeks. In this fashion, we start receiving a return on our investment within a short period of time. Business empowerment Samantha should be able to continue her analysis with little help from us. Also, her audience should be able to easily perform their own lightweight analysis and follow up on the decisions made. Enterprise-ready Ivan should be able to maintain hundreds or thousands of users and data volumes that exceed 100 million rows. He should also be able to restrict access to certain data to certain users. Finally, he needs to have the confidence that the tools will remain available even if a server fails. Based on these expectations, we talk about data discovery tools, which are increasingly becoming part of the architecture of many organizations. Samantha can use these tools for self-service data analysis. In other words, she can create her own data visualizations without having to depend on pre-built graphs or reports. At the same time, Ivan can be reassured that the tool does not interfere with his goal of providing an enterprise solution that offers scalability, security, and high availability. The data discovery tool we are going to use is QlikView, and the following diagram shows the overall architecture we will build and where this article focuses its attention: Summary In this article, we learned about People, data, and tools which are an essential part of creating great data visualization and analysis. Resources for Article: Further resources on this subject: Meet QlikView [Article] Linking Section Access to multiple dimensions [Article] Creating sheet objects and starting new list using Qlikview 11 [Article]

Executing PDI jobs from a filesystem (Simple)

Packt
19 Sep 2013
7 min read
(For more resources related to this topic, see here.) Getting ready To get ready for this article, we first need to check that our Java environment is configured properly; to do this, check that the JAVA_HOME environment variable is set. Even if all the PDI scripts, when started, call other scripts that try to find out about our Java execution environment to get the values of the JAVA_HOME variable, it is always a good rule of thumb to have that variable set properly anytime we work with a Java application. The Kitchen script is in the PDI home directory, so the best thing to do to launch the script in the easiest way is to add the path to the PDI home directory to the PATH variable. This gives you the ability to start the Kitchen script from any place without specifying the absolute path to the Kitchen file location. If you do not do this, you will always have to specify the complete path to the Kitchen script file. To play with this article, we will use the samples in the directory <book_samples>/sample1; here, <book_samples> is the directory where you unpacked all the samples of the article. How to do it… For starting a PDI job in Linux or Mac, use the following steps: Open the command-line terminal and go to the <book_samples>/sample1 directory. Let's start the sample job. To identify which job file needs to be started by Kitchen, we need to use the –file argument with the following syntax: –file: <complete_filename_to_job_file> Remember to specify either an absolute path or a relative path by properly setting the correct path to the file. The simplest way to start the job is with the following syntax: $ kitchen.sh –file:./export-job.kjb If you're not positioned locally in the directory where the job files are located, you must specify the complete path to the job file as follows: $ kitchen.sh –file:/home/sramazzina/tmp/samples/export-job.kjb Another option to start our job is to separately specify the name of the directory where the job file is located and then give the name of the job file. To do this, we need to use the –dir argument together with the –file argument. The –dir argument lets you specify the location of the job file directory using the following syntax: –dir: <complete_path_to_ job_file_directory> So, if we're located in the same directory where the job resides, to start the job, we can use the following new syntax: $ kitchen.sh – dir:. –file:export-job.kjb If we're starting the job from a different directory than the directory where the job resides, we can use the absolute path and the –dir argument to set the job's directory as follows: $ kitchen.sh –dir:/home/sramazzina/tmp/samples –file:export-job.kjb For starting a PDI job with parameters in Linux or Mac, perform the following steps: Normally, PDI manages input parameters for the executing job. To set parameters using the command-line script, we need to use a proper argument. We use the –param argument to specify the parameters for the job we are going to launch. The syntax is as follows: -param: <parameter_name>= <parameter_value> Our sample job and transformation does accept a sample parameter called p_country that specifies the name of the country we want to export the customers to a file. Let's suppose we are positioned in the same directory where the job file resides and we want to call our job to extract all the customers for the country U.S.A. 
In this case, we can call the Kitchen script using the following syntax:

$ kitchen.sh –param:p_country=USA –file:./export-job.kjb

Of course, you can apply the –param switch to all the other three cases we detailed previously. For starting a PDI job in Windows, use the following steps: In Windows, a PDI job from the filesystem can be started by following the same rules that we saw previously, using the same arguments in the same way. The only difference is in the way we specify the command-line arguments. Any time we start PDI jobs from Windows, we need to specify the arguments using the / character instead of the – character we used for Linux or Mac. Therefore, this means that:

–file: <complete_filename_to_job_file>

Will become:

/file: <complete_filename_to_job_file>

And:

–dir: <complete_path_to_job_file_directory>

Will become:

/dir: <complete_path_to_job_file_directory>

From the directory <book_samples>/sample1, if you want to start the job, you can run the Kitchen script using the following syntax:

C:\temp\samples>Kitchen.bat /file:./export-job.kjb

Regarding the use of PDI parameters in command-line arguments, the second important difference on Windows is that we need to substitute the = character in the parameter assignment syntax with the : character. Therefore, this means that:

–param: <parameter_name>= <parameter_value>

Will become:

/param: <parameter_name>: <parameter_value>

From the directory <book_samples>/sample1, if you want to extract all the customers for the country U.S.A., you can start the job using the following syntax:

C:\temp\samples>Kitchen.bat /param:p_country:USA /file:./export-job.kjb

For starting the PDI transformations, perform the following steps: The Pan script starts PDI transformations. On Linux or Mac, you can find the pan.sh script in the PDI home directory. Assuming that you are in the same directory, <book_samples>/sample1, where the transformation is located, you can start a simple transformation with a command in the following way:

$ pan.sh –file:./read-customers.ktr

If you want to start a transformation by specifying some parameters, you can use the following command:

$ pan.sh –param:p_country=USA –file:./read-customers.ktr

In Windows, you can use the Pan.bat script, and the sample commands will be as follows:

C:\temp\samples>Pan.bat /file:./read-customers.ktr

Again, if you want to start a transformation by specifying some parameters, you can use the following command:

C:\temp\samples>Pan.bat /param:p_country:USA /file:./read-customers.ktr

Summary

In this article, you were guided through starting a PDI job using the Kitchen script. In this case, the PDI job we started was stored locally in the computer filesystem, but it could be anywhere on the network, in any place that is directly accessible. You learned how to start simple jobs both with and without a set of input parameters previously defined in the job. Using command-line scripts is a fast way to start batches, and it is also the easiest way to schedule our jobs using our operating system's scheduler, as sketched briefly after the resources below. The script accepts a set of inline arguments to pass the proper options required by the program to run our job in any specific situation.

Resources for Article:

Further resources on this subject: Integrating Kettle and the Pentaho Suite [Article] Installing Pentaho Data Integration with MySQL [Article] Pentaho – Using Formulas in Our Reports [Article]
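As mentioned in the summary, the same command line lends itself to scheduling. Here is a hedged example of a crontab entry on Linux; the installation path, log location, and schedule are assumptions, and the output redirection is added only as a common practice:

# Run the export job every night at 01:30 and append the output to a log file
30 1 * * * /opt/pdi/kitchen.sh -param:p_country=USA -file:/home/sramazzina/tmp/samples/export-job.kjb >> /var/log/pdi/export-job.log 2>&1

On Windows, the equivalent would be a Task Scheduler task that runs Kitchen.bat with the same /param and /file arguments.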

Oracle B2B Overview

Packt
17 Sep 2013
12 min read
B2B environment setup Here is the list of some OFM concepts that will be used in this article: Domain: It is the basic administration unit that includes a special WebLogic Server instance called the Administration Server, and optionally one or many Java components. Java component: It is a Java EE application deployed to an Oracle WebLogic Server domain as part of a domain template. For example, SOA Suite is a Java component. Managed server: It is an additional WebLogic Server included in a domain, to host Java components such as SOA Suite. We will use the UNIX operating system for our tutorials. The following table depicts the directory environment variables used throughout the article for configuring the Oracle SOA Suite deployment: Name Variable What It Is Example Middleware home MW_HOME The top-level directory for all OFM products WebLogic Server home WL_HOME Contains installed files necessary to host a WebLogic Server $MW_HOME/wlserver_10.3 Oracle home SOA_ORACLE_HOME Oracle SOA Suite product directory $MW_HOME/Oracle_SOA1 Oracle Common Home ORACLE_COMMON_HOME Contains the binary and library files required for the Oracle Enterprise Manager Fusion Middleware Control and Java Required Files (JRF) $MW_HOME/oracle_common Domain home SOA_DOMAIN_HOME The absolute path of the source domain containing the SOA Suite Java component $MW_HOME/user_projects/domains/SOADomain Java home JAVA_HOME Specifies the location of JDK (must be 1.6.04 or higher) or JRockit $MW_HOME/jdk160_29 Ant Home ANT_HOME Specifies the location of Ant archive location $MW_HOME/org.apache.ant_1.7.1 The following figure depicts a snapshot of the SOA Suite directory's hierarchical structure: For the recommended SOA Suite directory location, please refer to the OFM Enterprise Development guide for SOA Suite that can be found at http://docs.oracle.com/cd/E16764_01/core.1111/e12036/toc.htm. JDeveloper installation tips JDeveloper is a development tool that will be used in the article. It is a full service Integrated Development Environment (IDE), which allows for the development of SOA projects along with a host of other Oracle products, including Java. If it has not been installed yet, one may consider downloading and installing the VM VirtualBox (VBox) Image of the entire package of SOA Suite, B2B, and JDeveloper, provided by Oracle on the Oracle download site found at http://www.oracle.com/technetwork/middleware/soasuite/learnmore/vmsoa-172279.html. All you need to do is to install Oracle VM VirtualBox, and import the SOA/BPM appliance. This is for evaluation and trial purposes, and is not recommended for production use; however, for the purpose of following, along with the tutorial in the article, it is perfect. The following table shows minimum and recommended requirements for the VBox Image: Minimum Recommended Memory (RAM) 4-6 GB 8 GB Disk Space 25 GB 50 GB While VM's are convenient, they do use quite a bit of disk space and memory. If you don't have a machine that meets the minimum requirements, it will be a challenge to try the exercises. The other alternative is to download the bits for the platform you are using from the Oracle download page, and install each software package, and configure them accordingly, including a JDK, a DB, WebLogic Server, SOA Suite, and JDeveloper, among other things you may need. If you decide that you have enough system resources to run the VBox Image, here are some of the major steps that you need to perform to download and install it. 
Please follow the detailed instructions found in the Introduction and Readme file that can be downloaded from http://www.oracle.com/technetwork/middleware/soasuite/learnmore/soabpmvirtualboxreadme-1612068.pdf, in order to have the complete set of instructions. Download the Introduction and Readme file, and review. Enable hardware virtualization in your PC BIOS if necessary. To download and install the VirtualBox software (engine that runs the virtual machine on your host machine), click on the link Download and install Oracle VM VirtualBox on the download page. To download the 7 pieces of the ZIP file, click on each file download ending with 7z.00[1-7] on the download page. To download the MD5 Checksum tool if you don't have one, click on the link Download MD5sums if you're on Windows to check your download worked okay on the download page. Run the MD5 Checksum tool to verify the 7 downloaded files: md5sums oel5u5-64bit-soabpm-11gr1-ps5-2-0-M.7z.001. Repeat for all 7 files. (This takes quite a while, but it is best to do it, so that you can verify that your download is complete and accurate.) Compare the results of the program with the results in the download that ends with .mdsum. They should match exactly. Extract the VBox Image from the .001 file using a Zip/Unzip tool. Use a ZIP tool such as 7-Zip (available as freeware for Windows), WinZip, or other to extract the .ova file from the 7 files into a single file on your platform. Using 7-Zip, if you extract from the first file; it will find the other 6 files and combine them all as it extracts. Start VirtualBox and set preferences such as the location of the VBox Image on your disk (follow instructions in the readme file). Import the new .ova file that was extracted from the ZIP file. Check settings and adjust memory/CPU. Start the appliance (VBox Image). Login as oracle with password oracle (check Readme). Choose the domain type dev_soa_osb. Set up a shared folder, you can use to share files between your machine and the virtual machine, and restart the VM. Once you are logged back in, start the admin server using the text based menu. Once the server is started, you can start the graphical desktop using the text based menu. Click on the jDeveloper Icon on the desktop of the VM to start jDeveloper. Choose Default Role when prompted for a role. At the time of writing, the latest available version is 11g PS5 (11.1.1.1.6). The VBox Image comes with SOA Suite, Oracle 11g XE Database, and JDeveloper, pre-installed on a Linux Virtual Machine. Using the VirtualBox technology, you can run this Linux machine virtually on your laptop, desktop, or on a variety of other platforms. For the purpose of this article, you should choose the dev_soa_osb type of domain. System requirements Oracle B2B is installed as a part of the SOA Suite installation. The SOA Suite installation steps are well documented, and are beyond the scope of this article. If you have never installed Oracle SOA Suite 11g, check with the Installation Guide for Oracle SOA Suite and Oracle Business Process Management Suite 11g. It can be downloaded from the Oracle Technology Network (OTN) Documentation downloads page at http://docs.oracle.com/cd/E23943_01/doc.1111/e13925/toc.htm. There are several important topics that did not have enough coverage in the SOA Suite Installation Guide. One of them is how to prepare the environment for the SOA/B2B installation. 
To begin, it is important to validate whether your environment meets the minimum requirements specified in the Oracle Fusion Middleware System Requirements and Specifications document. It can be downloaded from the Oracle Technology Network (OTN) Documentation downloads page at http://docs.oracle.com/html/E18558_01/fusion_requirements.htm. The spreadsheet provides very important SOA Suite installation recommendations, such as the minimum disk space information and memory requirements that could help the IT hardware team with its procurement planning process. For instance, the Oracle recommended hardware and system requirements for SOA Suite are: Minimum Physical Memory required: 2 gigabytes Minimum available Memory Required: 4 gigabytes CPU: dual-core Pentium, 1.5 GHz or greater Disk Space: 15 gigabytes or more This document also has information about supported databases and database versions. Another important document that has plenty of relevant information is Oracle Fusion Middleware 11g Release 1 (11.1.1.x) Certification Matrix. It can be downloaded from the Oracle Technology Network (OTN) Documentation downloads page at http://www.oracle.com/technetwork/middleware/downloads/fmw-11gr1certmatrix.xls. It is indeed a treasure chest of information. This spreadsheet may save you from a lot of headache. The last thing someone wants to run into is the need to re-install, just because they did not properly read and /or missed some recommendations. Here are some important points from the spreadsheet you don't want to miss: Hardware platform version's compatibility with a particular SOA Suite release Supported JDK versions Interoperability support for SOA Suite with WebLogic Server Supported database versions In conclusion, the following list includes a complete SOA Suite software stack (as used in the article): Oracle WebLogic Server (10.1.3.6) (Required) Repository Creation Utility (RCU) (11.1.1.6.0) (Required) SOA Suite 11g (11.1.1.6.0) (Required) JDeveloper (11.1.1.6.0) (Required) JDeveloper Extension for SOA (Required) Oracle B2B Document Editor release 7.05 (Optional) Oracle Database 11g (Required) Oracle B2B installation and post-installation configuration notes There are several important installation and post-installation steps that may directly or indirectly impact the B2B component's behavior. Creating a WebLogic domain is one of the most important SOA Suite 11g installation steps. The BEA engineers, who used to work with WebLogic prior to 11g, never before had to select SOA Suite components while creating a domain. This process is completely new for the Oracle engineers who are familiar only with prior releases of SOA Suite. There are several steps in this process that, if missed, might require a complete re-installation. A common mistake that people make when creating a new domain is that they don't check the Enterprise Manager checkbox. As a result, Enterprise Manager is not available, meaning that neither instance monitoring and tracking, nor access to the B2B configuration properties is available. Make sure you do not make such a mistake by selecting the Oracle Enterprise Manager checkbox. Oracle Enterprise Manager has been assigned a new name: Enterprise Manager Fusion Middleware Control. 
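Before you even reach the domain creation screens, it is worth a quick check that the machine satisfies the figures quoted above. The following is a rough, Linux-only sketch using standard utilities; the install location shown is an assumption:

# Quick sanity check against the figures quoted above (Linux; adjust for your OS).
free -g                      # compare total and available memory with the 2 GB / 4 GB figures
df -h /u01/app/oracle        # assumed install location; you need roughly 15 GB free
nproc                        # a dual-core 1.5 GHz or faster CPU is recommended
java -version                # confirm a supported JDK (see the certification matrix)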
While planning the SOA Suite deployment architecture, it is recommended to choose ahead of time between the following two WebLogic domain configurations:
Developers domain
Production domain
In the Developers domain configuration, SOA Suite is installed as part of the Administration Server, meaning that a separate managed server will not be created. This configuration can be a good choice for a development server, a personal laptop, or any environment where available memory is limited. Keep in mind that SOA Suite requires up to 4 gigabytes of available memory. To set up the Developers domain, select the Oracle SOA Suite for developers checkbox on the Select Domain Source page, as shown in the following screenshot:
Oracle strongly recommends against using this configuration in a production environment and warns that it will not be supported; that is, Oracle Support won't be able to provide assistance for any issues that occur in such an environment. Conversely, if the Oracle SOA Suite checkbox is selected, as shown in the following screenshot, a managed server is created with the default name soa_server1. Creating a separate managed server (which is a WebLogic Java Virtual Machine) and deploying SOA Suite to it provides a more scalable configuration.
If SOA Suite for developers was installed, you need to perform the following steps to activate the B2B user interface:
Log in to the Oracle WebLogic Server Administration Console using the URL http://<localserver>:7001/console (note that 7001 is the default port unless a different port was chosen during the installation process).
Provide the default administrator account (the weblogic user, unless it was changed during the installation process).
Select Deployments from the Domain Structure list.
On the bottom-right side of the page, select b2bui from the Deployments list (as shown in the following screenshot).
On the next page, click on the Targets tab.
Select the Component checkbox to enable the Change Target button.
Click on the Change Target button.
Select the AdminServer checkbox and click on the Yes button.
Click on the Activate Changes button.
Click on the Deployments link in the WebLogic domain structure. The B2B user interface is activated.
If the SOA Suite production configuration was chosen, these steps are not necessary. You must, however, first configure Node Manager. To do that, execute the setNMProps script and then start Node Manager:
$ORACLE_COMMON_HOME/common/bin/setNMProps.sh
$MW_HOME/wlserver_n/server/bin/startNodeManager.sh
Oracle B2B web components
Oracle B2B Gateway is deployed as part of the Oracle SOA Service Infrastructure, or SOA-Infra. SOA Infrastructure is a Java EE compliant application running on Oracle WebLogic Server.
Java EE compliant application: A wrapper around web applications and Enterprise Java Bean (EJB) applications.
Web application: Usually represents the user interface layer and includes Java Server Pages (JSP), servlets, HTML, and so on.
Servlet: A module of Java code that runs as a server-side application.
Java Server Pages (JSP): A programming technology used to build dynamic web pages.
WAR archive: An artifact used for web application deployment.
Enterprise Java Beans: Server-side domain objects that fit into a standard component-based architecture for building enterprise applications with Java.
EJB application: A collection of Enterprise Java Beans.
The following list shows the B2B web components installed as part of the SOA Infrastructure application. They include an enterprise application, several EJBs, a web application, and a web service. The b2b web application provides a link to the B2B interface. The B2BMetadataWS web service provides the Oracle SOA Service Infrastructure with access to the metadata repository. The stateless EJBs are used by the B2B Engine. This list is helpful for understanding how Oracle B2B integrates with Oracle SOA Suite, and it can also be useful when developing a B2B high-availability architecture. The components, by name and application type, are:
b2b: Web Application
b2bui: JEE Application
B2BInstanceMessageBean: EJB
B2BStarterBeanWLS: EJB
B2BUtilityBean: EJB
B2BMetadataWS: Web Service
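Pulling the installation notes together, the startup sequence for the production-style domain (with a separate soa_server1 managed server) typically looks like the following sketch. The setNMProps and Node Manager scripts are the ones mentioned above; startWebLogic.sh and startManagedWebLogic.sh are the standard scripts under the domain's bin directory; host names and ports are assumptions for a default local install.

# Hedged outline of a typical startup sequence for the production-style domain.
$ORACLE_COMMON_HOME/common/bin/setNMProps.sh             # one-time Node Manager property setup
$WL_HOME/server/bin/startNodeManager.sh &                # start Node Manager (WL_HOME is the wlserver_10.3 directory)
$SOA_DOMAIN_HOME/bin/startWebLogic.sh &                  # start the Administration Server (port 7001 by default)
# Once the Administration Server reports RUNNING, start the SOA managed server:
$SOA_DOMAIN_HOME/bin/startManagedWebLogic.sh soa_server1 t3://localhost:7001

Once soa_server1 reports a RUNNING state, the SOA Infrastructure application, and with it the B2B components listed above, should be available.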

Linking OpenCV to an iOS project

Packt
16 Sep 2013
7 min read
(For more resources related to this topic, see here.) Getting ready First you should download the OpenCV framework for iOS from the official website at http://opencv.org. In this article, we will use Version 2.4.6. How to do it... The following are the main steps to accomplish the task: Add the OpenCV framework to your project. Convert image to the OpenCV format. Process image with a simple OpenCV call. Convert image back. Display image as before. Let's implement the described steps: We continue modifying the previous project, so that you can use it; otherwise create a new project with UIImageView. We'll start by adding the OpenCV framework to the Xcode project. There are two ways to do it. You can add the framework as a resource. This is a straightforward approach. Alternatively, the framework can be added through project properties by navigating to Project | Build Phases | Link Binary With Libraries. To open project properties you should click to the project name in the Project Navigator area. Next, we'll include OpenCV header files to our project. To avoid conflicts, we will add the following code to the very beginning of the file, above all other imports: #ifdef __cplusplus#import <opencv2/opencv.hpp>#endif This is needed, because OpenCV redefines some names, for example, min/max functions. Set the value of Compile Sources As property as Objective-C++. The property is available in the project settings and can be accessed by navigating to Project | Build Settings | Apple LLVM compiler 4.1 - Language. To convert the images from UIImageto cv::Mat, you can use the following functions: UIImage* MatToUIImage(const cv::Mat& image) { NSData *data = [NSData dataWithBytes:image.data length:image.elemSize()*image.total()]; CGColorSpaceRef colorSpace; if (image.elemSize() == 1) { colorSpace = CGColorSpaceCreateDeviceGray(); } else { colorSpace = CGColorSpaceCreateDeviceRGB(); } CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data); // Creating CGImage from cv::Mat CGImageRef imageRef = CGImageCreate(image.cols, //width image.rows, //height 8, //bits per component 8*image.elemSize(),//bits per pixel image.step.p[0], //bytesPerRow colorSpace, //colorspace kCGImageAlphaNone|kCGBitmapByteOrderDefault,// bitmap info provider, //CGDataProviderRef NULL, //decode false, //should interpolate kCGRenderingIntentDefault //intent ); // Getting UIImage from CGImage UIImage *finalImage = [UIImage imageWithCGImage:imageRef]; CGImageRelease(imageRef); CGDataProviderRelease(provider); CGColorSpaceRelease(colorSpace); return finalImage;}void UIImageToMat(const UIImage* image, cv::Mat& m, bool alphaExist = false){ CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage); CGFloat cols = image.size.width, rows = image.size.height; CGContextRef contextRef; CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast; if (CGColorSpaceGetModel(colorSpace) == 0) { m.create(rows, cols, CV_8UC1); //8 bits per component, 1 channel bitmapInfo = kCGImageAlphaNone; if (!alphaExist) bitmapInfo = kCGImageAlphaNone; contextRef = CGBitmapContextCreate(m.data, m.cols, m.rows, 8, m.step[0], colorSpace, bitmapInfo); } else { m.create(rows, cols, CV_8UC4); // 8 bits per component, 4 channels if (!alphaExist) bitmapInfo = kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault; contextRef = CGBitmapContextCreate(m.data, m.cols, m.rows, 8, m.step[0], colorSpace, bitmapInfo); } CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage); CGContextRelease(contextRef);} These functions are 
included into the library starting from Version 2.4.6 of OpenCV. In order to use them, you should include the ios.h header file. #import "opencv2/highgui/ios.h" Let's consider a simple example that extracts edges from the image. In order to do so, you have to add the following code to the viewDidLoad()method: viewDidLoad() method:- (void)viewDidLoad{ [super viewDidLoad]; UIImage* image = [UIImage imageNamed:@"lena.png"]; // Convert UIImage* to cv::Mat UIImageToMat(image, cvImage); if (!cvImage.empty()) { cv::Mat gray; // Convert the image to grayscale cv::cvtColor(cvImage, gray, CV_RGBA2GRAY); // Apply Gaussian filter to remove small edges cv::GaussianBlur(gray, gray, cv::Size(5, 5), 1.2, 1.2); // Calculate edges with Canny cv::Mat edges; cv::Canny(gray, edges, 0, 50); // Fill image with white color cvImage.setTo(cv::Scalar::all(255)); // Change color on edges cvImage.setTo(cv::Scalar(0, 128, 255, 255), edges); // Convert cv::Mat to UIImage* and show the resulting image imageView.image = MatToUIImage(cvImage); }} Now run your application and check whether the application finds edges on the image correctly. How it works... Frameworks are intended to simplify the process of handling dependencies. They encapsulate header and binary files, so the Xcode sees them, and you don't need to add all the paths manually. Simply speaking, the iOS framework is just a specially structured folder containing include files and static libraries for different architectures (for example, armv7, armv7s, and x86). But Xcode knows where to search for proper binaries for each build configuration, so this approach is the simplest way to link external library on the iOS. All dependencies are handled automatically and added to the final application package. Usually, iOS applications are written in Objective-C language. Header files have a *.h extension and source files have *.m. Objective-C is a superset of C, so you can easily mix these languages in one file. But OpenCV is primarily written in C++, so we need to use C++ in the iOS project, and we need to enable support of Objective-C++. That's why we have set the language property to Objective-C++. Source files in Objective-C++ language usually have the *.mm extension. To include OpenCV header files, we use the #importdirective. It is very similar to #include in C++, while there is one distinction. It automatically adds guards for the included file, while in C++ we usually add them manually: #ifndef __SAMPLE_H__#define __SAMPLE_H__…#endif In the code of the example, we just convert the loaded image from a UIImage object to cv::Matby calling the UIImageToMat function. Please be careful with this function, because it entails a memory copy, so frequent calls to this function will negatively affect your application's performance. Please note that this is probably the most important performance tip—to be very careful while working with memory in mobile applications. Avoid memory reallocations and copying as much as possible. Images require quite large chunks of memory, and you should reuse them between iterations. For example, if your application has some pipeline, you should preallocate all buffers and use the same memory while processing new frames. After converting images, we do some simple image processing with OpenCV. First, we convert our image to the single-channel one. After that, we use the GaussianBlur filter to remove small details. Then we use the Canny method to detect edges in the image. 
To visualize results, we create a white image and change the color of the pixels that lie on detected edges. The resulting cv::Mat object is converted back to UIImage and displayed on the screen. There's more... The following is additional advice. Objective-C++ There is one more way to add support of Objective-C++ to your project. You should just change the extension of the source files to .mm where you plan to use C++ code. This extension is specific to Objective-C++ code. Converting to cv::Mat If you don't want to use UIImage, but want to load an image to cv::Mat directly, you can do it using the following code: // Create file handleNSFileHandle* handle = [NSFileHandle fileHandleForReadingAtPath:filePath];// Read content of the fileNSData* data = [handle readDataToEndOfFile];// Decode image from the data buffercvImage = cv::imdecode(cv::Mat(1, [data length], CV_8UC1, (void*)data.bytes), CV_LOAD_IMAGE_UNCHANGED); In this example we read the file content to the buffer and call the cv::imdecode function to decode the image. But there is one important note; if you later want to convert cv::Mat to the UIImage, you should change the channel order from BGR to RGB, as OpenCV's native image format is BGR. Summary This article explained how to link the OpenCV library and call any function from it. Resources for Article: Further resources on this subject: A quick start – OpenCV fundamentals [Article] Using Image Processing Techniques [Article] OpenCV: Segmenting Images [Article]

Report Authoring

Packt
16 Sep 2013
12 min read
(For more resources related to this topic, see here.) In this article, we will cover some fundamental techniques that will be used in your day-to-day life as a Report Studio author. In each recipe, we will take a real-life example and see how it can be accomplished. At the end of the article, you will learn several concepts and ideas which you can mix-and-match to build complex reports. Though this article is called Report Authoring Basic Concepts, it is not a beginner's guide or a manual. It expects the following: You are familiar with the Report Studio environment, components, and terminologies You know how to add items on the report page and open various explorers and panes You can locate the properties window and know how to test run the report Based on my personal experience, I will recommend this article to new developers with two days to two months of experience. In the most raw terminology, a report is a bunch of rows and columns. The aim is to extract the right rows and columns from the database and present them to the users. The selection of columns drive what information is shown in the report, and the selection of rows narrow the report to a specific purpose and makes it meaningful. The selection of rows is controlled by filters. Report Studio provides three types of filtering: detail , summary , and slicer. Slicers are used with dimensional models.). In the first recipe of this article, we will cover when and why to use the detail and summary filters. Once we get the correct set of rows by applying the filters, the next step is to present the rows in the most business-friendly manner. Grouping and ordering plays an important role in this. The second recipe will introduce you to the sorting technique for grouped reports. With grouped reports, we often need to produce subtotals and totals. There are various types of aggregation possible. For example, average, total, count, and so on. Sometimes, the nature of business demands complex aggregation as well. In the third recipe, you will learn how to introduce aggregation without increasing the length of the query. You will also learn how to achieve different aggregation for subtotals and totals. The fourth recipe will build upon the filtering concept you have learnt earlier. It will talk about implementing the if-then-elselogic in filters. Then we will see some techniques on data formatting, creating sections in a report, and hiding a column in a crosstab. Finally, the eighth and last recipe of this article will show you how to use prompt's Use Value and Display Value properties to achieve better performing queries. The examples used in all the recipes are based on the GO Data Warehouse (query) package that is supplied with IBM Cognos 10.1.1 installation. These recipe samples can be downloaded from the Packt Publishing website. They use the relational schema from the Sales and Marketing (query) / Sales (query) namespace. The screenshots used throughout this article are taken from Cognos Version 10.1.1 and 10.2. Summary filters and detail filters Business owners need to see the sales quantity of their product lines to plan their strategy. They want to concentrate only on the highest selling product for each product line. They would also like the facility to select only those orders that are shipped in a particular month for this analysis. In this recipe, we will create a list report with product line, product name, and quantity as columns. We will also create an optional filter on the Shipment Month Key. 
Also, we will apply correct filtering to bring up only the top selling product per product line. Getting ready Create a new list report based on the GO Data Warehouse (query) package. From the Sales (query) namespace, bring up Products / Product line , Products / Product , and Sales fact / Quantity as columns, the way it is shown in the following screenshot: How to do it... Here we want to create a list report that shows product line, product name, and quantity, and we want to create an optional filter on Shipment Month. The report should also bring up only the top selling product per product line. In order to achieve this, perform the following steps: We will start by adding the optional filter on Shipment Month. To do that, click anywhere on the list report on the Report page. Then, click on Filters from the toolbar. In the Filters dialog box, add a new detail filter. In the Create Filter screen, select Advanced and then click on OK as shown in the following screenshot: By selecting Advanced , we will be able to filter the data based on the fields that are not part of our list table like the Month Key in our example as you will see in the next step. Define the filter as follows: [Sales (query)].[Time (ship date)].[Month key (ship date)] = ?ShipMonth? Validate the filter and then click on OK. Set the usage to Optional as shown in the following screenshot: Now we will add a filter to bring only the highest sold product per product line. To achieve this, select Product line and Product (press Ctrl and select the columns) and click on the group button from the toolbar. This will create a grouping as shown in the following screenshot: Now select the list and click on the filter button again and select Edit Filters . This time go to the Summary Filters tab and add a new filter. In the Create Filter screen, select Advanced and then click on OK. Define the filter as follows: [Quantity] = maximum([Quantity] for [Product line]). Set usage to Required and set the scope to Product as shown in the following screenshot: Now run the report to test the functionality. You can enter 200401as the Month Key as that has data in the Cognos supplied sample. How it works... Report Studio allows you to define two types of filters. Both work at different levels of granularity and hence have different applications. The detail filter The detail filter works at the lowest level of granularity in a selected cluster of objects. In our example, this grain is the Sales entries stored in Sales fact . By putting a detail filter on Shipment Month, we are making sure that only those sales entries which fall within the selected month are pulled out. The summary filter In order to achieve the highest sold product per product line, we need to consider the aggregated sales quantity for the products. If we put a detail filter on quantity, it will work at sales entry level. You can try putting a detail filter of [Quantity] = maximum([Quantity]for[Productline])and you will see that it gives incorrect results. So, we need to put a summary filter here. In order to let the query engine know that we are interested in filtering sales aggregated at product level, we need to set the SCOPE to Product . This makes the query engine calculate [Quantity]at product level and then allows only those products where the value matches maximum([Quantity]for [Product line]). There's more... When you define multiple levels of grouping, you can easily change the scope of summary filters to decide the grain of filtering. 
For example, if you need to show only those products whose sales are more than 1000 and only those product lines whose sales are more than 25000, you can quickly put two summary filters for code with the correct Scope setting. Before/after aggregation The detail filter can also be set to apply after aggregation (by changing the application property). However, I think this kills the logic of the detail filter. Also, there is no control on the grain at which the filter will apply. Hence, Cognos sets it to before aggregation by default, which is the most natural usage of the detail filter. See also The Implementing if-then-else in filtering recipe Sorting grouped values The output of the previous recipe brings the right information back on the screen. It filters the rows correctly and shows the highest selling product per product line for the selected shipment month. For better representation and to highlight the best-selling product lines, we need to sort the product lines in descending order of quantity. Getting ready Open the report created in the previous recipe in Cognos Report Studio for further amendments. How to do it... In the report created in the previous recipe, we managed to show data filtered by the shipment month. To improve the reports look and feel, we will sort the output to highlight the best-selling products. To start this, perform the following steps: Open the report in Cognos Report Studio. Select the Quantity column. Click on the Sort button from the toolbar and choose Sort Descending . Run the report to check if sorting is working. You will notice that sorting is not working. Now go back to Report Studio, select Quantity , and click on the Sort button again. This time choose Edit Layout Sorting under the Other Sort Options header. Expand the tree for Product line . Drag Quantity from Detail Sort List to Sort List under Product line as shown in the following screenshot: Click on the OK button and test the report. This time the rows are sorted in descending order of Quantity as required. How it works... The sort option by default works at the detailed level. This means the non-grouped items are sorted by the specified criteria within their own groups. Here we want to sort the product lines that are grouped (not the detailed items). In order to sort the groups, we need to define a more advanced sorting using the Edit Layout Sorting options shown in this recipe. There's more... You can also define sorting for the whole list report from the Edit Layout Sorting dialog box. You can use different items and ordering for different groups and details. You can also choose to sort certain groups by the data items that are not shown in the report. You need to bring only those items from source (model) to the query, and you will be able to pick it in the sorting dialog. Aggregation and rollup aggregation Business owners want to see the unit cost of every product. They also want the entries to be grouped by product line and see the highest unit cost for each product line. At the end of the report, they want to see the average unit cost for the whole range. Getting ready Create a simple list report with Products / Product line , Products / Product , and Sales fact / Unit cost as columns. How to do it... In this recipe, we want to examine how to aggregate the data and what is meant by rollup aggregation. Using the new report that you have created, this is how we are going to start this recipe: We will start by examining the Unit cost column. 
Click on this column and check the Aggregate Function property. Set this property to Average . Add grouping for Product line and Product by selecting those columns and then clicking on the GROUP button from the toolbar. Click on the Unit cost column and then click on the Summarize button from the toolbar. Select the Total option from the list. Now, again click on the Summarize button and choose the Average option as shown in the following screenshot: The previous step will create footers as shown in the following screenshot: Now delete the line with the <Average (Unit cost)> measure from Product line . Similarly, delete the line with the <Unit cost> measure from Summary . The report should look like the following screenshot: Click on the Unit cost column and change its Rollup Aggregate Function property to Maximum . Run the report to test it. How it works... In this recipe, we have seen two properties of the data items related to aggregation of the values. The aggregation property We first examined the aggregation property of unit cost and ensured that it was set to average. Remember that the unit cost here comes from the sales table. The grain of this table is sales entries or orders. This means there will be many entries for each product and their unit cost will repeat. We want to show only one entry for each product and the unit cost needs to be rolled up correctly. The aggregation property determines what value is shown for unit cost when calculated at product level. If it is set to Total , it will wrongly add up the unit costs for each sales entry. Hence, we are setting it to Average . It can be set to Minimum or Maximum depending on business requirements. The rollup aggregation property In order to show the maximum unit cost for product type, we create an aggregate type of footer in step 4 and set the Rollup Aggregation to Maximum in step 8. Here we could have directly selected Maximum from the Summarize drop-down toolbox. But that creates a new data item called Maximum (Unit Cost) . Instead, we ask Cognos to aggregate the number in the footer and drive the type by rollup aggregation property. This will reduce one data item in query subject and native SQL. Multiple aggregation We also need to show the overall average at the bottom. For this we have to create a new data item. Hence, we select unit cost and create an Average type of aggregation in step 5. This calculates the Average (Unit Cost) and places it on the product line and in the overall footer. We then deleted the aggregations that are not required in step 7. There's more... The rollup aggregation of any item is important only when you create the aggregation of Aggregate type. When it is set to automatic, Cognos will decide the function based on the data type, which is not preferred. It is good practice to always set the aggregation and rollup aggregation to a meaningful function rather than leaving them as automatic.
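For quick reference, the report expressions used in these recipes, plus one hedged way of writing the filters described in the There's more section of the first recipe, are listed below. The last two lines are not taken from the recipes themselves; they assume the summary filter scope is set to Product and Product line respectively.

Detail filter (optional, prompts for the shipment month):
[Sales (query)].[Time (ship date)].[Month key (ship date)] = ?ShipMonth?
Summary filter, scope set to Product (top-selling product per product line):
[Quantity] = maximum([Quantity] for [Product line])
Possible summary filter, scope set to Product (products selling more than 1000):
[Quantity] > 1000
Possible summary filter, scope set to Product line (product lines selling more than 25000):
[Quantity] > 25000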

RavenDB.NET Client API

Packt
11 Sep 2013
12 min read
(For more resources related to this topic, see here.) The RavenDB .NET Client API RavenDB provides a set of .NET client libraries for interacting with it, which can be accessed from any .NET-based programming languages. By using these libraries, you can manage RavenDB, construct requests, send them to the server, and receive responses. The .NET Client API exposes all aspects of the RavenDB and allows developers to easily integrate RavenDB services into their own applications. The .NET Client API is involved with loading and saving the data, and it is responsible for integrating with the .NET System.Transactions namespace. With System.Transactions, there is already a general way to work transactionally across different resources. Also the .NET Client API is responsible for batching requests to the server, caching, and more. The .NET Client API is easy to use and uses several conventions to control how it works. These conventions can be modified by the user to meet its application needs. It is not recommended to use System.Transactions. In fact, it is recommended to avoid it. System.Transactions is supported in RavenDB mostly to allow integration while using collaboration tools between services for example, NServiceBus. Setting up your development environment Before you can start developing an application for RavenDB, you must first set up your development environment. This setup involves installing the following software packages on your development computer: Visual Studio 2012 (required) RavenDB Client (required) NuGet Package Manager (optional) You may download and install the latest Visual Studio version from the Microsoft website: http://msdn.microsoft.com/En-us/library/vstudio/e2h7fzkw.aspx In order to use RavenDB in your own .NET application, you have to add a reference to the Raven.Client.Lightweight.dll and the Raven.Abstractions.dll files, (which you can find in the ~Client folder of the distribution package) into your Visual Studio project. The easiest way to add a reference to these DLLs is by using the NuGet Package Manager, which you may use to add the RavenDB.Client package. NuGet Package Manager is a Visual Studio extension that makes it easy to add, remove, and update libraries and tools in Visual Studio projects that use the .NET framework. You can find more information on the NuGet Package Manager extension by visiting the official website at http://nuget.org/. Time for action – installing NuGet Package Manager The NuGet Package Manager extension is the easiest way to add the RavenDB Client library to a Visual Studio project. If you do not have NuGet Package Manager already installed, you can install it as follows: Start Visual Studio. From the TOOLS menu, click on Extensions and Updates.... In the Extensions and Updates... dialog, click on Online. If you don’t see NuGet Package Manager, type nuget package manager in the search box. Select the NuGet Package Manager extension and click on Download. After the download completes, you will be prompted to install the package. After the installation completes, you might be prompted to restart Visual Studio. What just happened? We installed the NuGet Package Manager extension to Visual Studio, which you will use to add RavenDB Client to your Visual Studio project. Creating a simple application Let’s write a simple application to learn how to interact with RavenDB. Time for action – adding RavenDB Client to a Visual Studio project Let’s go ahead and create a new Visual Studio project. 
You will add the RavenDB Client using the Nuget Package Manager extension to this project to be able to connect to RavenDB and begin using it: Start Visual Studio and click on New Project from the Start page or from the File menu, navigate to New and then Project. In the Templates pane, click on Installed and then click on Templates and expand the Visual C# node. Under Visual C#, click on Windows. In the list of project templates, select Console Application. Name the project as RavenDB_Ch03 and click on OK. From the TOOLS menu, click on Library Package Manager. If you do not see this menu item, make sure that the NuGet Package Manager extension has been installed correctly. Click on Manage NuGet Packages for Solution.... In the Manage NugGet Packages dialog, select Online. In the search box, type ravendb. Select the package named RavenDB Client. Click on Install and accept the license. After the package installs, click on Close to close the dialog box. What just happened? We created the RavenDB_Ch03 Visual Studio project and added the RavenDB Client to get connected to the RavenDB server. Once the RavenDB Client is installed, by expanding the References node of your project in Visual Studio, you can see the RavenDB DLLs (Raven.Abstractions, Raven.Client.Lightweight) added automatically to your project by the Nuget Package Manager extension. You should ensure that the RavenDB Client version matches the server version you are running. This can lead to some really frustrating runtime errors when the versions don’t match. You can also install RavenDB Client using the Package Manager console (Visual Studio | Tools | Library Package Manager | Package Manager Console). To install the latest RavenDB Client, run the following command in the Package Manager console: Install-Package RavenDB.Client or Install-Package RavenDB.Client–Version {version number} to add a specific version. Connecting to RavenDB To begin using RavenDB we need to get a new instance of the DocumentStore object, which points to the server and acts as the main communication channel manager. Once a new instance of DocumentStore has been created, the next step is to create a new session against that Document Store. The session object is responsible for implementing the Unit of Work pattern. A Unit of Work keeps track of everything you do during a business transaction that can affect the database. When you’re done, it figures out everything that needs to be done to alter the database as a result of your work. The session object is used to interact with RavenDB and provides a fully transactional way of performing database operations, and allows the consumer to store data into the database, and load it back when necessary using queries or by document ID. In order to perform an action against the RavenDB store, we need to ensure that we have an active and valid RavenDB session before starting the action, and dispose it off once it ends. Basically, before disposing of the session, we will call the SaveChanges() method on the session object to ensure that all changes will be persisted. To create a new RavenDB session, we will call the OpenSession() method on the DocumentStore object, which will return a new instance of the IDocumentSession object. Time for action – connecting to RavenDB Your Visual Studio project now has a reference to all needed RavenDB Client DLLs and is ready to get connected to the RavenDB server. 
You will add a new class and necessary code to create a communication channel with the RavenDB server: Open the RavenDB_Ch03 project. Modify the Main() method (in Program.cs class) to look like the following code snippet: Press Ctrl + Shift + S to save all files in the solution. What just happened? We just added the DocumentStore object initialization code to the Main() method of the RavenDB_Ch03 project. Within the Main() method, you create a new instance of the DocumentStore object, which is stored in the DocumentStore variable. The DocumentStore object basically acts as the connection manager and must be created only once per application. It is important to point out that this is a heavyweight and thread safe object. Note that, when you create many of these your application will run slow and will have a larger memory footprint. To create an instance of the DocumentStore object, you need to specify the URL that points to the RavenDB server. The URL has to include the TCP port number (8080 is the default port number) on which RavenDB server is listening (line 17 in the previous code snippet). In order to point to the Orders database, you set the value of DefaultDatabase to Orders, which is the name of our database (line 18). To get the new instance of the IDocumentStore object, you have to call the Initialize() method on the DocumentStore object. With this instance, you can establish the connection to the RavenDB server (line 19). The whole DocumentStore initialization code is surrounded by a using statement. This is used to specify when resources should be released. Basically, the using statement calls the Dispose() method of each object for which the resources should be released. Interacting with RavenDB using the .NET Client API RavenDB stores documents in JSON format. It converts the .NET objects into their JSON equivalent when storing documents, and back again by mapping the .NET object property names to the JSON property names and copies the values when loading documents. This process that makes writing and reading data structures to and from a document file extremely easy is called Serialization and Deserialization. Interacting with RavenDB is very easy. You might create a new DocumentStore object and then open a session, do some operations, and finally apply the changes to the RavenDB server. The session object will manage all changes internally, but changes will be committed to the underlying document database only when the SaveChanges() method is called. This is important to note because all changes to the document will be discarded if this method is not invoked. RavenDB is safe by default. This unique feature means that the database is configured to stop users querying for large amount of data. It is never a good idea to have a query that returns thousands of records, which is inefficient and may take a long time to process. By default, RavenDB limits the number of documents on the client side to 128 (this is a configurable option) if you don't make use of the Take() method on the Query object. Basically, if you need to get data beyond the 128 documents limit, you need to page your query results (by using Take() and Skip() methods). RavenDB also has a very useful option to stop you from creating the dreaded SELECT N+1 scenario—this feature stops after 30 requests are sent to the server per session, (which also is a configurable option). The recommended practice is to keep the ratio of SaveChanges() calls to session instance at 1:1. 
Reaching the limit of 30 remote calls while maintaining the 1:1 ratio is typically a symptom of a significant N+1 problem. To retrieve or store documents, you might create a class type to hold your data document and use the session instance to save or load that document, which will be automatically serialized to JSON. Loading a document To load a single document from a RavenDB database, you need to invoke the Load() method on the Session object you got from the DocumentStore instance. Then you need to specify the type of the document you will load from the RavenDB database. Time for action – loading a document You will create the Order class to hold the order data information and will add the LoadOrder()method, which you will call to load an Order document from the RavenDB server: Open the RavenDB_Ch03 project, add a new class and name it Order. Add the following code snippet to the Order class: Add the DisplayOrder() method to the Program class using the following code snippet: Add the Load<Order>() method to the Program class using the following code snippet: Save all the files and press F6 to build the solution. Switch to Windows Explorer and go to the RavenDB installation folder and launch RavenDB server using the Start.cmd command file. Return to Visual Studio, once the RavenDB server is running, press F5 to run RavenDB_Ch03 to see the document information in the output console window. What just happened? You just wrote the necessary code to load your first document from the RavenDB server. Let’s take a look at the code you added to the RavenDB_Ch03 project. You added the Order class to the project. This class will hold the data information for the Order document you will load from the server. It contains six fields (lines 11 to 16 in the previous code snippet) that will be populated with values from the JSON Order document stored on the server. By adding a field named Id, RavenDB will automatically populate this field with the document ID when the document is retrieved from the server. You added the DisplayOrder() method to the Program class. This method is responsible for displaying the Order field’s values to the output window. You also added the Load<Order>() method (lines 26 to 30) to the Program class. This method is surrounded by a using statement to ensure that the resources will be disposed at the end of execution of this method. To open a RavenDB session you call the OpenSession() method on the IDocumentStore object. This session is handled by the session variable (line 26). The Load() method is a generic method. It will load a specific entity with a specific ID, which you need to provide when calling the method. So in the calling code to the Load() method (line 28), you provide the Order entity and the document ID that we want to retrieve from the server which is Orders/A179854. Once the Order document with the Id field as Orders/A179854 is retrieved from the server, you send it to the DisplayOrder() method to be displayed (line 29). Finally, you build and run the RavenDB_Ch03 project. Have a go hero – loading multiple documents You know now how to load any single document from the RavenDB server using the Load() method. What if you have more than one document to load? You can use the Load() method to load more than one document in a single call. It seems easy to do. Well give it a go! Arrays are very useful!
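The code listings for this excerpt (the Main() method, the Order class, and the DisplayOrder() and Load<Order>() methods) appear as screenshots in the original book and are therefore missing from the text above. The following is a hedged reconstruction based only on what the text describes: the server URL on port 8080, the Orders database, the document ID Orders/A179854, and the Load() overload that accepts several IDs. The Order field names (other than Id) and the second document ID are assumptions made purely for illustration.

using System;
using Raven.Client;
using Raven.Client.Document;

namespace RavenDB_Ch03
{
    public class Order
    {
        // Six fields, as described in the text; RavenDB fills Id automatically on load.
        public string Id { get; set; }
        public string CustomerName { get; set; }   // assumed field name
        public DateTime OrderedAt { get; set; }    // assumed field name
        public string Status { get; set; }         // assumed field name
        public decimal Total { get; set; }         // assumed field name
        public string ShipTo { get; set; }         // assumed field name
    }

    class Program
    {
        static void Main(string[] args)
        {
            // DocumentStore: created once per application; points at the server and database.
            using (IDocumentStore documentStore = new DocumentStore
            {
                Url = "http://localhost:8080",     // 8080 is the default port mentioned in the text
                DefaultDatabase = "Orders"
            }.Initialize())
            {
                LoadOrder(documentStore);
            }
        }

        static void LoadOrder(IDocumentStore documentStore)
        {
            // The session is the unit of work; keep it short-lived.
            using (IDocumentSession session = documentStore.OpenSession())
            {
                Order order = session.Load<Order>("Orders/A179854");
                DisplayOrder(order);

                // "Have a go hero": Load() also accepts several IDs and returns an array.
                // The second document ID here is hypothetical.
                Order[] orders = session.Load<Order>("Orders/A179854", "Orders/A179855");
                foreach (Order o in orders)
                    DisplayOrder(o);
            }
        }

        static void DisplayOrder(Order order)
        {
            Console.WriteLine("Id: {0}, Customer: {1}, Total: {2}",
                order.Id, order.CustomerName, order.Total);
        }
    }
}

Note that the multi-ID Load() call returns an array in a single round trip, which is exactly what the Have a go hero exercise hints at.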

Creating a sample web scraper

Packt
11 Sep 2013
10 min read
(For more resources related to this topic, see here.) As web scrapers like to say: "Every website has an API. Sometimes it's just poorly documented and difficult to use." To say that web scraping is a useful skill is an understatement. Whether you're satisfying a curiosity by writing a quick script in an afternoon or building the next Google, the ability to grab any online data, in any amount, in any format, while choosing how you want to store it and retrieve it, is a vital part of any good programmer's toolkit. By virtue of reading this article, I assume that you already have some idea of what web scraping entails, and perhaps have specific motivations for using Java to do it. Regardless, it is important to cover some basic definitions and principles. Very rarely does one hear about web scraping in Java—a language often thought to be solely the domain of enterprise software and interactive web apps of the 90's and early 2000's. However, there are many reasons why Java is an often underrated language for web scraping: Java's excellent exception-handling lets you compile code that elegantly handles the often-messy Web Reusable data structures allow you to write once and use everywhere with ease and safety Java's concurrency libraries allow you to write code that can process other data while waiting for servers to return information (the slowest part of any scraper) The Web is big and slow, but the Java RMI allows you to write code across a distributed network of machines, in order to collect and process data quickly There are a variety of standard libraries for getting data from servers, as well as third-party libraries for parsing this data, and even executing JavaScript (which is needed for scraping some websites) In this article, we will explore these, and other benefits of Java in web scraping, and build several scrapers ourselves. Although it is possible, and recommended, to skip to the sections you already have a good grasp of, keep in mind that some sections build up the code and concepts of other sections. When this happens, it will be noted in the beginning of the section. How is this legal? Web scraping has always had a "gray hat" reputation. While websites are generally meant to be viewed by actual humans sitting at a computer, web scrapers find ways to subvert that. While APIs are designed to accept obviously computer-generated requests, web scrapers must find ways to imitate humans, often by modifying headers, forging POST requests and other methods. Web scraping often requires a great deal of problem solving and ingenuity to figure out how to get the data you want. There are often few roadmaps or tried-and-true procedures to follow, and you must carefully tailor the code to each website—often riding between the lines of what is intended and what is possible. Although this sort of hacking can be fun and challenging, you have to be careful to follow the rules. Like many technology fields, the legal precedent for web scraping is scant. A good rule of thumb to keep yourself out of trouble is to always follow the terms of use and copyright documents on websites that you scrape (if any). There are some cases in which the act of web crawling is itself in murky legal territory, regardless of how the data is used. Crawling is often prohibited in the terms of service of websites where the aggregated data is valuable (for example, a site that contains a directory of personal addresses in the United States), or where a commercial or rate-limited API is available. 
Twitter, for example, explicitly prohibits web scraping (at least of any actual tweet data) in its terms of service: "crawling the Services is permissible if done in accordance with the provisions of the robots.txt file, however, scraping the Services without the prior consent of Twitter is expressly prohibited" Unless explicitly prohibited by the terms of service, there is no fundamental difference between accessing a website (and its information) via a browser, and accessing it via an automated script. The robots.txt file alone has not been shown to be legally binding, although in many cases the terms of service can be. Writing a simple scraper (Simple) Wikipedia is not just a helpful resource for researching or looking up information but also a very interesting website to scrape. They make no efforts to prevent scrapers from accessing the site, and, with a very well-marked-up HTML, they make it very easy to find the information you're looking for. In this project, we will scrape an article from Wikipedia and retrieve the first few lines of text from the body of the article. Getting ready It is recommended that you have some working knowledge of Java, and the ability to create and execute Java programs at this point. As an example, we will use the article from the following Wikipedia link: http://en.wikipedia.org/wiki/Java Note that this article is about the Indonesian island nation Java, not the programming language. Regardless, it seems appropriate to use it as a test subject. We will be using the jsoup library to parse HTML data from websites and convert it into a manipulatable object, with traversable, indexed values (much like an array). In this xercise, we will show you how to download, install, and use Java libraries. In addition, we'll also be covering some of the basics of the jsoup library in particular. How to do it... Now that we're starting to get into writing scrapers, let's create a new project to keep them all bundled together. Carry out the following steps for this task: Open Eclipse and create a new Java project called Scraper. Packages are still considered to be handy for bundling collections of classes together within a single project (projects contain multiple packages, and packages contain multiple classes). You can create a new package by highlighting the Scraper project in Eclipse and going to File | New | Package . By convention, in order to prevent programmers from creating packages with the same name (and causing namespace problems), packages are named starting with the reverse of your domain name (for example, com.mydomain.mypackagename). For the rest of the article, we will begin all our packages with com.packtpub.JavaScraping appended with the package name. Let's create a new package called com.packtpub.JavaScraping.SimpleScraper. Create a new class, WikiScraper, inside the src folder of the package. Download the jsoup core library, the first link, from the following URL: http://jsoup.org/download Place the .jar file you downloaded into the lib folder of the package you just created. In Eclipse, right-click in the Package Explorer window and select Refresh. This will allow Eclipse to update the Package Explorer to the current state of the workspace folder. Eclipse should show your jsoup-1.7.2.jar file (this file may have a different name depending on the version you're using) in the Package Explorer window. Right-click on the jsoup JAR file and select Build Path | Add to Build Path. 
In your WikiScraper class file, write the following code: package com.packtpub.JavaScraping.SimpleScraper; import org.jsoup.Jsoup; import org.jsoup.nodes.Document; import java.net.*; import java.io.*; public class WikiScraper { public static void main(String[] args) { scrapeTopic("/wiki/Python"); } public static void scrapeTopic(String url){ String html = getUrl("http://www.wikipedia.org/"+url); Document doc = Jsoup.parse(html); String contentText = doc.select("#mw-content-text > p").first().text(); System.out.println(contentText); } public static String getUrl(String url){ URL urlObj = null; try{ urlObj = new URL(url); } catch(MalformedURLException e){ System.out.println("The url was malformed!"); return ""; } URLConnection urlCon = null; BufferedReader in = null; String outputText = ""; try{ urlCon = urlObj.openConnection(); in = new BufferedReader(new InputStreamReader(urlCon.getInputStream())); String line = ""; while((line = in.readLine()) != null){ outputText += line; } in.close(); }catch(IOException e){ System.out.println("There was an error connecting to the URL"); return ""; } return outputText; } } Assuming you're connected to the internet, this should compile and run with no errors, and print the first paragraph of text from the article. How it works... Unlike our HelloWorld example, there are a number of libraries needed to make this code work. We can incorporate all of these using the import statements before the class declaration. There are a number of jsoup objects needed, along with two Java libraries, java.io and java.net , which are needed for creating the connection and retrieving information from the Web. As always, our program starts out in the main method of the class. This method calls the scrapeTopic method, which will eventually print the data that we are looking for (the first paragraph of text in the Wikipedia article) to the screen. scrapeTopic requires another method, getURL, in order to do this. getUrl is a function that we will be using throughout the article. It takes in an arbitrary URL and returns the raw source code as a string. Essentially, it creates a Java URL object from the URL string, and calls the openConnection method on that URL object. The openConnection method returns a URLConnection object, which can be used to create a BufferedReader object. BufferedReader objects are designed to read from, potentially very long, streams of text, stopping at a certain size limit, or, very commonly, reading streams one line at a time. Depending on the potential size of the pages you might be reading (or if you're reading from an unknown source), it might be important to set a buffer size here. To simplify this exercise, however, we will continue to read as long as Java is able to. The while loop here retrieves the text from the BufferedReader object one line at a time and adds it to outputText, which is then returned. After the getURL method has returned the HTML string to scrapeTopic, jsoup is used. jsoup is a Java library that turns HTML strings (such as the string returned by our scraper) into more accessible objects. There are many ways to access and manipulate these objects, but the function you'll likely find yourself using most often is the select function. The jsoup select function returns a jsoup object (or an array of objects, if more than one element matches the argument to the select function), which can be further manipulated, or turned into text, and printed. 
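Before looking at the specific line our scraper uses, here is a short, hedged illustration of other common select patterns. The selectors are arbitrary examples, and doc is assumed to be the Document parsed in the listing above (the Element and Elements types come from org.jsoup.nodes and org.jsoup.select).

// Assumes: Document doc = Jsoup.parse(html); as in the listing above.
Elements links = doc.select("a[href]");          // every anchor that has an href attribute
for (Element link : links) {
    System.out.println(link.attr("href") + " -> " + link.text());
}

Element firstHeading = doc.select("h1").first(); // first h1 on the page, or null if none
if (firstHeading != null) {
    System.out.println(firstHeading.text());
}

// CSS-style IDs and child selectors work as well, which is what our scraper relies on:
Elements bodyParagraphs = doc.select("#mw-content-text > p");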
The crux of our script can be found in this line: String contentText = doc.select("#mw-content-text > p").first().text(); This finds all the elements that match #mw-content-text > p (that is, all p elements that are the children of the elements with the CSS ID mw-content-text), selects the first element of this set, and turns the resulting object into plain text (stripping out all tags, such as <a> tags or other formatting that might be in the text). The program ends by printing this line out to the console. There's more... Jsoup is a powerful library that we will be working with in many applications throughout this article. For uses that are not covered in the article, I encourage you to read the complete documentation at http://jsoup.org/apidocs/. What if you find yourself working on a project where jsoup's capabilities aren't quite meeting your needs? There are literally dozens of Java-based HTML parsers on the market. Summary Thus in this article we took the first step towards web scraping with Java, and also learned how to scrape an article from Wikipedia and retrieve the first few lines of text from the body of the article. Resources for Article : Further resources on this subject: Making a simple cURL request (Simple) [Article] Web Scraping with Python [Article] Generating Content in WordPress Top Plugins—A Sequel [Article]  