Building a "Click-to-Go" Robot

Packt
28 Aug 2015
16 min read
In this article by Özen Özkaya and Giray Yıllıkçı, authors of the book Arduino Computer Vision Programming, you will learn how to approach computer vision applications, how to divide an application development process into basic steps, how to realize these design steps, and how to combine a vision system with an Arduino. Now it is time to connect all the pieces into one!

In this article you will learn about building a vision-assisted robot that can go to any point you want within the boundaries of the camera's sight. In this scenario there will be a camera attached to the ceiling and, once you get the video stream from the camera and click on any place in the view, the robot will go there. This application will give you an all-in-one development experience.

Before getting started, let's try to draw the application scheme and define the potential steps. We want to build a vision-enabled robot that can be controlled via a camera attached to the ceiling and, when we click on any point in the camera view, we want our robot to go to this specific point. This operation requires a mobile robot that can communicate with the vision system. The vision system should be able to detect or recognize the robot and calculate its position and orientation. The vision system should also give us the opportunity to click on any point in the view, and it should calculate the path and the robot movements needed to reach the destination. This scheme requires a communication line between the robot and the vision controller. In the following illustration, you can see the physical scheme of the application setup on the left-hand side and the user application window on the right-hand side:

After interpreting the application scheme, the next step is to divide the application into small steps by using the computer vision approach. In the data acquisition phase, we'll only use the scene's video stream. There won't be an external sensor on the robot because, for this application, we don't need one. Camera selection is important, and the camera distance (the height from the robot plane) should be enough to see the whole area. We'll use the blue and red circles on top of the robot to detect the robot and calculate its orientation, so we don't need smaller details. A resolution of about 640x480 pixels is sufficient for a camera distance of 120 cm. We need an RGB camera stream because we'll use the color properties of the circles. We will use the Logitech C110, which is an affordable webcam. Any other OpenCV-compatible webcam will work because this application is not very demanding in terms of vision input. If you need more cable length, you can use a USB extension cable. The following picture is of the Logitech C110 webcam:

In the preprocessing phase, the first step is to remove the small details from the surface. Blurring is a simple and effective operation for this purpose. If you need to, you can resize your input image to reduce the image size and the processing time. Do not forget that, if you resize to too small a resolution, you won't be able to extract useful information.

The next step is processing. There are two main steps in this phase. The first step is to detect the circles in the image. The second step is to calculate the robot's orientation and the path to the destination point. The robot can then follow the path and reach its destination. In color processing, we can apply color filters to the image to get the image masks of the red circle and the blue circle, as shown in the following picture.
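The article does not include the vision-controller listing itself; purely as an illustration of the color-filtering idea, here is a minimal Python/OpenCV sketch. The camera index and the HSV threshold ranges are assumptions and would need tuning for your own camera and lighting.

import cv2
import numpy as np

# Grab one frame from the ceiling camera (device index 0 is an assumption).
cap = cv2.VideoCapture(0)
ret, frame = cap.read()

# Blur to suppress small surface details, then convert to HSV,
# which makes color thresholding more robust than raw BGR values.
blurred = cv2.GaussianBlur(frame, (9, 9), 0)
hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)

# Rough HSV ranges for the blue and red circles (tune for your setup).
blue_mask = cv2.inRange(hsv, (100, 100, 50), (130, 255, 255))
red_mask = cv2.bitwise_or(
    cv2.inRange(hsv, (0, 100, 50), (10, 255, 255)),
    cv2.inRange(hsv, (170, 100, 50), (180, 255, 255)))

cv2.imwrite('blue_mask.png', blue_mask)
cv2.imwrite('red_mask.png', red_mask)
cap.release()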
Then we can use contour detection or blob analysis to detect the circles and extract useful features. It is important to keep it simple and logical: blob analysis detects the bounding boxes of the two circles on the robot and, if we draw a line between the centers of the circles and calculate the angle of that line, we get the orientation of the robot itself. The midpoint of this line is the center of the robot. If we draw a line from the center of the robot to the destination point, we obtain the straightest route.

The circles on the robot could also be detected by using the Hough transform for circles but, because it is a relatively slow algorithm and it is hard to extract image statistics from its results, the blob analysis-based approach is better. Another approach is to use SURF, SIFT, or ORB features, but these methods probably won't provide fast real-time behavior, so blob analysis will probably work better. After detecting blobs, we can apply post-filtering to remove the unwanted blobs. We can use the diameter of the circles, the area of the bounding box, and the color information to filter out unwanted blobs. By using the properties of the blobs (the extracted features), it is possible to detect or recognize the circles, and then the robot. To be able to check whether the robot has reached the destination, a distance calculation from the center of the robot to the destination point is useful.

In this scenario, the robot will be detected by our vision controller. Detecting the center of the robot is sufficient to track it. Once we calculate the robot's position and orientation, we can combine this information with the distance and orientation to the destination point and send the robot the commands to move it! Efficient planning algorithms could be applied in this phase, but we'll implement a simple path-planning approach: first, the robot orients itself towards the destination point by turning right or left, and then it goes forward to reach the destination. This approach works for environments without obstacles. If you want to extend the application to a complex environment with obstacles, you should implement an obstacle detection mechanism and an efficient path-planning algorithm.

We can send commands such as Left!, Right!, Go!, or Stop! to the robot over a wireless link. RF communication is an efficient solution for this problem. In this scenario, we need two nRF24L01 modules: the first module is connected to the robot controller and the other is connected to the vision controller. The Arduino is the perfect means to control the robot and communicate with the vision controller. The vision controller can be built on any hardware platform, such as a PC, tablet, or smartphone. The vision controller application can be implemented on many operating systems, as OpenCV is platform-independent. We preferred Windows and a laptop to run our vision controller application.

As you can see, we have divided our application into small and easy-to-implement parts. Now it is time to build them all!
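Before moving on to the build, here is a small Python sketch (not from the book) that makes the orientation and path-planning geometry described above concrete. The pixel coordinates used at the bottom are arbitrary example values.

import math

def robot_pose(red_center, blue_center):
    # Robot center and orientation from the two circle centers.
    # The red circle is assumed to mark the robot's head.
    cx = (red_center[0] + blue_center[0]) / 2.0
    cy = (red_center[1] + blue_center[1]) / 2.0
    # Angle of the blue-to-red line, i.e. the direction the robot faces.
    theta = math.atan2(red_center[1] - blue_center[1],
                       red_center[0] - blue_center[0])
    return (cx, cy), theta

def heading_error(center, theta, destination):
    # Signed angle the robot must turn before driving forward.
    desired = math.atan2(destination[1] - center[1],
                         destination[0] - center[0])
    error = desired - theta
    # Normalize to (-pi, pi] so we know whether to turn left or right.
    return math.atan2(math.sin(error), math.cos(error))

center, theta = robot_pose((320, 180), (300, 200))
print(heading_error(center, theta, (500, 400)))

Turning left or right until this error is close to zero, and then sending Go!, reproduces the simple path-planning behavior described above.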
Building a robot

It is time to explain how to build our Click-to-Go robot. Before going any further, we would like to boldly say that robotic projects can teach us fundamental fields of science such as mechanics, electronics, and programming. As we go through the building process of our Click-to-Go robot, you will see that we have kept it as simple as possible. Moreover, instead of buying a ready-to-use robot kit, we have built our own simple and robust robot. Of course, if you are planning to buy a robot kit or already have a kit available, you can simply adapt your existing robot to this project.

Our robot design is relatively simple in terms of mechanics. We will use only a box-shaped container platform, two gear motors with two individual wheels, a battery to drive the motors, one nRF24L01 Radio Frequency (RF) transceiver module, a bunch of jumper wires, an L293D IC and, of course, one Arduino Uno board. We will use one more nRF24L01 and one more Arduino Uno for the vision controller communication circuit.

Our Click-to-Go robot will be operated by a simplified version of a differential drive. A differential drive can be summarized as a relative speed change between the wheels, which sets the direction of the robot. In other words, if both wheels spin at the same rate, the robot goes forward. To drive in reverse, the wheels spin in the opposite direction. To turn left, the left wheel turns backwards and the right wheel stays still or turns forwards. Similarly, to turn right, the right wheel turns backwards and the left wheel stays still or turns forwards. You can get curved paths by varying the rotation speeds of the wheels. In our simplified scheme, we will drive both motors forward to go forwards. To turn left, the left wheel stays still and the right wheel turns forward; symmetrically, to turn right, the right motor stays still and the left motor runs forward. We will not run the motors in reverse to go backwards. Instead, we will change the direction of the robot by turning right or left.

Building mechanics

As we stated earlier, the mechanics of the robot are fairly simple. First of all, we need a small box-shaped container to use both as a rigid surface and as storage for the battery and electronics. For this purpose, we will use a simple plywood box. We will attach the gear motors to the front of the plywood box and some kind of support surface to the bottom of the box. As can be seen in the following picture, we used a small wooden rod to support the back of the robot and level the box:

If you find that the wooden rod support drags, we recommend adding a small ball support similar to Pololu's ball caster, shown at https://www.pololu.com/product/950. It is not a very expensive component and it significantly improves the mobility of the robot. You may want to drill two holes next to the motor wiring to keep the platform tidy. The easiest way to attach the motors and the support rod is with two-sided tape. Just make sure that the tape is not too thin; it is much better to use two-sided foam tape.

The top side of the robot can be covered with a black shell to enhance the contrast with the red and blue circles. We will use these circles to ascertain the orientation of the robot during operation, as mentioned earlier. For now, don't worry too much about this detail. Just be aware that we need to cover the top of the robot with a flat surface. We will explain in detail how these red and blue circles are used. It is worth mentioning that we used large water bottle lids. It is better to use matt surfaces instead of shiny surfaces to avoid glare in the image. The finished Click-to-Go robot should be similar to the robot shown in the following picture.
The robot's head is on the side with the red circle:

As we have now covered building the mechanics of our robot, we can move on to building the electronics.

Building the electronics

We will use two separate Arduino Unos for this vision-enabled robot project, one each for the robot and the transmitter system. The electronic setup needs a little more attention than the mechanics. The electronic components of the robot and the transmitter units are similar; however, the robot needs more work.

We have selected nRF24L01 modules for wireless communication. These modules are reliable and easy to find, both on the Internet and in local hobby stores. It is possible to use any pair of wireless connectivity modules but, for this project, we will stick with the nRF24L01 modules shown in this picture:

For driving the motors we will need a quadruple half-H driver, the L293D. Again, every electronics shop should have these ICs. As a reminder, you may want to buy a couple of spare L293D ICs in case you burn one by mistake. The following is a picture of the L293D IC:

We will need a bunch of jumper wires to connect the components together. It is nice to have a small breadboard for the robot/receiver to wire up the L293D. The transmitter part is very simple, so a breadboard is not essential.

Robot/receiver and transmitter drawings

The drawings of the receiver and the transmitter have two modules in common: the Arduino Uno and the nRF24L01 connectivity module. The connections of the nRF24L01 modules are the same on both sides. In addition to these connectivity modules, for the receiver we need to put some effort into connecting the L293D IC and the battery that powers the motors. In the following picture, we can see a drawing of the transmitter. As it will always be connected to the OpenCV platform via the USB cable, there is no need to feed the system with an external battery:

As shown in the following picture of the receiver and the robot, it is a good idea to separate the motor battery from the battery that feeds the Arduino Uno board, because the motors may draw or create high loads, which can easily damage the Arduino board's pin-outs. Another reason is to keep the Arduino working even if the motor battery has drained. Separating the feeder batteries is a very good practice to follow if you are planning to use more than one 12V battery. To keep everything safe, we fed the Arduino Uno with a 6V battery pack and the motors with a 9V battery:

Drawings of receiver systems can be a little confusing and lead to errors. It is a good idea to open the drawings and investigate how the connections are made by using Fritzing. You can download the Fritzing drawings of this project from https://github.com/ozenozkaya/click_to_go_robot_drawings. To download the Fritzing application, visit the Fritzing download page: http://fritzing.org/download/

Building the robot controller and communications

We are now ready to go through the software implementation of the robot and the transmitter. Basically, what we are doing here is building the required connectivity to send data continuously from OpenCV to the remote robot via the transmitter. OpenCV sends commands through a USB cable to the first Arduino board, which then sends the data to the unit on the robot over the RF module. Follow these steps: before explaining the code, we need to import the RF24 library.
To download the RF24 library, please go to the GitHub link at https://github.com/maniacbug/RF24. After downloading the library, go to Sketch | Include Library | Add .ZIP Library… to include the library in the Arduino IDE environment. After clicking Add .ZIP Library…, a window will appear. Go into the downloads directory and select the RF24-master folder that you just downloaded. Now you are ready to use the RF24 library. As a reminder, including a library in the Arduino IDE works pretty much the same way on every platform.

It is time to move on to the explanation of the code! It is important to mention that we use the same code for both the robot and the transmitter, with a small trick: the same code works differently on the robot and on the transmitter. Now, let's make everything simpler during the code explanation. The receiver mode needs analog pin 4 tied to ground. The idea behind the operation is simple: we set role_pin high through its internal pull-up resistor, so it will read high even if you don't connect it, but you can still safely connect it to ground and it will read low. Basically, analog pin 4 reads 0 if there is a connection to a ground pin; if there is no connection to ground, its value is kept at 1. By doing this at the beginning, we determine the role of the board and can use the same code on both sides. Here is the code:

#include <SPI.h>
#include "nRF24L01.h"
#include "RF24.h"

#define MOTOR_PIN_1 3
#define MOTOR_PIN_2 5
#define MOTOR_PIN_3 6
#define MOTOR_PIN_4 7
#define ENABLE_PIN 4
#define SPI_ENABLE_PIN 9
#define SPI_SELECT_PIN 10

const int role_pin = A4;
typedef enum {transmitter = 1, receiver} e_role;
unsigned long motor_value[2];
String input_string = "";
boolean string_complete = false;

RF24 radio(SPI_ENABLE_PIN, SPI_SELECT_PIN);
const uint64_t pipes[2] = { 0xF0F0F0F0E1LL, 0xF0F0F0F0D2LL };
e_role role = receiver;

void setup() {
  pinMode(role_pin, INPUT);
  digitalWrite(role_pin, HIGH);
  delay(20);

  radio.begin();
  radio.setRetries(15, 15);
  Serial.begin(9600);
  Serial.println(" Setup Finished");

  if (digitalRead(role_pin)) {
    Serial.println(digitalRead(role_pin));
    role = transmitter;
  } else {
    Serial.println(digitalRead(role_pin));
    role = receiver;
  }

  if (role == transmitter) {
    radio.openWritingPipe(pipes[0]);
    radio.openReadingPipe(1, pipes[1]);
  } else {
    pinMode(MOTOR_PIN_1, OUTPUT);
    pinMode(MOTOR_PIN_2, OUTPUT);
    pinMode(MOTOR_PIN_3, OUTPUT);
    pinMode(MOTOR_PIN_4, OUTPUT);
    pinMode(ENABLE_PIN, OUTPUT);
    digitalWrite(ENABLE_PIN, HIGH);
    radio.openWritingPipe(pipes[1]);
    radio.openReadingPipe(1, pipes[0]);
  }

  radio.startListening();
}

void loop() {
  // TRANSMITTER CODE BLOCK //
  if (role == transmitter) {
    Serial.println("Transmitter");
    if (string_complete) {
      if (input_string == "Right!") {
        motor_value[0] = 0;
        motor_value[1] = 120;
      } else if (input_string == "Left!") {
        motor_value[0] = 120;
        motor_value[1] = 0;
      } else if (input_string == "Go!") {
        motor_value[0] = 120;
        motor_value[1] = 110;
      } else {
        motor_value[0] = 0;
        motor_value[1] = 0;
      }
      input_string = "";
      string_complete = false;
    }
    radio.stopListening();
    radio.write(motor_value, 2 * sizeof(unsigned long));
    radio.startListening();
    delay(20);
  }

  // RECEIVER CODE BLOCK //
  if (role == receiver) {
    Serial.println("Receiver");
    if (radio.available()) {
      bool done = false;
      while (!done) {
        done = radio.read(motor_value, 2 * sizeof(unsigned long));
        delay(20);
      }
      Serial.println(motor_value[0]);
      Serial.println(motor_value[1]);
      analogWrite(MOTOR_PIN_1, motor_value[1]);
      digitalWrite(MOTOR_PIN_2, LOW);
      analogWrite(MOTOR_PIN_3, motor_value[0]);
      digitalWrite(MOTOR_PIN_4, LOW);
      radio.stopListening();
      radio.startListening();
    }
  }
}

void serialEvent() {
  while (Serial.available()) {
    // get the new byte:
    char inChar = (char)Serial.read();
    // add it to the input string:
    input_string += inChar;
    // if the incoming character is a command terminator ('!' or '?'),
    // set a flag so the main loop can do something about it:
    if (inChar == '!' || inChar == '?') {
      string_complete = true;
      Serial.print("data_received");
    }
  }
}

This example code is taken from one of the examples in the RF24 library; we have changed it in order to serve the needs of this project. The original example can be found in the RF24-master/Examples/pingpair directory.

Summary

We have combined everything we have learned up to now and built an all-in-one application. By designing and building the Click-to-Go robot from scratch, you have embraced the concepts. You can see that the computer vision approach works well, even for complex applications. You now know how to divide a computer vision application into small pieces, how to design and implement each design step, and how to efficiently use the tools you have.

Resources for Article:
Further resources on this subject:
Getting Started with Arduino [article]
Arduino Development [article]
Programmable DC Motor Controller with an LCD [article]

Rendering Stereoscopic 3D Models using OpenGL

Packt
24 Aug 2015
8 min read
In this article, by Raymond C. H. Lo and William C. Y. Lo, authors of the book OpenGL Data Visualization Cookbook, we will demonstrate how to visualize data with stunning stereoscopic 3D technology using OpenGL. Stereoscopic 3D devices are becoming increasingly popular, and the latest generation of wearable computing devices (such as the 3D vision glasses from NVIDIA and Epson and, more recently, the augmented reality 3D glasses from Meta) can now support this feature natively. The ability to visualize data in a stereoscopic 3D environment provides a powerful and highly intuitive platform for the interactive display of data in many applications. For example, we may acquire data from the 3D scan of a model (such as in architecture, engineering, dentistry, or medicine) and would like to visualize or manipulate the 3D objects in real time. Unfortunately, OpenGL does not provide any mechanism to load, save, or manipulate 3D models. Thus, to support this, we will integrate a new library named Open Asset Import Library (Assimp) into our code. The main dependencies include the GLFW library, which requires OpenGL version 3.2 and higher. (For more resources related to this topic, see here.)

Stereoscopic 3D rendering

3D television and 3D glasses are becoming much more prevalent with the latest trends in consumer electronics and technological advances in wearable computing. In the market, there are currently many hardware options that allow us to visualize information with stereoscopic 3D technology. One common format is side-by-side 3D, which is supported by many 3D glasses, as each eye sees an image of the same scene from a different perspective. In OpenGL, creating a side-by-side 3D rendering requires an asymmetric frustum adjustment as well as a viewport adjustment (that is, of the area to be rendered): asymmetric frustum parallel projection or, equivalently, the lens-shift technique in photography. This technique introduces no vertical parallax and is widely adopted in stereoscopic rendering. To illustrate this concept, the following diagram shows the geometry of the scene that a user sees from the right eye:

The intraocular distance (IOD) is the distance between the two eyes. As we can see from the diagram, the frustum shift represents the amount of skew/shift for the asymmetric frustum adjustment. Similarly, for the left-eye image, we perform the transformation with a mirrored setting. The implementation of this setup is described in the next section.

How to do it...

The following code illustrates the steps to construct the projection and view matrices for stereoscopic 3D visualization. The code uses the intraocular distance, the distance of the image plane, and the distance of the near clipping plane to compute the appropriate frustum shift value.
In the source file, common/controls.cpp, we add the implementation for the stereo 3D matrix setup:

void computeStereoViewProjectionMatrices(GLFWwindow* window, float IOD, float depthZ, bool left_eye) {
  int width, height;
  glfwGetWindowSize(window, &width, &height);

  //up vector
  glm::vec3 up = glm::vec3(0, -1, 0);
  glm::vec3 direction_z(0, 0, -1);

  //mirror the parameters with the right eye
  float left_right_direction = -1.0f;
  if (left_eye)
    left_right_direction = 1.0f;

  float aspect_ratio = (float)width / (float)height;
  float nearZ = 1.0f;
  float farZ = 100.0f;
  double frustumshift = (IOD / 2) * nearZ / depthZ;
  float top = tan(g_initial_fov / 2) * nearZ;
  float right = aspect_ratio * top + frustumshift * left_right_direction;
  //half screen
  float left = -aspect_ratio * top + frustumshift * left_right_direction;
  float bottom = -top;

  g_projection_matrix = glm::frustum(left, right, bottom, top, nearZ, farZ);

  // update the view matrix
  g_view_matrix = glm::lookAt(
    g_position - direction_z + glm::vec3(left_right_direction * IOD / 2, 0, 0), //eye position
    g_position + glm::vec3(left_right_direction * IOD / 2, 0, 0),               //centre position
    up                                                                          //up direction
  );
}

In the rendering loop in main.cpp, we define the viewports for each eye (left and right) and set up the projection and view matrices accordingly. For each eye, we translate our camera position by half of the intraocular distance, as illustrated in the previous figure:

if (stereo) {
  //draw the LEFT eye, left half of the screen
  glViewport(0, 0, width / 2, height);

  //compute the MVP matrix from the IOD and virtual image plane distance
  computeStereoViewProjectionMatrices(g_window, IOD, depthZ, true);

  //get the view and model matrices and apply them to the rendering
  glm::mat4 projection_matrix = getProjectionMatrix();
  glm::mat4 view_matrix = getViewMatrix();
  glm::mat4 model_matrix = glm::mat4(1.0);
  model_matrix = glm::translate(model_matrix, glm::vec3(0.0f, 0.0f, -depthZ));
  model_matrix = glm::rotate(model_matrix, glm::pi<float>() * rotateY, glm::vec3(0.0f, 1.0f, 0.0f));
  model_matrix = glm::rotate(model_matrix, glm::pi<float>() * rotateX, glm::vec3(1.0f, 0.0f, 0.0f));
  glm::mat4 mvp = projection_matrix * view_matrix * model_matrix;

  //send our transformation to the currently bound shader,
  //in the "MVP" uniform variable
  glUniformMatrix4fv(matrix_id, 1, GL_FALSE, &mvp[0][0]);

  //render the scene, with different drawing modes
  if (drawTriangles)
    obj_loader->draw(GL_TRIANGLES);
  if (drawPoints)
    obj_loader->draw(GL_POINTS);
  if (drawLines)
    obj_loader->draw(GL_LINES);

  //draw the RIGHT eye, right half of the screen
  glViewport(width / 2, 0, width / 2, height);
  computeStereoViewProjectionMatrices(g_window, IOD, depthZ, false);
  projection_matrix = getProjectionMatrix();
  view_matrix = getViewMatrix();
  model_matrix = glm::mat4(1.0);
  model_matrix = glm::translate(model_matrix, glm::vec3(0.0f, 0.0f, -depthZ));
  model_matrix = glm::rotate(model_matrix, glm::pi<float>() * rotateY, glm::vec3(0.0f, 1.0f, 0.0f));
  model_matrix = glm::rotate(model_matrix, glm::pi<float>() * rotateX, glm::vec3(1.0f, 0.0f, 0.0f));
  mvp = projection_matrix * view_matrix * model_matrix;
  glUniformMatrix4fv(matrix_id, 1, GL_FALSE, &mvp[0][0]);
  if (drawTriangles)
    obj_loader->draw(GL_TRIANGLES);
  if (drawPoints)
    obj_loader->draw(GL_POINTS);
  if (drawLines)
    obj_loader->draw(GL_LINES);
}

The final rendering result consists of two separate images, one on each side of the display; note that each image is compressed horizontally by a scaling factor of two.
For some display systems, each side of the display is required to preserve the same aspect ratio, depending on the specifications of the display. Here are the final screenshots of the same models in true 3D using stereoscopic 3D rendering:

Here's the rendering of the architectural model in stereoscopic 3D:

How it works...

The stereoscopic 3D rendering technique is based on the parallel-axis, asymmetric-frustum perspective projection principle. In simpler terms, we render a separate image for each eye as if the object were seen from a different eye position but viewed on the same plane. Parameters such as the intraocular distance and the frustum shift can be dynamically adjusted to provide the desired 3D stereo effect.

For example, by increasing or decreasing the frustum asymmetry parameter, the object will appear to move in front of or behind the plane of the screen. By default, the zero-parallax plane is set to the middle of the view volume. That is, the object is set up so that its center is positioned at screen level, and some parts of the object appear in front of or behind the screen. By increasing the frustum asymmetry (that is, positive parallax), the scene will appear to be pushed behind the screen. Likewise, by decreasing the frustum asymmetry (that is, negative parallax), the scene will appear to be pulled in front of the screen.

The glm::frustum function sets up the projection matrix, and we implemented the asymmetric frustum projection concept illustrated in the drawing. Then, we use the glm::lookAt function to adjust the eye position based on the IOD value we have selected. To project the images side by side, we use the glViewport function to constrain the area within which the graphics can be rendered. The function basically performs an affine transformation (that is, a scale and translation) that maps the normalized device coordinates to window coordinates. Note that the final result is a side-by-side image in which each view is compressed horizontally by a factor of two. Depending on the hardware configuration, we may need to adjust the aspect ratio.

The current implementation supports side-by-side 3D, which is commonly used in most wearable Augmented Reality (AR) or Virtual Reality (VR) glasses. Fundamentally, the rendering technique, namely the asymmetric frustum perspective projection described in our article, is platform-independent. For example, we have successfully tested our implementation on the Meta 1 Developer Kit (https://www.getameta.com/products) and rendered the final results on its optical see-through stereoscopic 3D display:

Here is the front view of the Meta 1 Developer Kit, showing the optical see-through stereoscopic 3D display and the 3D range-sensing camera:

The result is shown as follows, with the stereoscopic 3D graphics rendered onto the real world (which forms the basis of augmented reality):

See also

In addition, we can easily extend our code to support shutter-glasses-based 3D monitors by utilizing the quad-buffered OpenGL APIs (refer to the GL_BACK_RIGHT and GL_BACK_LEFT flags in the glDrawBuffer function). Unfortunately, such 3D formats require specific hardware synchronization and often require a higher-frame-rate display (for example, 120 Hz) as well as a professional graphics card. Further information on how to implement stereoscopic 3D in your application can be found at http://www.nvidia.com/content/GTC-2010/pdfs/2010_GTC2010.pdf.
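The article's implementation is in C++; purely as a sketch of how the quad-buffered path mentioned above differs from the side-by-side approach, the per-frame logic could look like the following Python/PyOpenGL fragment. It assumes an OpenGL context created with a stereo-capable pixel format, and the draw_scene_left/draw_scene_right helpers are hypothetical placeholders for your own eye-specific rendering.

from OpenGL.GL import (glDrawBuffer, glClear, GL_BACK_LEFT, GL_BACK_RIGHT,
                       GL_COLOR_BUFFER_BIT, GL_DEPTH_BUFFER_BIT)

def render_stereo_frame(draw_scene_left, draw_scene_right):
    # The left-eye image goes to the left back buffer...
    glDrawBuffer(GL_BACK_LEFT)
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    draw_scene_left()
    # ...and the right-eye image to the right back buffer; the driver and
    # the shutter glasses take care of alternating the two images.
    glDrawBuffer(GL_BACK_RIGHT)
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    draw_scene_right()
    # Swap buffers afterwards with your windowing toolkit (for example, GLFW).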
Summary

In this article, we covered how to visualize data with stunning stereoscopic 3D technology using OpenGL. OpenGL does not provide any mechanism to load, save, or manipulate 3D models. Thus, to support this, we have integrated a new library named Assimp into the code.

Resources for Article:
Further resources on this subject:
Organizing a Virtual Filesystem [article]
Using OpenCL [article]
Introduction to Modern OpenGL [article]

Identifying Big Data Evidence in Hadoop

Packt
21 Aug 2015
13 min read
In this article by Joe Sremack, author of the book Big Data Forensics, we will cover the following topics:

An overview of how to identify Big Data forensic evidence
Techniques for previewing Hadoop data

(For more resources related to this topic, see here.)

Hadoop and other Big Data systems pose unique challenges to forensic investigators. Hadoop clusters are distributed systems with voluminous data storage, complex data processing, and data that is split and made redundant at the data block level. Unlike with traditional forensics, performing forensics on Hadoop using the traditional methods is not always feasible. Instead, forensic investigators, experts, and legal professionals, such as attorneys and court officials, need to understand how forensics is performed against these complex systems. The first step in a forensic investigation is to identify the evidence. In this article, several of the concepts involved in identifying forensic evidence from Hadoop and its applications are covered.

Identifying forensic evidence is a complex process for any type of investigation. It involves surveying a set of possible sources of evidence and determining which sources warrant collection. Data in any organization's systems is rarely well organized or documented. Investigators need to take a set of investigation requirements and determine which data needs to be collected. This requires a few first steps:

Properly reviewing system and data documentation
Interviewing staff
Locating backup and non-centralized data repositories
Previewing data

The process of identifying Big Data evidence is made difficult by the large volume of data, distributed filesystems, the numerous types of data, and the potential for large-scale redundancy in evidence. Big Data solutions are also unique in that evidence can reside in different layers. Within Hadoop, evidence can take on multiple forms, from files stored in the Hadoop Distributed File System (HDFS) to data extracted from an application. To properly identify the evidence in Hadoop, multiple layers are examined. While all the data may reside in HDFS, the form may differ in a Hadoop application (for example, HBase), or the data may be more easily extracted to a viable format from HDFS using an application (such as Pig or Sqoop). Identifying Big Data evidence can also be complicated by redundancies caused by:

Systems that input to or receive output from Big Data systems
Archived systems that may have previously stored the evidence in the Big Data system

A primary goal of identifying evidence is to capture all relevant evidence while minimizing redundant information. Outsiders looking at a company's data needs may assume that identifying information is as simple as asking several individuals where the data resides.
In reality, the process is much more complicated, for a number of possible reasons:

The organization may be an adverse party and cannot be trusted to provide reliable information about the data
The organization is large and no single person knows where all the data is stored and what the contents of the data are
The organization is divided into business units, with no two business units knowing what data the other stores
Data is stored with a third-party data hosting provider
IT staff may know where data and systems reside, but only the business users know what type of content the data stores

For example, one might assume a pharmaceutical sales company would have an internal system structured with the following attributes:

A division where the data is collected from a sales database
An HR department database containing employee compensation, performance, and retention information
A database of customer demographic information
An accounting department database to assess what costs are associated with each sale

In such a system, the data is then cleanly unified and compelling analyses are created to drive sales. In reality, an investigator will probably find that the Big Data sales system is actually composed of a larger set of data that originates inside and outside the organization. There may be a collection of spreadsheets on sales employees' desktops and laptops, along with some older versions on backup tapes and file server shared folders. There may be a new Salesforce database, implemented two years ago, that is incomplete and is actually the replacement for a previous database, which was custom-developed and used by 75% of employees. A Hadoop instance running HBase for analysis receives a filtered set of data from social media feeds, the Salesforce database, and sales reports. All of these data sources may be managed by different teams, so identifying how to collect this information requires a series of steps to isolate the relevant information.

The problem for large, or even midsize, companies is much more difficult than our pharmaceutical sales company example. Simply creating a map of every data source and the contents of those systems could require weeks of in-depth interviews with key business owners and staff. Several departments may have their own databases and Big Data solutions that may or may not be housed in a centralized repository. Backups for these systems could be located anywhere. Data retention policies will vary by department, and most likely by system. Data warehouses and other aggregators may contain important information that will not reveal itself through normal interviews with staff. These data warehouses and aggregators may have previously generated reports that could serve as valuable reference points for future analysis; however, all data may not be available online, and some data may be inaccessible. In such cases, the company's data will most likely reside in off-site servers maintained by an outsourcing vendor or, worse, in a cloud-based solution.

Big Data evidence can be intertwined with non-Big Data evidence. Email, document files, and other evidence can be extremely valuable for performing an investigation. The process for identifying Big Data evidence is very similar to the process for identifying other evidence, so the identification process described in this book can be carried out in conjunction with identifying other evidence.
An important consideration for investigators to keep in mind is whether Big Data evidence should be collected (that is, whether it is relevant, or whether the same evidence can be collected more easily from other, non-Big Data systems). Investigators must also consider whether evidence needs to be collected to meet the requirements of an investigation. The following figure illustrates the process for identifying Big Data evidence:

Initial steps

The process for identifying evidence is:

Examining requirements
Examining the organization's system architecture
Determining the kinds of data in each system
Assessing which systems to collect

In the book Big Data Forensics, the topics of examining requirements and examining the organization's system architecture are covered in detail. The purpose of these two steps is to take the requirements of the investigation and match them to known data sources. From this, the investigator can begin to document which data sources should be examined and what types of data may be relevant.

Assessing data viability

Assessing the viability of data serves several purposes. It can:

Allow the investigator to identify which data sources are potentially relevant
Yield information that can corroborate the interview and documentation review information
Highlight data limitations or gaps
Provide the investigator with information to create a better data collection plan

Up until this point in the investigation, the investigator has only gathered information about the data. Previewing and assessing samples of the data gives the investigator the chance to actually see what information is contained in the data and determine which data sources can meet the requirements of the investigation. Assessing the viability and relevance of data in a Big Data forensic investigation is different from doing so in a traditional digital forensic investigation. In a traditional digital forensic investigation, the data is typically not previewed, out of fear of altering the data or metadata. With Big Data, however, the data can be previewed in some situations where metadata is not relevant or available. This opens up the opportunity for a forensic investigator to preview data when identifying which data should be collected. The main considerations for each source of data include the following:

Data quality
Data completeness
Supporting documentation
Validating the collected data
Previous systems where the data resided
How the data enters and leaves the system
The available formats for extraction
How well the data meets the data requirements

There are several methods for previewing data. The first is to review a data extract or the results of a query, or to collect sample text files that are stored in Hadoop. This method allows the investigator to determine the types of information available and how the information is represented in the data. In highly complex systems consisting of thousands of data sources, this may not be feasible or may require a significant investment of time and effort. The second method is to review reports or canned query output derived from the data. Some Big Data solutions are designed with reporting applications connected to the Big Data system. These reports are a powerful tool, enabling an investigator to quickly gain an understanding of the contents of the system without much up-front effort to gain access to the systems. Data retention policies and data purge schedules should be reviewed and considered in this step as well.
Given the large volume of data involved, many organizations routinely purge data after a certain period of time. Data purging can mean the archival of data to near-line or offline storage, or it can mean the destruction of old data without backup. When data is archived, the investigator should also determine whether any of the data on near-line or offline backup media needs to be collected or whether the live system data is sufficient. Regardless, the investigator should determine when the next purge cycle is and whether that necessitates an expedited collection to prevent the loss of critical information. Additionally, the investigator should determine whether the organization should implement a litigation hold, which halts data purging during the investigation. When data is purged without backup, the investigator must determine:

How the purge affects the investigation
When the data needs to be collected
Whether supplemental data sources must be collected to account for the lost data (for example, reports previously created from the purged data, or other systems that created or received the purged data)

Identifying HDFS evidence

HDFS evidence can be identified in a number of ways. In some cases, the investigator does not want to preview the data, in order to retain the integrity of the metadata. In other cases, the investigator is only interested in collecting a subset of the data. Limiting the data can be necessary when the data volume prohibits a complete collection or when forensically imaging the entire cluster is impossible. The primary methods for identifying HDFS evidence are to:

Generate directory listings from the cluster
Review the total data volume on each of the nodes

Generating directory listings from the cluster is a straightforward process of accessing the cluster from a client and running the Hadoop directory listing command. The cluster is accessed from a client either by logging in directly through a cluster node or by logging in from a remote machine. The command to print a directory listing is as follows:

# hdfs dfs -lsr /

This generates a recursive directory listing of all HDFS files, starting from the root directory. The command produces the filenames, directories, permissions, and file sizes of all files. The output can be piped to an output file, which should be saved to an external storage device for review.

Identifying Hive evidence

Hive evidence can be identified through HiveQL commands. The following table lists the commands that can be used to get a full listing of all databases and tables, as well as the tables' formats:

Command                                Description
SHOW DATABASES;                        This lists all available databases
SHOW TABLES;                           This lists all tables in the current database
USE databaseName;                      This makes databaseName the current database
DESCRIBE (FORMATTED|EXTENDED) table;   This lists the formatting details about the table

Identifying all tables and their formats requires iterating through every database and generating a list of tables and each table's format. This process can be performed either manually or through an automated HiveQL script file. These commands do not provide information about database and table metadata, such as the number of records and the last modified date, but they do give a full listing of all available, online Hive data.
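As a rough illustration of such an automated pass (not from the book), the following Python sketch shells out to the hive command-line client, which is assumed to be installed and configured on the client machine, and walks every database and table:

import subprocess

def hive(query):
    # Run a HiveQL statement with the hive CLI and return its stdout lines.
    out = subprocess.check_output(['hive', '-S', '-e', query])
    return out.decode().splitlines()

# Enumerate databases, then tables, then each table's format details.
for database in hive('SHOW DATABASES;'):
    print('DATABASE: ' + database)
    for table in hive('USE {0}; SHOW TABLES;'.format(database)):
        print('  TABLE: ' + table)
        for line in hive('USE {0}; DESCRIBE FORMATTED {1};'.format(database, table)):
            print('    ' + line)

The printed output can be redirected to a file and preserved with the rest of the collection documentation.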
HiveQL can also be used to preview the data using subset queries. The following example shows how to identify the top ten rows in a Hive table:

SELECT * FROM Table1 LIMIT 10

Identifying HBase evidence

HBase evidence is stored in tables, and identifying the names of the tables and the properties of each is important for data collection. HBase stores metadata information in the -ROOT- and .META. tables. These tables can be queried using HBase shell commands to identify information about all tables in the HBase cluster. Information about the HBase cluster can be gathered using the status command from the HBase shell:

status

This produces the following output:

2 servers, 0 dead, 1.5000 average load

For additional information about the names and locations of the servers, as well as the total disk sizes for the memstores and HFiles, the status command can be given the detailed parameter. The list command outputs every HBase table. The one table created in HBase, testTable, is shown via the following command:

list

This produces the following output:

TABLE
testTable
1 row(s) in 0.0370 seconds
=> ["testTable"]

Information about each table can be generated using the describe command:

describe 'testTable'

The following output is generated:

'testTable', {NAME => 'account', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}, {NAME => 'address', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}
1 row(s) in 0.0300 seconds

The describe command yields several useful pieces of information about each table. Each of the column families is listed and, for each family, the encoding, the number of versions retained, and whether deleted cells are kept are also listed. Security information about each table can be gathered using the user_permission command as follows:

user_permission 'testTable'

This command is useful for identifying the users who currently have access to the table. As mentioned before, user accounts are not as meaningful in Hadoop because of the distributed nature of Hadoop configurations but, in some cases, knowing who had access to tables can be tied back to system logs to identify individuals who accessed the system and data.

Summary

Hadoop evidence comes in many forms. The methods for identifying the evidence require the forensic investigator to understand the Hadoop architecture and the options for identifying the evidence within HDFS and Hadoop applications. In Big Data Forensics, these topics are covered in more depth, from the internals of Hadoop to conducting a collection from a distributed server.

Resources for Article:
Further resources on this subject:
Introduction to Hadoop [article]
Hadoop Monitoring and its aspects [article]
Understanding Hadoop Backup and Recovery Needs [article]

Scientific Computing APIs for Python

Packt
20 Aug 2015
23 min read
In this article by Hemant Kumar Mehta, author of the book Mastering Python Scientific Computing, we will have a comprehensive discussion of the features and capabilities of various scientific computing APIs and toolkits in Python. Besides the basics, we will also discuss some example programs for each of the APIs. As symbolic computing is a relatively different area of computerized mathematics, we have kept a special subsection within the SymPy section to discuss the basics of computerized algebra systems. In this article, we will cover the following topics:

Scientific numerical computing using NumPy and SciPy
Symbolic computing using SymPy

(For more resources related to this topic, see here.)

Numerical scientific computing in Python

Scientific computing mainly demands facilities for performing calculations on algebraic equations, matrices, differentiation, integration, differential equations, statistics, equation solvers, and much more. By default, Python doesn't come with these functionalities. However, the development of NumPy and SciPy has enabled us to perform these operations and much more advanced functionality beyond them. NumPy and SciPy are very powerful Python packages that enable users to efficiently perform the desired operations for all types of scientific applications.

The NumPy package

NumPy is the basic Python package for scientific computing. It provides multi-dimensional arrays and basic mathematical operations such as linear algebra. Python provides several data structures to store user data; the most popular are lists and dictionaries. List objects may store any type of Python object as an element, and these elements can be processed using loops or iterators. Dictionary objects store data in a key/value format.

The ndarray data structure

The ndarray is similar to a list but highly flexible and efficient. An ndarray is an array object that represents a multidimensional array of fixed-size items, and this array should be homogeneous. It has an associated dtype object that defines the data type of the elements in the array. This object defines the type of the data (integer, float, or Python object), the size of the data in bytes, and the byte ordering (big-endian or little-endian). Moreover, if the type of the data is record or sub-array, it also contains details about them. The actual array can be constructed using any one of the array, zeros, or empty methods. Another important aspect of ndarrays is that their size can be dynamically modified.

Moreover, if the user needs to remove some elements from an array, this can be done using the module for masked arrays. In a number of situations, scientific computing demands the deletion or removal of incorrect or erroneous data. The numpy.ma module provides the masked array facility to easily remove selected elements from arrays. A masked array is nothing but a normal ndarray with an associated mask. The mask is another array of true or false values: where the mask is true, the corresponding element of the main array is treated as invalid (masked), and where the mask is false, the corresponding element is valid. In such a case, the masked elements will not be considered when performing any computation on the ndarray.
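The article does not include a masked-array listing; as a small illustrative sketch (not from the book, with arbitrary sample readings), numpy.ma can hide erroneous values like this:

import numpy as np
import numpy.ma as ma

readings = np.array([4.1, 3.9, -999.0, 4.3, -999.0, 4.0])

# Mark the -999.0 sentinel values as invalid; a True entry in the mask means
# "masked", so those elements are ignored by subsequent computations.
masked_readings = ma.masked_values(readings, -999.0)

print(masked_readings)          # [4.1 3.9 -- 4.3 -- 4.0]
print(masked_readings.mean())   # mean of the valid elements only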
File handling

Another important aspect of scientific computing is storing data in files, and NumPy supports reading and writing both text and binary files. Mostly, text files are a good way of reading, writing, and exchanging data, as they are inherently portable and most platforms have the capability to manipulate them by default. However, for some applications it is better to use binary files, or the desired data for such applications can only be stored in binary files. Sometimes the size and the nature of the data, such as images or sound, require it to be stored in binary files. In comparison to text files, binary files are harder to manage as they have specific formats. However, binary files are comparatively very small, and read/write operations on them are much faster than on text files. This fast read/write is most suitable for applications working on large datasets. The only drawback of binary files manipulated with NumPy is that they are accessible only through NumPy.

Python has text-file manipulation functions such as open, readlines, and writelines. However, it is not performance-efficient to use these functions for scientific data manipulation; these default Python functions are very slow at reading and writing data in files. NumPy has a high-performance alternative that loads the data into an ndarray before the actual computation. In NumPy, text files can be accessed using the numpy.loadtxt and numpy.savetxt functions. The loadtxt function can be used to load the data from text files into ndarrays. NumPy also has separate functions to manipulate the data in binary files: the functions for reading and writing are numpy.load and numpy.save, respectively.

Sample NumPy programs

A NumPy array can be created from a list or tuple using array; this method can transform sequences of sequences into a two-dimensional array.

import numpy as np
x = np.array([4,432,21], int)
print x                            #Output [  4 432  21]
x2d = np.array( ((100,200,300), (111,222,333), (123,456,789)) )
print x2d

Output:

[  4 432  21]
[[100 200 300]
 [111 222 333]
 [123 456 789]]

Basic matrix arithmetic operations can easily be performed on two-dimensional arrays, as in the following program. These operations are actually applied element-wise, so the operand arrays must be of equal size; if the sizes do not match, performing these operations will cause a runtime error. Consider the following example of arithmetic operations on one-dimensional arrays:

import numpy as np
x = np.array([4,5,6])
y = np.array([1,2,3])
print x + y                      # output [5 7 9]
print x * y                      # output [ 4 10 18]
print x - y                      # output [3 3 3]
print x / y                      # output [4 2 2]
print x % y                      # output [0 1 0]
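Before moving on to matrices, here is a minimal sketch (not from the book) of the text and binary file functions mentioned above; the file names are arbitrary:

import numpy as np

data = np.array([[1.0, 2.0, 3.0],
                 [4.0, 5.0, 6.0]])

# Write the array to a plain-text file and read it back into an ndarray.
np.savetxt('data.txt', data, delimiter=',')
restored = np.loadtxt('data.txt', delimiter=',')
print(restored)

# The binary counterparts store the data in NumPy's .npy format.
np.save('data.npy', data)
print(np.load('data.npy'))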
There is a separate subclass named matrix for performing matrix operations. Let us understand matrix operations with the following example, which demonstrates the difference between array-based multiplication and matrix multiplication. NumPy matrices are two-dimensional, while arrays can be of any dimension.

import numpy as np
x1 = np.array( ((1,2,3), (1,2,3), (1,2,3)) )
x2 = np.array( ((1,2,3), (1,2,3), (1,2,3)) )
print "First 2-D Array: x1"
print x1
print "Second 2-D Array: x2"
print x2
print "Array Multiplication"
print x1*x2

mx1 = np.matrix( ((1,2,3), (1,2,3), (1,2,3)) )
mx2 = np.matrix( ((1,2,3), (1,2,3), (1,2,3)) )
print "Matrix Multiplication"
print mx1*mx2

Output:

First 2-D Array: x1
[[1 2 3]
 [1 2 3]
 [1 2 3]]
Second 2-D Array: x2
[[1 2 3]
 [1 2 3]
 [1 2 3]]
Array Multiplication
[[1 4 9]
 [1 4 9]
 [1 4 9]]
Matrix Multiplication
[[ 6 12 18]
 [ 6 12 18]
 [ 6 12 18]]

The following is a simple program to demonstrate the simple statistical functions given in NumPy:

import numpy as np
x = np.random.randn(10)   # Creates an array of 10 random elements
print x
mean = x.mean()
print mean
std = x.std()
print std
var = x.var()
print var

First sample output:

[ 0.08291261  0.89369115  0.641396   -0.97868652  0.46692439 -0.13954144
 -0.29892453  0.96177167  0.09975071  0.35832954]
0.208762357623
0.559388806817
0.312915837192

Second sample output:

[ 1.28239629  0.07953693 -0.88112438 -2.37757502  1.31752476  1.50047537
  0.19905071 -0.48867481  0.26767073  2.660184  ]
0.355946458357
1.35007701045
1.82270793415

The above programs are some simple examples of NumPy.

The SciPy package

SciPy extends Python and NumPy support by providing advanced mathematical functions such as differentiation, integration, differential equations, optimization, interpolation, advanced statistical functions, equation solvers, and much more. SciPy is written on top of the NumPy array framework. SciPy takes the arrays and the basic operations on arrays provided by NumPy and extends them to cover most of the mathematical aspects regularly required by scientists and engineers for their applications. In this article we will cover examples of some basic functionality.

The optimization package

The optimization package in SciPy provides the facility to solve univariate and multivariate minimization problems. It provides solutions to minimization problems using a number of algorithms and methods. Minimization problems have a wide range of applications in science and commercial domains. Generally, we perform linear regression, search for a function's minimum and maximum values, find the root of a function, and do linear programming for such cases. All of these functionalities are supported by the optimization package.

The interpolation package

A number of interpolation methods and algorithms are provided in this package as built-in functions. It provides facilities to perform univariate and multivariate interpolation, one-dimensional and two-dimensional splines, and so on. We use univariate interpolation when the data depends on one variable, and multivariate interpolation when the data depends on more than one variable. Besides these functionalities, it also provides Lagrange and Taylor polynomial interpolators.

Integration and differential equations in SciPy

Integration is an important mathematical tool for scientific computations. The SciPy integration sub-package provides functionality to perform numerical integration. SciPy provides a range of functions to perform integration on equations and data. It also has an ordinary differential equation integrator. It provides various functions to perform numerical integration using a number of methods from numerical analysis.
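The integration sub-package is not demonstrated in the sample programs later in this article, so here is a small illustrative sketch using scipy.integrate; the integrand and the differential equation are arbitrary examples:

import numpy as np
from scipy.integrate import quad, odeint

# Definite integral of sin(x) over [0, pi]; the exact answer is 2.
value, error_estimate = quad(np.sin, 0, np.pi)
print(value, error_estimate)

# Simple ODE dy/dt = -2*y with y(0) = 1, solved on t in [0, 5].
def dydt(y, t):
    return -2.0 * y

t = np.linspace(0, 5, 11)
y = odeint(dydt, 1.0, t)
print(y.flatten())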
The stats module

The SciPy stats module contains functions for most of the probability distributions and a wide range of statistical functions. The supported probability distributions include various continuous distributions, multivariate distributions, and discrete distributions. The statistical functions range from simple means to the most complex statistical concepts, including skewness, kurtosis, and the chi-square test, to name a few.

The clustering package and spatial algorithms in SciPy

Clustering analysis is a popular data mining technique with a wide range of applications in scientific and commercial domains. In the science domain, biology, particle physics, astronomy, life science, and bioinformatics are a few subjects widely using clustering analysis for problem solving. Clustering analysis is also used extensively in computer science for computerized fraud detection, security analysis, image processing, and more. The clustering package provides functionality for K-means clustering, vector quantization, and hierarchical and agglomerative clustering. The spatial class has functions to analyze distances between data points using triangulations, Voronoi diagrams, and convex hulls of a set of points. It also has KDTree implementations for performing nearest-neighbor lookups.
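The spatial class is not exercised in the sample programs below, so here is a tiny nearest-neighbor sketch (the points are arbitrary):

import numpy as np
from scipy.spatial import KDTree

# Ten random 2-D points and a KD-tree built over them.
points = np.random.rand(10, 2)
tree = KDTree(points)

# Nearest neighbor of the query point (0.5, 0.5).
distance, index = tree.query([0.5, 0.5])
print(distance, points[index])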
The program for the above function is: import numpy as np from scipy.optimize import minimize   # Definition of Rosenbrock function def rosenbrock(x):      return sum(100.0*(x[1:]-x[:-1]**2.0)**2.0 + (1-x[:-1])**2.0)   x0 = np.array([1, 0.7, 0.8, 2.9, 1.1]) res = minimize(rosenbrock, x0, method = 'nelder-mead', options = {'xtol': 1e-8, 'disp': True})   print(res.x) Output is: Optimization terminated successfully.          Current function value: 0.000000          Iterations: 516          Function evaluations: 827 [ 1.  1.  1.  1.  1.] The last line is the output of print(res.x) where all the elements of array are 1. Image processing using SciPy Following two programs are developed to demonstrate the image processing functionality of SciPy. First of these program is simply displaying the standard test image widely used in the field of image processing called Lena. The second program is applying geometric transformation on this image. It performs image cropping and rotation by 45 %. The following program is displaying Lena image using matplotlib API. The imshow method renders the ndarrays into an image and the show method displays the image. from scipy import misc l = misc.lena() misc.imsave('lena.png', l) import matplotlib.pyplot as plt plt.gray() plt.imshow(l) plt.show() Output: The output of the above program is the following screen shot: The following program is performing geometric transformation. This program is displaying transformed images and along with the original image as a four axis array. import scipy from scipy import ndimage import matplotlib.pyplot as plt import numpy as np   lena = scipy.misc.lena() lx, ly = lena.shape crop_lena = lena[lx/4:-lx/4, ly/4:-ly/4] crop_eyes_lena = lena[lx/2:-lx/2.2, ly/2.1:-ly/3.2] rotate_lena = ndimage.rotate(lena, 45)   # Four axes, returned as a 2-d array f, axarr = plt.subplots(2, 2) axarr[0, 0].imshow(lena, cmap=plt.cm.gray) axarr[0, 0].axis('off') axarr[0, 0].set_title('Original Lena Image') axarr[0, 1].imshow(crop_lena, cmap=plt.cm.gray) axarr[0, 1].axis('off') axarr[0, 1].set_title('Cropped Lena') axarr[1, 0].imshow(crop_eyes_lena, cmap=plt.cm.gray) axarr[1, 0].axis('off') axarr[1, 0].set_title('Lena Cropped Eyes') axarr[1, 1].imshow(rotate_lena, cmap=plt.cm.gray) axarr[1, 1].axis('off') axarr[1, 1].set_title('45 Degree Rotated Lena')   plt.show() Output: The SciPy and NumPy are core of Python's support for scientific computing as they provide solid functionality of numerical computing. Symbolic computations using SymPy Computerized computations performed over the mathematical symbols without evaluating or changing their meaning is called as symbolic computations. Generally the symbolic computing is also called as computerized algebra and such computerized system are called computer algebra system. The following subsection has a brief and good introduction to SymPy. Computer Algebra System (CAS) Let us discuss the concept of CAS. CAS is a software or toolkit to perform computations on mathematical expressions using computers instead of doing it manually. In the beginning, using computers for these applications was named as computer algebra and now this concept is called as symbolic computing. CAS systems may be grouped into two types. First is the general purpose CAS and the second type is the CAS specific to particular problem. The general purpose systems are applicable to most of the area of algebraic mathematics while the specialized CAS are the systems designed for the specific area such as group theory or number theory. 
Most of the time, we prefer a general purpose CAS to manipulate mathematical expressions for scientific applications.

Features of a general purpose CAS

The desired features of a general purpose computer algebra system for scientific applications are:

A user interface to manipulate mathematical expressions.
An interface for programming and debugging.
A simplifier, which is an essential component of any computerized algebra system, because such systems need to simplify a wide variety of mathematical expressions.
An exhaustive set of functions to perform the mathematical operations required by algebraic computations.
Efficient memory management, since most applications perform extensive computations.
Support for mathematical computations on high-precision numbers and large quantities.

A brief idea of SymPy

SymPy is an open source, Python-based implementation of a computerized algebra system (CAS). The philosophy behind SymPy's development is to design and develop a CAS that has all the desired features, yet whose code is as simple as possible so that it is highly and easily extensible. It is written completely in Python and does not require any external library.

The basic idea behind using SymPy is the creation and manipulation of expressions. Using SymPy, the user represents mathematical expressions in the Python language using SymPy classes and objects. These expressions are composed of numbers, symbols, operators, functions, and so on. The functions are the modules that perform a piece of mathematical functionality, such as logarithms or trigonometry.

The development of SymPy was started by Ondřej Čertík in August 2006. Since then, it has grown considerably with contributions from more than a hundred contributors. The library now consists of 26 different integrated modules. These modules can perform the computations required for basic symbolic arithmetic, calculus, algebra, discrete mathematics, quantum physics, plotting, and printing, with the option to export the output of the computations to LaTeX and other formats.

The capabilities of SymPy can be divided into two categories, core capabilities and advanced capabilities, as the SymPy library is divided into a core module and several advanced optional modules. The functionality supported by the various modules is as follows:

Core capabilities

The core capability module supports the basic functionality required by any mathematical algebra operation. These operations include basic arithmetic, such as multiplication, addition, subtraction, division, and exponentiation. It also supports the simplification of complex expressions, and provides functionality for the expansion of series and symbols. The core module also supports functions for operations related to trigonometry, hyperbolic functions, exponentials, roots of equations, polynomials, factorials and gamma functions, and logarithms, along with a number of special functions for B-splines, spherical harmonics, tensor functions, orthogonal polynomials, and so on.

There is also strong support for pattern matching operations in the core module. The core capabilities of SymPy include the functionality to support the substitutions required by algebraic operations. It supports not only high-precision arithmetic operations over integers, rational, and floating-point numbers, but also the non-commutative variables and symbols required in polynomial operations.
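The following short sketch illustrates a few of these core capabilities, namely substitution, simplification of a trigonometric identity, and high-precision numerical evaluation. It assumes only that SymPy is installed; the expressions are chosen purely for illustration.

from sympy import Symbol, sin, cos, sqrt, pi, trigsimp

x = Symbol('x')

# Substitution: replace a symbol with a concrete value
expr = x**2 + 2*x + 1
print(expr.subs(x, 3))                    # 16

# Simplification of a trigonometric identity
print(trigsimp(sin(x)**2 + cos(x)**2))    # 1

# High-precision arithmetic: sqrt(2)*pi evaluated to 30 digits
print((sqrt(2) * pi).evalf(30))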
Polynomials

Various functions for performing polynomial operations belong to the polynomial module. These functions include basic polynomial operations such as division, greatest common divisor (GCD), least common multiple (LCM), and square-free factorization; the representation of polynomials with symbolic coefficients; special operations such as the computation of resultants; deriving trigonometric identities; partial fraction decomposition; and facilities for Gröbner bases over polynomial rings and fields.

Calculus

This module provides the functionality to support the various operations required by basic and advanced calculus. It supports limits (there is a limit function for this), differentiation, integration, series expansion, differential equations, and the calculus of finite differences. SymPy also has special support for definite integrals and integral transforms. For differentiation, it supports numerical differentiation, the composition of derivatives, and fractional derivatives.

Solving equations

Solver is the name of the SymPy module that provides equation-solving functionality. This module supports solving capabilities for complex polynomials, roots of polynomials, and systems of polynomial equations. There is a function to solve algebraic equations. It provides support not only for the solution of differential equations, including ordinary differential equations, some forms of partial differential equations, and initial and boundary value problems, but also for the solution of difference equations. In mathematics, a difference equation is also called a recurrence relation, that is, an equation that recursively defines a sequence or a multidimensional array of values.

Discrete math

Discrete mathematics includes those mathematical structures that are discrete in nature, rather than continuous as in calculus. It deals with integers, graphs, statements from logic theory, and so on. This module has full support for binomial coefficients, products, and summations. It also supports various functions from number theory, including residues, Euler's totient function, partitions, and a number of functions dealing with prime numbers and their factorizations. SymPy also supports the creation and manipulation of logic expressions using symbolic and Boolean values.

Matrices

SymPy has strong support for the various operations related to matrices and determinants. Matrices belong to the linear algebra category of mathematics. The module supports the creation of matrices, basic matrix operations such as multiplication and addition, matrices of zeros and ones, the creation of random matrices, and operations on matrix elements. It also supports special functions, such as the computation of the Hessian matrix for a function, the Gram-Schmidt process on a set of vectors, and the computation of the Wronskian for a matrix of functions. It has full support for eigenvalues and eigenvectors, matrix inversion, and the solution of matrices and determinants. For computing determinants, it supports the Bareiss fraction-free algorithm and the Berkowitz algorithm, in addition to other methods. For matrices, it also supports nullspace calculation, cofactor expansion tools, the calculation of derivatives of matrix elements, and the calculation of the dual of a matrix.
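To make these descriptions concrete, here is a small sketch that touches the calculus, solver, and matrix facilities in turn. The expressions and the matrix are chosen only for illustration.

from sympy import Symbol, Matrix, limit, diff, integrate, solve, sin, cos

x = Symbol('x')

# Calculus: a limit, a derivative, and an indefinite integral
print(limit(sin(x)/x, x, 0))          # 1
print(diff(x**3 + 2*x, x))            # 3*x**2 + 2
print(integrate(cos(x), x))           # sin(x)

# Solver: roots of a quadratic equation
print(solve(x**2 - 5*x + 6, x))       # [2, 3]

# Matrices: determinant and eigenvalues of a 2x2 matrix
m = Matrix([[1, 2], [2, 1]])
print(m.det())                        # -3
print(m.eigenvals())                  # {3: 1, -1: 1}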
Geometry

SymPy also has a module that supports various operations associated with two-dimensional (2-D) geometry. It supports the creation of various 2-D entities or objects, such as points, lines, circles, ellipses, polygons, triangles, rays, and segments. It also allows us to perform queries on these entities, such as the area of suitable objects (for example, an ellipse, circle, or triangle) or the intersection points of lines. It also supports other queries, such as tangency determination and finding the similarity and intersection of entities.

Plotting

There is a very good module that allows us to draw two-dimensional and three-dimensional plots. At present, the plots are rendered using the matplotlib package. It also supports other backends, such as TextBackend, Pyglet, textplot, and so on. It has a very good interactive interface for customization and for plotting various geometric entities; a minimal plotting sketch appears at the end of this overview. The plotting module has the following functions:

Plotting 2-D line plots
Plotting 2-D parametric plots
Plotting 2-D implicit and region plots
Plotting 3-D plots of functions involving two variables
Plotting 3-D line and surface plots

Physics

There is also a module for solving problems from the physics domain. It supports functionality for mechanics, including classical and quantum mechanics, and for high energy physics. It has functions to support Pauli algebra and quantum harmonic oscillators in 1-D and 3-D. It also has functionality for optics. There is a separate module that integrates unit systems into SymPy. This allows users to select a specific unit system for performing their computations and to convert between units. The unit systems are composed of units and constants for computations.

Statistics

The statistics module was introduced in SymPy to support the various statistical concepts required in mathematical computations. Apart from supporting various continuous and discrete statistical distributions, it also supports functionality related to symbolic probability. Generally, these distributions support functions for random number generation in SymPy.

Printing

SymPy has a module that provides full support for pretty-printing. Pretty-printing is the rendering of content, such as source code, plain text, or markup, in a nicely formatted, readable form. This module produces the desired output by printing with ASCII and/or Unicode characters. It supports various printers, such as the LaTeX and MathML printers. It is also capable of producing source code in various programming languages, such as C, Python, or Fortran, and of producing content in markup languages such as HTML and XML.

SymPy modules

The following list gives the formal names of the modules discussed in the preceding paragraphs:

Assumptions: the assumption engine
Concrete: symbolic products and summations
Core: the basic class structure: Basic, Add, Mul, Pow, and so on
functions: elementary and special functions
galgebra: geometric algebra
geometry: geometric entities
integrals: the symbolic integrator
interactive: interactive sessions (for example, IPython)
logic: Boolean algebra and theorem proving
matrices: linear algebra and matrices
mpmath: fast arbitrary-precision numerical math
ntheory: number theory functions
parsing: Mathematica and Maxima parsers
physics: physical units and quantum mechanics
plotting: 2-D and 3-D plots using Pyglet
polys: polynomial algebra and factorization
printing: pretty-printing and code generation
series: symbolic limits and truncated series
simplify: rewriting expressions in other forms
solvers: algebraic, recurrence, and differential solvers
statistics: standard probability distributions
utilities: test framework and compatibility stuff
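As a brief illustration of the plotting module, the following sketch draws a simple 2-D line plot. It assumes that matplotlib is installed, since that is the default rendering backend.

from sympy import Symbol, sin
from sympy.plotting import plot

x = Symbol('x')

# Render a 2-D line plot of sin(x)/x over the interval [-10, 10]
plot(sin(x) / x, (x, -10, 10), title='sin(x)/x')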
There are numerous symbolic computing systems available in various mathematical toolkits. Some are proprietary, such as Maple and Mathematica, and there are also open source alternatives, such as Singular and AXIOM. However, these products have their own scripting languages, it is difficult to extend their functionality, and they have slow development cycles. SymPy, in contrast, is highly extensible, is designed and developed in the Python language, and is an open source API that supports a speedy development life cycle.

Simple exemplary programs

The following very simple examples give an idea of SymPy's capabilities. Each is less than ten lines of SymPy source code, and together they cover topics ranging from basic symbol manipulation to limits, differentiation, and integration. We can test the execution of these programs on SymPy Live, which runs SymPy online on Google App Engine and is available at http://live.sympy.org/.

Basic symbol manipulation

The following code defines three symbols and an expression on these symbols, and finally prints the expression.

import sympy
a = sympy.Symbol('a')
b = sympy.Symbol('b')
c = sympy.Symbol('c')
e = ( a * b * b + 2 * b * a * b) + (a * a + c * c)
print e

Output:
a**2 + 3*a*b**2 + c**2     (here ** represents the power operation)

Expression expansion in SymPy

The following program demonstrates the concept of expression expansion. It defines two symbols and a simple expression on these symbols, and finally prints the expression and its expanded form.

import sympy
a = sympy.Symbol('a')
b = sympy.Symbol('b')
e = (a + b) ** 4
print e
print e.expand()

Output:
(a + b)**4
a**4 + 4*a**3*b + 6*a**2*b**2 + 4*a*b**3 + b**4

Simplification of an expression or formula

SymPy has the facility to simplify mathematical expressions. The following program simplifies two expressions and displays the results. Note that exp and simplify are imported explicitly so that the program also runs outside an interactive SymPy session.

import sympy
from sympy import exp, simplify
x = sympy.Symbol('x')
a = 1/x + (x*exp(x) - 1)/x
print simplify(a)
print simplify((x ** 3 + x ** 2 - x - 1)/(x ** 2 + 2 * x + 1))

Output:
exp(x)
x - 1

Simple integrations

The following program calculates the integrals of two simple functions.

import sympy
from sympy import integrate
x = sympy.Symbol('x')
print integrate(x ** 3 + 2 * x ** 2 + x, x)
print integrate(x / (x ** 2 + 2 * x), x)

Output:
x**4/4 + 2*x**3/3 + x**2/2
log(x + 2)

Summary

In this article, we have discussed the concepts, features, and selected sample programs of various scientific computing APIs and toolkits. The article started with a discussion of NumPy and SciPy. After covering NumPy and SciPy, we discussed the concepts associated with symbolic computing and SymPy. In the remainder of the article, we discussed interactive computing and data analysis and visualization, along with their APIs and toolkits: IPython is the Python toolkit for interactive computing, and we also discussed the data analysis package Pandas and the data visualization API named Matplotlib.

Resources for Article:

Further resources on this subject:
Optimization in Python [article]
How to do Machine Learning with Python [article]
Bayesian Network Fundamentals [article]
Dimensionality Reduction with Principal Component Analysis

Packt
18 Aug 2015
In this article by Eric Mayor, author of the book Learning Predictive Analytics with R, we will be discussing how to use PCA in R. Nowadays, the access to data is easier and cheaper than ever before. This leads to a proliferation of data in organizations' data warehouses and on the Internet. Analyzing this data is not trivial, as the quantity often makes the analysis difficult or unpractical. For instance, the data is often more abundant than available memory on the machines. The available computational power is also often not enough to analyze the data in a reasonable time frame. One solution is to have recourse to technologies that deal with high dimensionality in data (big data). These solutions typically use the memory and computing power of several machines for analysis (computer clusters). However, most organizations do not have such infrastructure. Therefore, a more practical solution is to reduce the dimensionality of the data while keeping the essential of the information intact. (For more resources related to this topic, see here.) Another reason to reduce dimensionality is that, in some cases, there are more attributes than observations. If some scientists were to store the genome of all inhabitants of Europe and the United States, the number of cases (approximately 1 billion) would be much less than the 3 billion base pairs in the human DNA. Most analyses do not work well or not at all when observations are less than attributes. Confronted with this problem, the data analysts might select groups of attributes, which go together (like height and weight, for instance) according to their domain knowledge, and reduce the dimensionality of the dataset. The data is often structured across several relatively independent dimensions, with each dimension measured using several attributes. This is where principal component analysis (PCA) is an essential tool, as it permits each observation to receive a score on each of the dimensions (determined by PCA itself) while allowing to discard the attributes from which the dimensions are computed. What is meant here is that for each of the obtained dimensions, values (scores) will be produced that combines several attributes. These can be used for further analysis. In the next section, we will use PCA to combine several attributes in the questionnaire data. We will see that the participants' self-report on items such as lively, excited, enthusiastic (and many more) can be combined in a single dimension we call positive arousal. In this sense, PCA performs both dimensionality reduction (discard the attributes) and feature extraction (compute the dimensions). The features can then be used for classification and regression problems. Another use of PCA is to check that the underlying structure of the data corresponds to a theoretical model. For instance, in a questionnaire, a group of questions might assess construct A, and another group construct B, and so on. PCA will find two different factors if indeed there is more similarity in the answers of participants within each group of questions compared to the overall questionnaire. Researchers in fields such as psychology use PCA mainly to test their theoretical model in this fashion. This article is a shortened version of the chapter with the same name in my book Learning predictive analytics with R. In what follows, we will examine how to use PCA in R. We will notably discover how to interpret the results and perform diagnostics. 
Learning PCA in R In this section, we will learn more about how to use PCA in order to obtain knowledge from data, and ultimately reduce the number of attributes (also called features). The first dataset we will use is the msg dataset from the psych package. The msq (for motivational state questionnaire) data set is composed of 92 attributes, of which 72 is the rating of adjectives by 3896 participants describing their mood. We will only use these 72 attributes for the current purpose, which is the exploration of the structure of the questionnaire. We will, therefore, start by installing and loading the package and the data, and assign the data we are interested in (the 75 attributes mentioned) to an object called motiv. install.packages("psych") library(psych) data(msq) motiv = msq[,1:72] Dealing with missing values Missing values are a common problem in real-life datasets like the one we use here. There are several ways to deal with them, but here we will only mention omitting the cases where missing values are encountered. Let's see how many missing values (we will call them NAs) there are for each attribute in this dataset: apply(is.na(motiv),2,sum) View of missing data in the dataset We can see that for many attributes this is unproblematic (there are only few missing values). However, in the case of several attributes, the number of NAs is quite high (anxious: 1849 NAs, cheerful: 1850, idle: 1848, inactive: 1846, tranquil: 1843). The most probable explanation is that these items have been dropped from some of the samples in which the data has been collected. Removing all cases with NAs from the dataset would result in an empty dataset in this particular case. Try the following in your console to verify this claim: na.omit(motiv) For this reason, we will simply deal with the problem by removing these attributes from the analysis. Remember that there are other ways, such as data imputation, to solve this issue while keeping the attributes. In order to do so, we first need to know which column number corresponds to the attributes we want to suppress. An easy way to do this is simply by printing the names of each columns vertically (using cbind() for something it was not exactly made for). We will omit cases with missing values on other attributes from the analysis later. head(cbind(names(motiv)),5) Here, we only print the result for the first five columns of the data frame: [,1] [1,] active [2,] afraid [3,] alert [4,] angry [5,] anxious We invite the reader to verify on his screen whether we suppress the correct columns from the analysis using the following vector to match the attributes: ToSuppress = c(5, 15, 37, 38, 66) Let's check whether it is correct: names(motiv[ToSuppress]) The following is the output: [1] "anxious" "cheerful" "idle" "inactive" "tranquil" Naming the components using the loadings We will run the analysis using the principal() function from the psych package. This will allow examining the loadings in a sorted fashion and making them independent from the others using a rotation. We will extract five components. The psych package has already been loaded, as the data we are analyzing comes from a dataset, which is included in psych. Yet, we need to install another package, the GPArotation package, which is required for the analysis we will perform next: install.packages("GPArotation") library(GPArotation) We can now run the analysis. This time we want to apply an orthogonal rotation varimax, in order to obtain independent factorial scores. 
Precisions are provided in the paper Principal component analysis by Abdi and Williams (2010). We also want that the analysis uses imputed data for missing values and estimates the scores for each observation on each retained principal component. We use the print.psych() function to print the sorted loadings, which will make the interpretation of the principal components easier: Pca2 = principal(motiv[,-ToSuppress],nfactors = 5, rotate = "varimax", missing = T, scores = T) print.psych(Pca2, sort =T) The annotated output displayed in the following screenshot has been slightly altered in order to allow it to fit on one page. Results of the PCA with the principal() function using varimax rotation The name of the items are displayed in the first column of the first matrix of results. The column item displays their order. The component loadings for the five retained components are displayed next (RC1 to RC5). We will comment on h2 and u2 later. The loadings of the attributes on the loadings can be used to name the principal components. As can be seen in the preceding screenshot, the first component could be called Positive arousal, the second component could be called Negative arousal, the third could be called Serenity, the fourth Exhaustion, and the last component could be called Fear. It is worth noting that the MSQ has theoretically four components obtained from two dimensions: energy and tension. Therefore, the fifth component we found is not accounted for by the model. The h2 value indicates the proportion of variance of the variable that is accounted for by the selected components. The u2 value indicates the part of variance not explained by the components. The sum of h2 and u2 is 1. Below the first result matrix, a second matrix indicates the proportion of variance explained by the components on the whole dataset. For instance, the first component explains 21 percent of the variance in the dataset Proportion Var and 38 percent of the variance explained by the five components. Overall, the five components explain 56 percent of the variance in the dataset Cumulative Var. As a remainder, the purpose of PCA is to replace all the original attributes by the scores on these components in further analysis. This is what we will discuss next. PCA scores At this point, you might wonder where data reduction comes into play. Well, the computation of the scores for each factor allows reducing the number of attributes for a dataset. This has been done in the previous section. Accessing the PCA scores We will now examine these scores more thoroughly. Let's start by checking whether the scores are indeed uncorrelated: round(cor(Pca2$scores),3) This is the output: RC1 RC2 RC3 RC4 RC5 RC1 1.000 -0.001 0.000 0.000 0.001 RC2 -0.001 1.000 0.000 0.000 0.000 RC3 0.000 0.000 1.000 -0.001 0.001 RC4 0.000 0.000 -0.001 1.000 0.000 RC5 0.001 0.000 0.001 0.000 1.000 As can be seen in the previous output, the correlations between the values are equal to or lower than 0.001 for every pair of components. The components are basically uncorrelated, as we requested. Let's start by checking that it indeed contains the same number of rows: nrow(Pca2$scores) == nrow(msq)  Here's the output: [1] TRUE As the principal() function didn't remove any cases but imputed the data instead, we can now append the factor scores to the original dataset (a copy of it): bound = cbind(msq,Pca2$score) PCA diagnostics Here, we will briefly discuss two diagnostics that can be performed on the dataset that will be subjected to analysis. 
These diagnostics should be performed analyzing the data with PCA in order to ascertain that PCA is an optimal analysis for the considered data. The first diagnostic is Bartlett's test of sphericity. This test examines the relationship between the variables together, instead of 2 x 2 as in a correlation. To be more specific, it tests whether the correlation matrix is different from an identity matrix (which has ones on the diagonals and zeroes elsewhere). The null hypothesis is that the variables are independent (no underlying structure). The paper When is a correlation matrix appropriate for factor analysis? Some decision rules by Dzuiban and Shirkey (1974) provides more information on the topic. This test can be performed by the cortest.normal() function in the psych package. In this case, the first argument is the correlation matrix of the data we want to subject to PCA, and the n1 argument is the number of cases. We will use the same data set as in the PCA (we first assign the name M to it): M = na.omit(motiv[-ToSuppress]) cortest.normal(cor(M), n1 = nrow(M)) The output is provided as follows: Tests of correlation matrices Call:cortest.normal(R1 = cor(M), n1 = nrow(M)) Chi Square value 829268 with df = 2211 with probability < 0 The last line of the output shows that the data subjected to the analysis is clearly different from an identity matrix: The probability to obtain these results if it were an identity matrix is close to zero (do not pay attention to the > 0, it is simply extremely close to zero). You might be curious about what is the output of an identity matrix. It turns out we have something close to it: the correlations of the PCA scores with the varimax rotation that we examined before. Let's subject the scores to the analysis: cortest.normal(cor(Pca2$scores), n1 = nrow(Pca2$scores)) Tests of correlation matrices Call:cortest.normal(R1 = cor(Pca2$scores), n1 = nrow(Pca2$scores)) Chi Square value 0.01 with df = 10 with probability < 1 In this case, the results show that the correlation matrix is not significantly different from an identity matrix. The other diagnostic is the Kaiser Meyer Olkin (KMO) index, which indicates the part of the data that can be explained by elements present in the dataset. The higher this score the more the proportion of the data is explainable, for instance, by PCA. The KMO (also called MSA for Measure of Sample Adequacy) ranges from 0 (nothing is explainable) to 1 (everything is explainable). It can be returned for each item separately, or for the overall dataset. We will examine this value. It is the first component of the object returned by the KMO()function from psych package. The second component is the list of the values for the individual items (not examined here). It simply takes a matrix, a data frame, or a correlation matrix as an argument. Let's run in on our data: KMO(motiv)[1] This returns a value of 0.9715381, meaning that most of our data is explainable by the analysis. Summary In this article, we have discussed how the PCA algorithm works, how to select the appropriate number of components and how to use PCA scores for further analysis. Resources for Article: Further resources on this subject: Big Data Analysis (R and Hadoop) [article] How to do Machine Learning with Python [article] Clustering and Other Unsupervised Learning Methods [article]
Oracle 12c SQL and PL/SQL New Features

Oli Huggins
16 Aug 2015
In this article by Saurabh K. Gupta, author of the book Oracle Advanced PL/SQL Developer Professional Guide, Second Edition you will learn new features in Oracle 12c SQL and PL/SQL. (For more resources related to this topic, see here.) Oracle 12c SQL and PL/SQL new features SQL is the most widely used data access language while PL/SQL is an exigent language that can integrate seamlessly with SQL commands. The biggest benefit of running PL/SQL is that the code processing happens natively within the Oracle Database. In the past, there have been debates and discussions on server side programming while the client invokes the PL/SQL routines to perform a task. The server side programming approach has many benefits. It reduces the network round trips between the client and the database. It reduces the code size and eases the code portability because PL/SQL can run on all platforms, wherever Oracle Database is supported. Oracle Database 12c introduces many language features and enhancements which are focused on SQL to PL/SQL integration, code migration, and ANSI compliance. This section discusses the SQL and PL/SQL new features in Oracle database 12c. IDENTITY columns Oracle Database 12c Release 1 introduces identity columns in the tables in compliance with the American National Standard Institute (ANSI) SQL standard. A table column, marked as IDENTITY, automatically generate an incremental numeric value at the time of record creation. Before the release of Oracle 12c, developers had to create an additional sequence object in the schema and assign its value to the column. The new feature simplifies code writing and benefits the migration of a non-Oracle database to Oracle. The following script declares an identity column in the table T_ID_COL: /*Create a table for demonstration purpose*/ CREATE TABLE t_id_col (id NUMBER GENERATED AS IDENTITY, name VARCHAR2(20)) / The identity column metadata can be queried from the dictionary views USER_TAB_COLSand USER_TAB_IDENTITY_COLS. Note that Oracle implicitly creates a sequence to generate the number values for the column. However, Oracle allows the configuration of the sequence attributes of an identity column. The custom sequence configuration is listed under IDENTITY_OPTIONS in USER_TAB_IDENTITY_COLS view: /*Query identity column information in USER_TAB_COLS*/ SELECT column_name, data_default, user_generated, identity_column FROM user_tab_cols WHERE table_name='T_ID_COL' / COLUMN_NAME DATA_DEFAULT USE IDE -------------- ------------------------------ --- --- ID "SCOTT"."ISEQ$$_93001".nextval YES YES NAME YES NO Let us check the attributes of the preceding sequence that Oracle has implicitly created. 
Note that the query uses REGEXP_SUBSTR to print the sequence configuration in multiple rows: /*Check the sequence configuration from USER_TAB_IDENTITY_COLS view*/ SELECT table_name,column_name, generation_type, REGEXP_SUBSTR(identity_options,'[^,]+', 1, LEVEL) identity_options FROM user_tab_identity_cols WHERE table_name = 'T_ID_COL' CONNECT BY REGEXP_SUBSTR(identity_options,'[^,]+',1,level) IS NOT NULL / TABLE_NAME COLUMN_NAME GENERATION IDENTITY_OPTIONS ---------- ---------------------- --------------------------------- T_ID_COL ID ALWAYS START WITH: 1 T_ID_COL ID ALWAYS INCREMENT BY: 1 T_ID_COL ID ALWAYS MAX_VALUE: 9999999999999999999999999999 T_ID_COL ID ALWAYS MIN_VALUE: 1 T_ID_COL ID ALWAYS CYCLE_FLAG: N T_ID_COL ID ALWAYS CACHE_SIZE: 20 T_ID_COL ID ALWAYS ORDER_FLAG: N 7 rows selected While inserting data in the table T_ID_COL, do not include the identity column as its value is automatically generated: /*Insert test data in the table*/ BEGIN INSERT INTO t_id_col (name) VALUES ('Allen'); INSERT INTO t_id_col (name) VALUES ('Matthew'); INSERT INTO t_id_col (name) VALUES ('Peter'); COMMIT; END; / Let us check the data in the table. Note the identity column values: /*Query the table*/ SELECT id, name FROM t_id_col / ID NAME ----- -------------------- 1 Allen 2 Matthew 3 Peter The sequence created under the covers for identity columns is tightly coupled with the column. If a user tries to insert a user-defined input for the identity column, the operation throws an exception ORA-32795: INSERT INTO t_id_col VALUES (7,'Steyn'); insert into t_id_col values (7,'Steyn') * ERROR at line 1: ORA-32795: cannot insert into a generated always identity column Default column value to a sequence in Oracle 12c Oracle Database 12c allows developers to default a column directly to a sequence‑generated value. The DEFAULT clause of a table column can be assigned to SEQUENCE.CURRVAL or SEQUENCE.NEXTVAL. The feature will be useful while migrating non-Oracle data definitions to Oracle. The DEFAULT ON NULL clause Starting with Oracle Database 12c, a column can be assigned a default non-null value whenever the user tries to insert NULL into the column. The default value will be specified in the DEFAULT clause of the column with a new ON NULL extension. Note that the DEFAULT ON NULL cannot be used with an object type column. The following script creates a table t_def_cols. A column ID has been defaulted to a sequence while the column DOJ will always have a non-null value: /*Create a sequence*/ CREATE SEQUENCE seq START WITH 100 INCREMENT BY 10 / /*Create a table with a column defaulted to the sequence value*/ CREATE TABLE t_def_cols ( id number default seq.nextval primary key, name varchar2(30), doj date default on null '01-Jan-2000' ) / The following PL/SQL block inserts the test data: /*Insert the test data in the table*/ BEGIN INSERT INTO t_def_cols (name, doj) values ('KATE', '27-FEB-2001'); INSERT INTO t_def_cols (name, doj) values ('NANCY', '17-JUN-1998'); INSERT INTO t_def_cols (name, doj) values ('LANCE', '03-JAN-2004'); INSERT INTO t_def_cols (name) values ('MARY'); COMMIT; END; / Query the table and check the values for the ID and DOJ columns. ID gets the value from the sequence SEQ while DOJ for MARY has been defaulted to 01-JAN-2000. 
/*Query the table to verify sequence and default on null values*/ SELECT * FROM t_def_cols / ID NAME DOJ ---------- -------- --------- 100 KATE 27-FEB-01 110 NANCY 17-JUN-98 120 LANCE 03-JAN-04 130 MARY 01-JAN-00 Support for 32K VARCHAR2 Oracle Database 12c supports the VARCHAR2, NVARCHAR2, and RAW datatypes up to 32,767 bytes in size. The previous maximum limit for the VARCHAR2 (and NVARCHAR2) and RAW datatypes was 4,000 bytes and 2,000 bytes respectively. The support for extended string datatypes will benefit the non-Oracle to Oracle migrations. The feature can be controlled using the initialization parameter MAX_STRING_SIZE. It accepts two values: STANDARD (default)—The maximum size prior to the release of Oracle Database 12c will apply. EXTENDED—The new size limit for string datatypes apply. Note that, after the parameter is set to EXTENDED, the setting cannot be rolled back. The steps to increase the maximum string size in a database are: Restart the database in UPGRADE mode. In the case of a pluggable database, the PDB must be opened in MIGRATEmode. Use the ALTER SYSTEM command to set MAX_STRING_SIZE to EXTENDED. As SYSDBA, execute the $ORACLE_HOME/rdbms/admin/utl32k.sql script. The script is used to increase the maximum size limit of VARCHAR2, NVARCHAR2, and RAW wherever required. Restart the database in NORMAL mode. As SYSDBA, execute utlrp.sql to recompile the schema objects with invalid status. The points to be considered while working with the 32k support for string types are: COMPATIBLE must be 12.0.0.0 After the parameter is set to EXTENDED, the parameter cannot be rolled back to STANDARD In RAC environments, all the instances of the database comply with the setting of MAX_STRING_SIZE Row limiting using FETCH FIRST For Top-N queries, Oracle Database 12c introduces a new clause, FETCH FIRST, to simplify the code and comply with ANSI SQL standard guidelines. The clause is used to limit the number of rows returned by a query. The new clause can be used in conjunction with ORDER BY to retrieve Top-N results. The row limiting clause can be used with the FOR UPDATE clause in a SQL query. In the case of a materialized view, the defining query should not contain the FETCH clause. Another new clause, OFFSET, can be used to skip the records from the top or middle, before limiting the number of rows. For consistent results, the offset value must be a positive number, less than the total number of rows returned by the query. For all other offset values, the value is counted as zero. Keywords with the FETCH FIRST clause are: FIRST | NEXT—Specify FIRST to begin row limiting from the top. Use NEXT with OFFSET to skip certain rows. ROWS | PERCENT—Specify the size of the result set as a fixed number of rows or percentage of total number of rows returned by the query. ONLY | WITH TIES—Use ONLY to fix the size of the result set, irrespective of duplicate sort keys. If you want records with matching sort keys, specify WITH TIES. 
The following query demonstrates the use of the FETCH FIRST and OFFSET clauses in Top-N queries: /*Create the test table*/ CREATE TABLE t_fetch_first (empno VARCHAR2(30), deptno NUMBER, sal NUMBER, hiredate DATE) / The following PL/SQL block inserts sample data for testing: /*Insert the test data in T_FETCH_FIRST table*/ BEGIN INSERT INTO t_fetch_first VALUES (101, 10, 1500, '01-FEB-2011'); INSERT INTO t_fetch_first VALUES (102, 20, 1100, '15-JUN-2001'); INSERT INTO t_fetch_first VALUES (103, 20, 1300, '20-JUN-2000'); INSERT INTO t_fetch_first VALUES (104, 30, 1550, '30-DEC-2001'); INSERT INTO t_fetch_first VALUES (105, 10, 1200, '11-JUL-2012'); INSERT INTO t_fetch_first VALUES (106, 30, 1400, '16-AUG-2004'); INSERT INTO t_fetch_first VALUES (107, 20, 1350, '05-JAN-2007'); INSERT INTO t_fetch_first VALUES (108, 20, 1000, '18-JAN-2009'); COMMIT; END; / The SELECT query pulls in the top-5 rows when sorted by their salary: /*Query to list top-5 employees by salary*/ SELECT * FROM t_fetch_first ORDER BY sal DESC FETCH FIRST 5 ROWS ONLY / EMPNO DEPTNO SAL HIREDATE -------- ------ ------- --------- 104 30 1550 30-DEC-01 101 10 1500 01-FEB-11 106 30 1400 16-AUG-04 107 20 1350 05-JAN-07 103 20 1300 20-JUN-00 The SELECT query lists the top 25% of employees (2) when sorted by their hiredate: /*Query to list top-25% employees by hiredate*/ SELECT * FROM t_fetch_first ORDER BY hiredate FETCH FIRST 25 PERCENT ROW ONLY / EMPNO DEPTNO SAL HIREDATE -------- ------ ----- --------- 103 20 1300 20-JUN-00 102 20 1100 15-JUN-01 The SELECT query skips the first five employees and displays the next two—the 6th and 7th employee data: /*Query to list 2 employees after skipping first 5 employees*/ SELECT * FROM t_fetch_first ORDER BY SAL DESC OFFSET 5 ROWS FETCH NEXT 2 ROWS ONLY / Invisible columns Oracle Database 12c supports invisible columns, which implies that the visibility of a column. A column marked invisible does not appear in the following operations: SELECT * FROM queries on the table SQL* Plus DESCRIBE command Local records of %ROWTYPE Oracle Call Interface (OCI) description A column can be made invisible by specifying the INVISIBLE clause against the column. Columns of all types (except user-defined types), including virtual columns, can be marked invisible, provided the tables are not temporary tables, external tables, or clustered ones. An invisible column can be explicitly selected by the SELECT statement. Similarly, the INSERTstatement will not insert values in an invisible column unless explicitly specified. Furthermore, a table can be partitioned based on an invisible column. A column retains its nullity feature even after it is made invisible. An invisible column can be made visible, but the ordering of the column in the table may change. In the following script, the column NICKNAME is set as invisible in the table t_inv_col: /*Create a table to demonstrate invisible columns*/ CREATE TABLE t_inv_col (id NUMBER, name VARCHAR2(30), nickname VARCHAR2 (10) INVISIBLE, dob DATE ) / The information about the invisible columns can be found in user_tab_cols. Note that the invisible column is marked as hidden: /*Query the USER_TAB_COLS for metadata information*/ SELECT column_id, column_name, hidden_column FROM user_tab_cols WHERE table_name = 'T_INV_COL' ORDER BY column_id / COLUMN_ID COLUMN_NAME HID ---------- ------------ --- 1 ID NO 2 NAME NO 3 DOB NO NICKNAME YES Hidden columns are different from invisible columns. 
Invisible columns can be made visible and vice versa, but hidden columns cannot be made visible. If we try to make the NICKNAME visible and NAME invisible, observe the change in column ordering: /*Script to change visibility of NICKNAME column*/ ALTER TABLE t_inv_col MODIFY nickname VISIBLE / /*Script to change visibility of NAME column*/ ALTER TABLE t_inv_col MODIFY name INVISIBLE / /*Query the USER_TAB_COLS for metadata information*/ SELECT column_id, column_name, hidden_column FROM user_tab_cols WHERE table_name = 'T_INV_COL' ORDER BY column_id / COLUMN_ID COLUMN_NAME HID ---------- ------------ --- 1 ID NO 2 DOB NO 3 NICKNAME NO NAME YES Temporal databases Temporal databases were released as a new feature in ANSI SQL:2011. The term temporal data can be understood as a piece of information that can be associated with a period within which the information is valid. Before the feature was included in Oracle Database 12c, data whose validity is linked with a time period had to be handled either by the application or using multiple predicates in the queries. Oracle 12c partially inherits the feature from the ANSI SQL:2011 standard to support the entities whose business validity can be bracketed with a time dimension. If you're relating a temporal database with the Oracle Database 11g Total Recall feature, you're wrong. The total recall feature records the transaction time of the data in the database to secure the transaction validity and not the functional validity. For example, an investment scheme is active between January to December. The date recorded in the database at the time of data loading is the transaction timestamp. [box type="info" align="" class="" width=""]Starting from Oracle 12c, the Total Recall feature has been rebranded as Flashback Data Archive and has been made available for all versions of Oracle Database.[/box] The valid time temporal can be enabled for a table by adding a time dimension using the PERIOD FOR clause on the date or timestamp columns of the table. The following script creates a table t_tmp_db with the valid time temporal: /*Create table with valid time temporal*/ CREATE TABLE t_tmp_db( id NUMBER, name VARCHAR2(30), policy_no VARCHAR2(50), policy_term number, pol_st_date date, pol_end_date date, PERIOD FOR pol_valid_time (pol_st_date, pol_end_date)) / Create some sample data in the table: /*Insert test data in the table*/ BEGIN INSERT INTO t_tmp_db VALUES (100, 'Packt', 'PACKT_POL1', 1, '01-JAN-2015', '31-DEC-2015'); INSERT INTO t_tmp_db VALUES (110, 'Packt', 'PACKT_POL2', 2, '01-JAN-2015', '30-JUN-2015'); INSERT INTO t_tmp_db VALUES (120, 'Packt', 'PACKT_POL3', 3, '01-JUL-2015', '31-DEC-2015'); COMMIT; END; / Let us set the current time period window using DBMS_FLASHBACK_ARCHIVE. Grant the EXECUTE privilege on the package to the scott user. /*Connect to sysdba to grant execute privilege to scott*/ conn / as sysdba GRANT EXECUTE ON dbms_flashback_archive to scott / Grant succeeded. /*Connect to scott*/ conn scott/tiger /*Set the valid time period as CURRENT*/ EXEC DBMS_FLASHBACK_ARCHIVE.ENABLE_AT_VALID_TIME('CURRENT'); PL/SQL procedure successfully completed. Setting the valid time period as CURRENT means that all the tables with a valid time temporal will only list the rows that are valid with respect to today's date. You can set the valid time to a particular date too. 
/*Query the table*/ SELECT * from t_tmp_db / ID POLICY_NO POL_ST_DATE POL_END_DATE --------- ---------- ----------------------- ------------------- 100 PACKT_POL1 01-JAN-15 31-DEC-15 110 PACKT_POL2 01-JAN-15 30-JUN-15 [box type="info" align="" class="" width=""]Due to dependency on the current date, the result may vary when the reader runs the preceding queries.[/box] The query lists only those policies that are active as of March 2015. Since the third policy starts in July 2015, it is currently not active. In-Database Archiving Oracle Database 12c introduces In-Database Archiving to archive the low priority data in a table. The inactive data remains in the database but is not visible to the application. You can mark old data for archival, which is not actively required in the application except for regulatory purposes. Although the archived data is not visible to the application, it is available for querying and manipulation. In addition, the archived data can be compressed to improve backup performance. A table can be enabled by specifying the ROW ARCHIVAL clause at the table level, which adds a hidden column ORA_ARCHIVE_STATE to the table structure. The column value must be updated to mark a row for archival. For example: /*Create a table with row archiving*/ CREATE TABLE t_row_arch( x number, y number, z number) ROW ARCHIVAL / When we query the table structure in the USER_TAB_COLS view, we find an additional hidden column, which Oracle implicitly adds to the table: /*Query the columns information from user_tab_cols view*/ SELECT column_id,column_name,data_type, hidden_column FROM user_tab_cols WHERE table_name='T_ROW_ARCH' / COLUMN_ID COLUMN_NAME DATA_TYPE HID ---------- ------------------ ---------- --- ORA_ARCHIVE_STATE VARCHAR2 YES 1 X NUMBER NO 2 Y NUMBER NO 3 Z NUMBER NO Let us create test data in the table: /Insert test data in the table*/ BEGIN INSERT INTO t_row_arch VALUES (10,20,30); INSERT INTO t_row_arch VALUES (11,22,33); INSERT INTO t_row_arch VALUES (21,32,43); INSERT INTO t_row_arch VALUES (51,82,13); commit; END; / For testing purpose, let us archive the rows in the table where X > 50 by updating the ora_archive_state column: /*Update ORA_ARCHIVE_STATE column in the table*/ UPDATE t_row_arch SET ora_archive_state = 1 WHERE x > 50 / COMMIT / By default, the session displays only the active records from an archival-enabled table: /*Query the table*/ SELECT * FROM t_row_arch / X Y Z ------ -------- ---------- 10 20 30 11 22 33 21 32 43 If you wish to display all the records, change the session setting: /*Change the session parameter to display the archived records*/ ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ALL / Session altered. /*Query the table*/ SELECT * FROM t_row_arch / X Y Z ---------- ---------- ---------- 10 20 30 11 22 33 21 32 43 51 82 13 Defining a PL/SQL subprogram in the SELECT query and PRAGMA UDF Oracle Database 12c includes two new features to enhance the performance of functions when called from SELECT statements. With Oracle 12c, a PL/SQL subprogram can be created inline with the SELECT query in the WITH clause declaration. The function created in the WITH clause subquery is not stored in the database schema and is available for use only in the current query. Since a procedure created in the WITH clause cannot be called from the SELECT query, it can be called in the function created in the declaration section. The feature can be very handy in read-only databases where the developers were not able to create PL/SQL wrappers. 
Oracle Database 12c adds the new PRAGMA UDF to create a standalone function with the same objective. Earlier, the SELECT queries could invoke a PL/SQL function, provided the function didn't change the database purity state. The query performance used to degrade because of the context switch from SQL to the PL/SQL engine (and vice versa) and the different memory representations of data type representation in the processing engines. In the following example, the function fun_with_plsql calculates the annual compensation of an employee's monthly salary: /*Create a function in WITH clause declaration*/ WITH FUNCTION fun_with_plsql (p_sal NUMBER) RETURN NUMBER IS BEGIN RETURN (p_sal * 12); END; SELECT ename, deptno, fun_with_plsql (sal) "annual_sal" FROM emp / ENAME DEPTNO annual_sal ---------- --------- ---------- SMITH 20 9600 ALLEN 30 19200 WARD 30 15000 JONES 20 35700 MARTIN 30 15000 BLAKE 30 34200 CLARK 10 29400 SCOTT 20 36000 KING 10 60000 TURNER 30 18000 ADAMS 20 13200 JAMES 30 11400 FORD 20 36000 MILLER 10 15600 14 rows selected. [box type="info" align="" class="" width=""]If the query containing the WITH clause declaration is not a top-level statement, then the top level statement must use the WITH_PLSQL hint. The hint will be used if INSERT, UPDATE, or DELETE statements are trying to use a SELECT with a WITHclause definition. Failure to include the hint results in an exception ORA-32034: unsupported use of WITH clause.[/box] A function can be created with the PRAGMA UDF to inform the compiler that the function is always called in a SELECT statement. Note that the standalone function created in the following code carries the same name as the one in the last example. The local WITH clause declaration takes precedence over the standalone function in the schema. /*Create a function with PRAGMA UDF*/ CREATE OR REPLACE FUNCTION fun_with_plsql (p_sal NUMBER) RETURN NUMBER is PRAGMA UDF; BEGIN RETURN (p_sal *12); END; / Since the objective of the feature is performance, let us go ahead with a case study to compare the performance when using a standalone function, a PRAGMA UDF function, and a WITHclause declared function. Test setup The exercise uses a test table with 1 million rows, loaded with random data. /*Create a table for performance test study*/ CREATE TABLE t_fun_plsql (id number, str varchar2(30)) / /*Generate and load random data in the table*/ INSERT /*+APPEND*/ INTO t_fun_plsql SELECT ROWNUM, DBMS_RANDOM.STRING('X', 20) FROM dual CONNECT BY LEVEL <= 1000000 / COMMIT / Case 1: Create a PL/SQL standalone function as it used to be until Oracle Database 12c. The function counts the numbers in the str column of the table. /*Create a standalone function without Oracle 12c enhancements*/ CREATE OR REPLACE FUNCTION f_count_num (p_str VARCHAR2) RETURN PLS_INTEGER IS BEGIN RETURN (REGEXP_COUNT(p_str,'d')); END; / The PL/SQL block measures the elapsed and CPU time when working with a pre-Oracle 12c standalone function. These numbers will serve as the baseline for our case study. 
/*Set server output on to display messages*/ SET SERVEROUTPUT ON /*Anonymous block to measure performance of a standalone function*/ DECLARE l_el_time PLS_INTEGER; l_cpu_time PLS_INTEGER; CURSOR C1 IS SELECT f_count_num (str) FROM t_fun_plsql; TYPE t_tab_rec IS TABLE OF PLS_INTEGER; l_tab t_tab_rec; BEGIN l_el_time := DBMS_UTILITY.GET_TIME (); l_cpu_time := DBMS_UTILITY.GET_CPU_TIME (); OPEN c1; FETCH c1 BULK COLLECT INTO l_tab; CLOSE c1; DBMS_OUTPUT.PUT_LINE ('Case 1: Performance of a standalone function'); DBMS_OUTPUT.PUT_LINE ('Total elapsed time:'||to_char(DBMS_UTILITY.GET_TIME () - l_el_time)); DBMS_OUTPUT.PUT_LINE ('Total CPU time:'||to_char(DBMS_UTILITY.GET_CPU_TIME () - l_cpu_time)); END; / Performance of a standalone function: Total elapsed time:1559 Total CPU time:1366 PL/SQL procedure successfully completed. Case 2: Create a PL/SQL function using PRAGMA UDF to count the numbers in the str column. /*Create the function with PRAGMA UDF*/ CREATE OR REPLACE FUNCTION f_count_num_pragma (p_str VARCHAR2) RETURN PLS_INTEGER IS PRAGMA UDF; BEGIN RETURN (REGEXP_COUNT(p_str,'d')); END; / Let us now check the performance of the PRAGMA UDF function using the following PL/SQL block. /*Set server output on to display messages*/ SET SERVEROUTPUT ON /*Anonymous block to measure performance of a PRAGMA UDF function*/ DECLARE l_el_time PLS_INTEGER; l_cpu_time PLS_INTEGER; CURSOR C1 IS SELECT f_count_num_pragma (str) FROM t_fun_plsql; TYPE t_tab_rec IS TABLE OF PLS_INTEGER; l_tab t_tab_rec; BEGIN l_el_time := DBMS_UTILITY.GET_TIME (); l_cpu_time := DBMS_UTILITY.GET_CPU_TIME (); OPEN c1; FETCH c1 BULK COLLECT INTO l_tab; CLOSE c1; DBMS_OUTPUT.PUT_LINE ('Case 2: Performance of a PRAGMA UDF function'); DBMS_OUTPUT.PUT_LINE ('Total elapsed time:'||to_char(DBMS_UTILITY.GET_TIME () - l_el_time)); DBMS_OUTPUT.PUT_LINE ('Total CPU time:'||to_char(DBMS_UTILITY.GET_CPU_TIME () - l_cpu_time)); END; / Performance of a PRAGMA UDF function: Total elapsed time:664 Total CPU time:582 PL/SQL procedure successfully completed. Case 3: The following PL/SQL block dynamically executes the function in the WITH clause subquery. Note that, unlike other SELECT statements, a SELECT query with a WITH clause declaration cannot be executed statically in the body of a PL/SQL block. /*Set server output on to display messages*/ SET SERVEROUTPUT ON /*Anonymous block to measure performance of inline function*/ DECLARE l_el_time PLS_INTEGER; l_cpu_time PLS_INTEGER; l_sql VARCHAR2(32767); c1 sys_refcursor; TYPE t_tab_rec IS TABLE OF PLS_INTEGER; l_tab t_tab_rec; BEGIN l_el_time := DBMS_UTILITY.get_time; l_cpu_time := DBMS_UTILITY.get_cpu_time; l_sql := 'WITH FUNCTION f_count_num_with (p_str VARCHAR2) RETURN NUMBER IS BEGIN RETURN (REGEXP_COUNT(p_str,'''||''||'d'||''')); END; SELECT f_count_num_with(str) FROM t_fun_plsql'; OPEN c1 FOR l_sql; FETCH c1 bulk collect INTO l_tab; CLOSE c1; DBMS_OUTPUT.PUT_LINE ('Case 3: Performance of an inline function'); DBMS_OUTPUT.PUT_LINE ('Total elapsed time:'||to_char(DBMS_UTILITY.GET_TIME () - l_el_time)); DBMS_OUTPUT.PUT_LINE ('Total CPU time:'||to_char(DBMS_UTILITY.GET_CPU_TIME () - l_cpu_time)); END; / Performance of an inline function: Total elapsed time:830 Total CPU time:718 PL/SQL procedure successfully completed. Comparative analysis Comparing the results from the preceding three cases, it's clear that the Oracle 12c flavor of PL/SQL functions out-performs the pre-12c standalone function by a high margin. 
From the following matrix, it is apparent that the usage of the PRAGMA UDF or WITH clause declaration enhances the code performance by (roughly) a factor of 2.

Case description | Elapsed time | CPU time | Performance gain factor (by CPU time)
Standalone PL/SQL function in pre-Oracle 12c database | 1559 | 1366 | 1x
Standalone PL/SQL PRAGMA UDF function in Oracle 12c | 664 | 582 | 2.3x
Function created in WITH clause declaration in Oracle 12c | 830 | 718 | 1.9x

[box type="info" align="" class="" width=""]Note that the numbers may slightly differ in the reader's testing environment but you should be able to draw the same conclusion by comparing them.[/box]

The PL/SQL program unit whitelisting

Prior to Oracle 12c, a standalone or packaged PL/SQL unit could be invoked by all other programs in the session's schema. Oracle Database 12c allows users to prevent unauthorized access to PL/SQL program units. You can now specify the list of whitelisted program units that can invoke a particular program. The PL/SQL program header or the package specification can specify the list of allowed program units in the ACCESSIBLE BY clause. All other program units, including cross-schema references (even SYS owned objects), trying to access a protected subprogram will receive an exception, PLS-00904: insufficient privilege to access object [object name].

The feature can be very useful in an extremely sensitive development environment. Suppose a package PKG_FIN_PROC contains the sensitive implementation routines for financial institutions, and its packaged subprograms are called by another PL/SQL package, PKG_FIN_INTERNALS. The API layer exposes a fixed list of programs through a public API called PKG_CLIENT_ACCESS. In order to restrict access to the packaged routines in PKG_FIN_PROC, the users can build a safety net so as to allow access to only authorized programs.

The following PL/SQL package PKG_FIN_PROC contains two subprograms: P_FIN_QTR and P_FIN_ANN. The ACCESSIBLE BY clause includes PKG_FIN_INTERNALS, which means that all other program units, including anonymous PL/SQL blocks, are blocked from invoking PKG_FIN_PROC constructs.

/*Package with the accessible by clause*/
CREATE OR REPLACE PACKAGE pkg_fin_proc
ACCESSIBLE BY (PACKAGE pkg_fin_internals)
IS
PROCEDURE p_fin_qtr;
PROCEDURE p_fin_ann;
END;
/

[box type="info" align="" class="" width=""]The ACCESSIBLE BY clause can be specified for schema-level programs only.[/box]

Let's see what happens when we invoke the packaged subprogram from an anonymous PL/SQL block.

/*Invoke the packaged subprogram from the PL/SQL block*/
BEGIN
pkg_fin_proc.p_fin_qtr;
END;
/

pkg_fin_proc.p_fin_qtr;
*
ERROR at line 2:
ORA-06550: line 2, column 4:
PLS-00904: insufficient privilege to access object PKG_FIN_PROC
ORA-06550: line 2, column 4:
PL/SQL: Statement ignored

Well, the compiler throws an exception as invoking the whitelisted package from an anonymous block is not allowed. The ACCESSIBLE BY clause can be included in the header information of PL/SQL procedures and functions, packages, and object types.

Granting roles to PL/SQL program units

Before Oracle Database 12c, a PL/SQL unit created with the definer's rights (the default AUTHID) always executed with the definer's rights, whether or not the invoker had the required privileges. This can lead to an unfair situation where the invoking user may perform unwanted operations without needing the correct set of privileges.
Similarly for an invoker's right unit, if the invoking user possesses a higher set of privileges than the definer, he might end up performing unauthorized operations. Oracle Database 12c secures the definer's rights by allowing the defining user to grant complementary roles to individual PL/SQL subprograms and packages. From the security standpoint, the granting of roles to schema level subprograms provides granular control as the privileges of the invoker are validated at the time of execution. In the following example, we will create two users: U1 and U2. The user U1 creates a PL/SQL procedure P_INC_PRICE that adds a surcharge to the price of a product by a certain amount. U1 grants the execute privilege to user U2. Test setup Let's create two users and give them the required privileges. /*Create a user with a password*/ CREATE USER u1 IDENTIFIED BY u1 / User created. /*Grant connect privileges to the user*/ GRANT CONNECT, RESOURCE TO u1 / Grant succeeded. /*Create a user with a password*/ CREATE USER u2 IDENTIFIED BY u2 / User created. /*Grant connect privileges to the user*/ GRANT CONNECT, RESOURCE TO u2 / Grant succeeded. The user U1 contains the PRODUCTS table. Let's create and populate the table. /*Connect to U1*/ CONN u1/u1 /*Create the table PRODUCTS*/ CREATE TABLE products ( prod_id INTEGER, prod_name VARCHAR2(30), prod_cat VARCHAR2(30), price INTEGER ) / /*Insert the test data in the table*/ BEGIN DELETE FROM products; INSERT INTO products VALUES (101, 'Milk', 'Dairy', 20); INSERT INTO products VALUES (102, 'Cheese', 'Dairy', 50); INSERT INTO products VALUES (103, 'Butter', 'Dairy', 75); INSERT INTO products VALUES (104, 'Cream', 'Dairy', 80); INSERT INTO products VALUES (105, 'Curd', 'Dairy', 25); COMMIT; END; / The procedure p_inc_price is designed to increase the price of a product by a given amount. Note that the procedure is created with the definer's rights. /*Create the procedure with the definer's rights*/ CREATE OR REPLACE PROCEDURE p_inc_price (p_prod_id NUMBER, p_amt NUMBER) IS BEGIN UPDATE products SET price = price + p_amt WHERE prod_id = p_prod_id; END; / The user U1 grants execute privilege on p_inc_price to U2. /*Grant execute on the procedure to the user U2*/ GRANT EXECUTE ON p_inc_price TO U2 / The user U2 logs in and executes the procedure P_INC_PRICE to increase the price of Milk by 5 units. /*Connect to U2*/ CONN u2/u2 /*Invoke the procedure P_INC_PRICE in a PL/SQL block*/ BEGIN U1.P_INC_PRICE (101,5); COMMIT; END; / PL/SQL procedure successfully completed. The last code listing exposes a gray area. The user U2, though not authorized to view PRODUCTS data, manipulates its data with the definer's rights. We need a solution to the problem. The first step is to change the procedure from definer's rights to invoker's rights. /*Connect to U1*/ CONN u1/u1 /*Modify the privilege authentication for the procedure to invoker's rights*/ CREATE OR REPLACE PROCEDURE p_inc_price (p_prod_id NUMBER, p_amt NUMBER) AUTHID CURRENT_USER IS BEGIN UPDATE products SET price = price + p_amt WHERE prod_id = p_prod_id; END; / Now, if we execute the procedure from U2, it throws an exception because it couldn't find the PRODUCTS table in its schema. 
/*Connect to U2*/
CONN u2/u2

/*Invoke the procedure P_INC_PRICE in a PL/SQL block*/
BEGIN
U1.P_INC_PRICE (101,5);
COMMIT;
END;
/
BEGIN
*
ERROR at line 1:
ORA-00942: table or view does not exist
ORA-06512: at "U1.P_INC_PRICE", line 5
ORA-06512: at line 2

In a similar scenario in the past, the database administrators could have easily granted SELECT or UPDATE privileges to U2, which is not an optimal solution from the security standpoint. Oracle 12c allows the users to create program units with invoker's rights but grant the required roles to the program units, and not to the users. So, an invoker's rights unit executes with the invoker's privileges plus the roles granted to the PL/SQL program. Let's check out the steps to create a role and assign it to the procedure.

SYSDBA creates the role and assigns it to the user U1. Using the ADMIN or DELEGATE option with the grant enables the user to grant the role to other entities.

/*Connect to SYSDBA*/
CONN / as sysdba

/*Create a role*/
CREATE ROLE prod_role
/

/*Grant role to user U1 with delegate option*/
GRANT prod_role TO U1 WITH DELEGATE OPTION
/

Now, user U1 assigns the required set of privileges to the role. The role is then assigned to the required subprogram. Note that only roles, and not individual privileges, can be assigned to schema-level subprograms.

/*Connect to U1*/
CONN u1/u1

/*Grant SELECT and UPDATE privileges on PRODUCTS to the role*/
GRANT SELECT, UPDATE ON PRODUCTS TO prod_role
/

/*Grant role to the procedure*/
GRANT prod_role TO PROCEDURE p_inc_price
/

User U2 tries to execute the procedure again. The procedure is successfully executed, which means the price of "Milk" has been increased by 5 units.

/*Connect to U2*/
CONN u2/u2

/*Invoke the procedure P_INC_PRICE in a PL/SQL block*/
BEGIN
U1.P_INC_PRICE (101,5);
COMMIT;
END;
/
PL/SQL procedure successfully completed.

User U1 verifies the result with a SELECT query.

/*Connect to U1*/
CONN u1/u1

/*Query the table to verify the change*/
SELECT * FROM products
/
   PROD_ID PROD_NAME  PROD_CAT        PRICE
---------- ---------- ---------- ----------
       101 Milk       Dairy              25
       102 Cheese     Dairy              50
       103 Butter     Dairy              75
       104 Cream      Dairy              80
       105 Curd       Dairy              25

Miscellaneous PL/SQL enhancements

Besides the preceding key features, there are a lot of new features in Oracle 12c. The list of features is as follows:

- An invoker's rights function can be result-cached. Until Oracle 11g, only the definers' programs were allowed to cache their results. Oracle 12c adds the invoking user's identity to the result cache to make it independent of the definer.
- The compilation parameter PLSQL_DEBUG has been deprecated.
- Two conditional compilation inquiry directives, $$PLSQL_UNIT_OWNER and $$PLSQL_UNIT_TYPE, have been implemented.

Summary

This article covers the top rated and new features of Oracle 12c SQL and PL/SQL as well as some miscellaneous PL/SQL enhancements.

Further resources on this subject:
What is Oracle Public Cloud? [article]
Oracle APEX 4.2 reporting [article]
Oracle E-Business Suite with Desktop Integration [article]

How to do Machine Learning with Python

Packt
12 Aug 2015
5 min read
In this article, Sunila Gollapudi, author of Practical Machine Learning, introduces the key aspects of machine learning semantics and various toolkit options in Python. Machine learning has been around for many years now and all of us, at some point in time, have been consumers of machine learning technology. One of the most common examples is facial recognition software, which can identify if a digital photograph includes a particular person. Today, Facebook users can see automatic suggestions to tag their friends in their uploaded photos. Some cameras and software such as iPhoto also have this capability. What is learning? Let's spend some time understanding what the "learning" in machine learning means. We are referring to learning from some kind of observation or data to automatically carry out further actions. An intelligent system cannot be built without using learning to get there. The following are some questions that you’ll need to answer to define your learning problem: What do you want to learn? What is the required data and where does it come from? Is the complete data available in one shot? What is the goal of learning or why should there be learning at all? Before we plunge into understanding the internals of each learning type, let's quickly understand a simple predictive analytics process for building and validating models that solve a problem with maximum accuracy: Identify whether the raw dataset is validated or cleansed and is broken into training, testing, and evaluation datasets. Pick a model that best suits and has an error function that will be minimized over the training set. Make sure this model works on the testing set. Iterate this process with other machine learning algorithms and/or attributes until there is a reasonable performance on the test set. This result can now be used to apply for new inputs and predict the output. The following diagram depicts how learning can be applied to predict behavior: Key aspects of machine learning semantics The following concept map shows the key aspects of machine learning semantics: Python Python is one of the most highly adopted programming or scripting languages in the field of machine learning and data science. Python is known for its ease of learning, implementation, and maintenance. Python is highly portable and can run on Unix, Windows, and Mac platforms. With the availability of libraries such as Pydoop and SciPy, its relevance in the world of big data analytics has tremendously increased. Some of the key reasons for the popularity of Python in solving machine learning problems are as follows: Python is well suited for data analysis It is a versatile scripting language that can be used to write some basic, quick and dirty scripts to test some basic functions or can be used in real-time applications leveraging its full-featured toolkits Python comes with mature machine learning packages and can be used in a plug-and-play manner Toolkit options in Python Before we go deeper into what toolkit options we have in Python, let's first understand what toolkit option trade-offs should be considered before choosing one: What are my performance priorities? Do I need offline or real-time processing implementations? How transparent are the toolkits? Can I customize the library myself? What is the community status? How fast are bugs fixed and how is the community support and expert communication availability? There are three options in Python: Python external bindings. 
These are interfaces to popular packages in the market such as Matlab, R, Octave, and so on. This option will work well if you already have existing implementations in these frameworks. Python-based toolkits. There are a number of toolkits written in Python which come with a bunch of algorithms. Write your own logic/toolkit. Python has two core toolkits that are more like building blocks. Almost all the following specialized toolkits use these core ones: NumPy: Fast and efficient arrays built in Python SciPy: A bunch of algorithms for standard operations built on NumPy There are also C/C++ based implementations such as LIBLINEAR, LIBSVM, OpenCV, and others. Some of the most popular Python toolkits are as follows: nltk: The natural language toolkit. This focuses on natural language processing (NLP). mlpy: The machine learning algorithms toolkit that comes with support for some key machine learning algorithms such as classifications, regression, and clustering, among others. PyML: This toolkit focuses on support vector machine (SVM). PyBrain: This toolkit focuses on neural network and related functions. mdp-toolkit: The focus of this toolkit is on data processing and it supports scheduling and parallelizing the processing. scikit-learn: This is one of the most popular toolkits and has been highly adopted by data scientists in the recent past. It has support for supervised and unsupervised learning and some special support for feature selection and visualizations. There is a large team that is actively building this toolkit and is known for its excellent documentation. PyDoop: Python integration with the Hadoop platform. PyDoop and SciPy are heavily deployed in big data analytics. Find out how to apply python machine learning to your working environment with our 'Practical Machine Learning' book
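
To make the process described earlier more concrete, here is a minimal sketch of the split-train-test-predict workflow using scikit-learn, one of the toolkits listed above. It is an illustration only, not code from the original article: the iris dataset, the decision tree classifier, and the train_test_split/accuracy_score helpers are choices made here for the example, and older scikit-learn releases (before 0.18) exposed train_test_split under sklearn.cross_validation rather than sklearn.model_selection.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Step 1: obtain a (clean) dataset and break it into training and testing sets
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=0)

# Step 2: pick a model and minimize its error over the training set
model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)

# Step 3: make sure the model also works on the testing set
print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Step 4: iterate with other algorithms or features, then predict on new inputs
print(model.predict(X_test[:3]))

The same four-step skeleton applies to the other toolkits mentioned above; only the data loading and the model class change.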

Predicting Sports Winners with Decision Trees and pandas

Packt
12 Aug 2015
6 min read
In this article by Robert Craig Layton, author of Learning Data Mining with Python, we will look at predicting the winner of games of the National Basketball Association (NBA) using a different type of classification algorithm: decision trees.

Collecting the data

The data we will be using is the match history data for the NBA, for the 2013-2014 season. The Basketball-Reference.com website contains a significant number of resources and statistics collected from the NBA and other leagues. Perform the following steps to download the dataset:

Navigate to http://www.basketball-reference.com/leagues/NBA_2014_games.html in your web browser.
Click on the Export button next to the Regular Season heading.
Download the file to your data folder (and make a note of the path).

This will download a CSV file containing the results of 1,230 games in the regular season of the NBA. We will load the file with the pandas library, which is an incredibly useful library for manipulating data. Python also contains a built-in library called csv that supports reading and writing CSV files. We will use pandas instead as it provides more powerful functions to work with datasets.

For this article, you will need to install pandas. The easiest way to do that is to use pip3, which you may previously have used to install scikit-learn:

$ pip3 install pandas

Using pandas to load the dataset

We can load the dataset using the read_csv function in pandas as follows:

import pandas as pd
dataset = pd.read_csv(data_filename)

The result of this is a data frame, a data structure used by pandas. The pandas.read_csv function has parameters to fix some of the problems in the data, such as missing headings, which we can specify when loading the file:

dataset = pd.read_csv(data_filename, parse_dates=["Date"], skiprows=[0,])
dataset.columns = ["Date", "Score Type", "Visitor Team", "VisitorPts", "Home Team", "HomePts", "OT?", "Notes"]

We can now view a sample of the data frame:

dataset.ix[:5]

Extracting new features

We extract our classes, 1 for a home win, and 0 for a visitor win. We can specify this using the following code to extract those wins into a NumPy array:

dataset["HomeWin"] = dataset["VisitorPts"] < dataset["HomePts"]
y_true = dataset["HomeWin"].values

The first two new features we want to create indicate whether each of the two teams won their previous game. This would roughly approximate which team is currently playing well. We will compute this feature by iterating through the rows in order, and recording which team won. When we get to a new row, we look up whether the team won the last time:

from collections import defaultdict
won_last = defaultdict(int)

We can then iterate over all the rows and update the current row with the team's last result (win or loss):

for index, row in dataset.iterrows():
    home_team = row["Home Team"]
    visitor_team = row["Visitor Team"]
    row["HomeLastWin"] = won_last[home_team]
    row["VisitorLastWin"] = won_last[visitor_team]
    dataset.ix[index] = row

We then set our dictionary with each team's result (from this row) for the next time we see these teams. This happens inside the same loop:

    won_last[home_team] = row["HomeWin"]
    won_last[visitor_team] = not row["HomeWin"]

Decision trees

Decision trees are a class of classification algorithm. They resemble a flowchart made up of a sequence of nodes, where the values for a sample are used to make a decision on the next node to go to.
We can use the DecisionTreeClassifier class to create a decision tree:

from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier(random_state=14)

We now need to extract the dataset from our pandas data frame in order to use it with our scikit-learn classifier. We do this by specifying the columns we wish to use and using the values parameter of a view of the data frame:

X_previouswins = dataset[["HomeLastWin", "VisitorLastWin"]].values

Decision trees are estimators, and therefore, they have fit and predict methods. We can also use the cross_val_score method as before to get the average score:

scores = cross_val_score(clf, X_previouswins, y_true, scoring='accuracy')
print("Accuracy: {0:.1f}%".format(np.mean(scores) * 100))

This scores up to 56.1%; we are better off choosing randomly!

Predicting sports outcomes

We have a method for testing how accurate our models are using the cross_val_score method that allows us to try new features. For the first feature, we will create a feature that tells us whether the home team is generally better than the visitors by seeing whether they ranked higher in the previous season. To obtain the data, perform the following steps:

Head to http://www.basketball-reference.com/leagues/NBA_2013_standings.html
Scroll down to Expanded Standings. This gives us a single list for the entire league.
Click on the Export link to the right of this heading.
Save the download in your data folder.

In your IPython Notebook, enter the following into a new cell. You'll need to ensure that the file was saved into the location pointed to by the data_folder variable:

standings_filename = os.path.join(data_folder, "leagues_NBA_2013_standings_expanded-standings.csv")
standings = pd.read_csv(standings_filename, skiprows=[0,1])

We then iterate over the rows and compare the team's standings:

dataset["HomeTeamRanksHigher"] = 0
for index, row in dataset.iterrows():
    home_team = row["Home Team"]
    visitor_team = row["Visitor Team"]

Between 2013 and 2014, a team was renamed as follows (this check stays inside the loop):

    if home_team == "New Orleans Pelicans":
        home_team = "New Orleans Hornets"
    elif visitor_team == "New Orleans Pelicans":
        visitor_team = "New Orleans Hornets"

Now, we can get the rankings for each team. We then compare them and update the feature in the row:

    home_rank = standings[standings["Team"] == home_team]["Rk"].values[0]
    visitor_rank = standings[standings["Team"] == visitor_team]["Rk"].values[0]
    row["HomeTeamRanksHigher"] = int(home_rank > visitor_rank)
    dataset.ix[index] = row

Next, we use the cross_val_score function to test the result. First, we extract the dataset as before:

X_homehigher = dataset[["HomeLastWin", "VisitorLastWin", "HomeTeamRanksHigher"]].values

Then, we create a new DecisionTreeClassifier class and run the evaluation:

clf = DecisionTreeClassifier(random_state=14)
scores = cross_val_score(clf, X_homehigher, y_true, scoring='accuracy')
print("Accuracy: {0:.1f}%".format(np.mean(scores) * 100))

This now scores up to 60.3%, even better than our previous result.

Unleash the full power of Python machine learning with our 'Learning Data Mining with Python' book.
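
The snippets in this article assume a few imports and variables that are easy to lose track of, so here is one possible preamble that ties them together. This is an addition for convenience rather than part of the original text: the folder and file names are placeholders you will need to adjust, cross_val_score moved from sklearn.cross_validation (current at the time of writing) to sklearn.model_selection in later scikit-learn releases, and the DataFrame.ix indexer used above has since been removed from pandas in favour of .loc and .iloc.

import os
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score  # older releases: from sklearn.cross_validation import cross_val_score

# Placeholder paths: point these at wherever you saved the downloaded CSV files
data_folder = os.path.join(os.path.expanduser("~"), "data", "basketball")
data_filename = os.path.join(data_folder, "nba_2014_games.csv")

With a preamble along these lines in place, the feature extraction and cross_val_score calls above can be run top to bottom in a single notebook.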

Divide and Conquer – Classification Using Decision Trees and Rules

Packt
11 Aug 2015
17 min read
In this article by Brett Lantz, author of the book Machine Learning with R, Second Edition, we will get a basic understanding about decision trees and rule learners, including the C5.0 decision tree algorithm. This algorithm will cover mechanisms such as choosing the best split and pruning the decision tree. While deciding between several job offers with various levels of pay and benefits, many people begin by making lists of pros and cons, and eliminate options based on simple rules. For instance, ''if I have to commute for more than an hour, I will be unhappy.'' Or, ''if I make less than $50k, I won't be able to support my family.'' In this way, the complex and difficult decision of predicting one's future happiness can be reduced to a series of simple decisions. This article covers decision trees and rule learners—two machine learning methods that also make complex decisions from sets of simple choices. These methods then present their knowledge in the form of logical structures that can be understood with no statistical knowledge. This aspect makes these models particularly useful for business strategy and process improvement. By the end of this article, you will learn: How trees and rules "greedily" partition data into interesting segments The most common decision tree and classification rule learners, including the C5.0, 1R, and RIPPER algorithms We will begin by examining decision trees, followed by a look at classification rules. (For more resources related to this topic, see here.) Understanding decision trees Decision tree learners are powerful classifiers, which utilize a tree structure to model the relationships among the features and the potential outcomes. As illustrated in the following figure, this structure earned its name due to the fact that it mirrors how a literal tree begins at a wide trunk, which if followed upward, splits into narrower and narrower branches. In much the same way, a decision tree classifier uses a structure of branching decisions, which channel examples into a final predicted class value. To better understand how this works in practice, let's consider the following tree, which predicts whether a job offer should be accepted. A job offer to be considered begins at the root node, where it is then passed through decision nodes that require choices to be made based on the attributes of the job. These choices split the data across branches that indicate potential outcomes of a decision, depicted here as yes or no outcomes, though in some cases there may be more than two possibilities. In the case a final decision can be made, the tree is terminated by leaf nodes (also known as terminal nodes) that denote the action to be taken as the result of the series of decisions. In the case of a predictive model, the leaf nodes provide the expected result given the series of events in the tree. A great benefit of decision tree algorithms is that the flowchart-like tree structure is not necessarily exclusively for the learner's internal use. After the model is created, many decision tree algorithms output the resulting structure in a human-readable format. This provides tremendous insight into how and why the model works or doesn't work well for a particular task. This also makes decision trees particularly appropriate for applications in which the classification mechanism needs to be transparent for legal reasons, or in case the results need to be shared with others in order to inform future business practices. 
With this in mind, some potential uses include: Credit scoring models in which the criteria that causes an applicant to be rejected need to be clearly documented and free from bias Marketing studies of customer behavior such as satisfaction or churn, which will be shared with management or advertising agencies Diagnosis of medical conditions based on laboratory measurements, symptoms, or the rate of disease progression Although the previous applications illustrate the value of trees in informing decision processes, this is not to suggest that their utility ends here. In fact, decision trees are perhaps the single most widely used machine learning technique, and can be applied to model almost any type of data—often with excellent out-of-the-box applications. This said, in spite of their wide applicability, it is worth noting some scenarios where trees may not be an ideal fit. One such case might be a task where the data has a large number of nominal features with many levels or it has a large number of numeric features. These cases may result in a very large number of decisions and an overly complex tree. They may also contribute to the tendency of decision trees to overfit data, though as we will soon see, even this weakness can be overcome by adjusting some simple parameters. Divide and conquer Decision trees are built using a heuristic called recursive partitioning. This approach is also commonly known as divide and conquer because it splits the data into subsets, which are then split repeatedly into even smaller subsets, and so on and so forth until the process stops when the algorithm determines the data within the subsets are sufficiently homogenous, or another stopping criterion has been met. To see how splitting a dataset can create a decision tree, imagine a bare root node that will grow into a mature tree. At first, the root node represents the entire dataset, since no splitting has transpired. Next, the decision tree algorithm must choose a feature to split upon; ideally, it chooses the feature most predictive of the target class. The examples are then partitioned into groups according to the distinct values of this feature, and the first set of tree branches are formed. Working down each branch, the algorithm continues to divide and conquer the data, choosing the best candidate feature each time to create another decision node, until a stopping criterion is reached. Divide and conquer might stop at a node in a case that: All (or nearly all) of the examples at the node have the same class There are no remaining features to distinguish among the examples The tree has grown to a predefined size limit To illustrate the tree building process, let's consider a simple example. Imagine that you work for a Hollywood studio, where your role is to decide whether the studio should move forward with producing the screenplays pitched by promising new authors. After returning from a vacation, your desk is piled high with proposals. Without the time to read each proposal cover-to-cover, you decide to develop a decision tree algorithm to predict whether a potential movie would fall into one of three categories: Critical Success, Mainstream Hit, or Box Office Bust. To build the decision tree, you turn to the studio archives to examine the factors leading to the success and failure of the company's 30 most recent releases. You quickly notice a relationship between the film's estimated shooting budget, the number of A-list celebrities lined up for starring roles, and the level of success. 
Excited about this finding, you produce a scatterplot to illustrate the pattern: Using the divide and conquer strategy, we can build a simple decision tree from this data. First, to create the tree's root node, we split the feature indicating the number of celebrities, partitioning the movies into groups with and without a significant number of A-list stars: Next, among the group of movies with a larger number of celebrities, we can make another split between movies with and without a high budget: At this point, we have partitioned the data into three groups. The group at the top-left corner of the diagram is composed entirely of critically acclaimed films. This group is distinguished by a high number of celebrities and a relatively low budget. At the top-right corner, majority of movies are box office hits with high budgets and a large number of celebrities. The final group, which has little star power but budgets ranging from small to large, contains the flops. If we wanted, we could continue to divide and conquer the data by splitting it based on the increasingly specific ranges of budget and celebrity count, until each of the currently misclassified values resides in its own tiny partition, and is correctly classified. However, it is not advisable to overfit a decision tree in this way. Though there is nothing to stop us from splitting the data indefinitely, overly specific decisions do not always generalize more broadly. We'll avoid the problem of overfitting by stopping the algorithm here, since more than 80 percent of the examples in each group are from a single class. This forms the basis of our stopping criterion. You might have noticed that diagonal lines might have split the data even more cleanly. This is one limitation of the decision tree's knowledge representation, which uses axis-parallel splits. The fact that each split considers one feature at a time prevents the decision tree from forming more complex decision boundaries. For example, a diagonal line could be created by a decision that asks, "is the number of celebrities is greater than the estimated budget?" If so, then "it will be a critical success." Our model for predicting the future success of movies can be represented in a simple tree, as shown in the following diagram. To evaluate a script, follow the branches through each decision until the script's success or failure has been predicted. In no time, you will be able to identify the most promising options among the backlog of scripts and get back to more important work, such as writing an Academy Awards acceptance speech. Since real-world data contains more than two features, decision trees quickly become far more complex than this, with many more nodes, branches, and leaves. In the next section, you will learn about a popular algorithm to build decision tree models automatically. The C5.0 decision tree algorithm There are numerous implementations of decision trees, but one of the most well-known implementations is the C5.0 algorithm. This algorithm was developed by computer scientist J. Ross Quinlan as an improved version of his prior algorithm, C4.5, which itself is an improvement over his Iterative Dichotomiser 3 (ID3) algorithm. Although Quinlan markets C5.0 to commercial clients (see http://www.rulequest.com/ for details), the source code for a single-threaded version of the algorithm was made publically available, and it has therefore been incorporated into programs such as R. 
To further confuse matters, a popular Java-based open source alternative to C4.5, titled J48, is included in R's RWeka package. Because the differences among C5.0, C4.5, and J48 are minor, the principles in this article will apply to any of these three methods, and the algorithms should be considered synonymous.

The C5.0 algorithm has become the industry standard to produce decision trees, because it does well for most types of problems directly out of the box. Compared to other advanced machine learning models, the decision trees built by C5.0 generally perform nearly as well, but are much easier to understand and deploy. Additionally, as shown in the following summary, the algorithm's weaknesses are relatively minor and can be largely avoided:

Strengths:
- An all-purpose classifier that does well on most problems
- Highly automatic learning process, which can handle numeric or nominal features, as well as missing data
- Excludes unimportant features
- Can be used on both small and large datasets
- Results in a model that can be interpreted without a mathematical background (for relatively small trees)
- More efficient than other complex models

Weaknesses:
- Decision tree models are often biased toward splits on features having a large number of levels
- It is easy to overfit or underfit the model
- Can have trouble modeling some relationships due to reliance on axis-parallel splits
- Small changes in the training data can result in large changes to decision logic
- Large trees can be difficult to interpret and the decisions they make may seem counterintuitive

To keep things simple, our earlier decision tree example ignored the mathematics involved in how a machine would employ a divide and conquer strategy. Let's explore this in more detail to examine how this heuristic works in practice.

Choosing the best split

The first challenge that a decision tree will face is to identify which feature to split upon. In the previous example, we looked for a way to split the data such that the resulting partitions contained examples primarily of a single class. The degree to which a subset of examples contains only a single class is known as purity, and any subset composed of only a single class is called pure.

There are various measurements of purity that can be used to identify the best decision tree splitting candidate. C5.0 uses entropy, a concept borrowed from information theory that quantifies the randomness, or disorder, within a set of class values. Sets with high entropy are very diverse and provide little information about other items that may also belong in the set, as there is no apparent commonality. The decision tree hopes to find splits that reduce entropy, ultimately increasing homogeneity within the groups.

Typically, entropy is measured in bits. If there are only two possible classes, entropy values can range from 0 to 1. For n classes, entropy ranges from 0 to log2(n). In each case, the minimum value indicates that the sample is completely homogenous, while the maximum value indicates that the data are as diverse as possible, and no group has even a small plurality. In mathematical notation, entropy is specified as follows:

Entropy(S) = - sum over i = 1 to c of ( p_i * log2(p_i) )

In this formula, for a given segment of data (S), the term c refers to the number of class levels and p_i refers to the proportion of values falling into class level i. For example, suppose we have a partition of data with two classes: red (60 percent) and white (40 percent).
We can calculate the entropy as follows: > -0.60 * log2(0.60) - 0.40 * log2(0.40) [1] 0.9709506 We can examine the entropy for all the possible two-class arrangements. If we know that the proportion of examples in one class is x, then the proportion in the other class is (1 – x). Using the curve() function, we can then plot the entropy for all the possible values of x: > curve(-x * log2(x) - (1 - x) * log2(1 - x),        col = "red", xlab = "x", ylab = "Entropy", lwd = 4) This results in the following figure: As illustrated by the peak in entropy at x = 0.50, a 50-50 split results in maximum entropy. As one class increasingly dominates the other, the entropy reduces to zero. To use entropy to determine the optimal feature to split upon, the algorithm calculates the change in homogeneity that would result from a split on each possible feature, which is a measure known as information gain. The information gain for a feature F is calculated as the difference between the entropy in the segment before the split (S1) and the partitions resulting from the split (S2): One complication is that after a split, the data is divided into more than one partition. Therefore, the function to calculate Entropy(S2) needs to consider the total entropy across all of the partitions. It does this by weighing each partition's entropy by the proportion of records falling into the partition. This can be stated in a formula as: In simple terms, the total entropy resulting from a split is the sum of the entropy of each of the n partitions weighted by the proportion of examples falling in the partition (wi). The higher the information gain, the better a feature is at creating homogeneous groups after a split on this feature. If the information gain is zero, there is no reduction in entropy for splitting on this feature. On the other hand, the maximum information gain is equal to the entropy prior to the split. This would imply that the entropy after the split is zero, which means that the split results in completely homogeneous groups. The previous formulae assume nominal features, but decision trees use information gain for splitting on numeric features as well. To do so, a common practice is to test various splits that divide the values into groups greater than or less than a numeric threshold. This reduces the numeric feature into a two-level categorical feature that allows information gain to be calculated as usual. The numeric cut point yielding the largest information gain is chosen for the split. Though it is used by C5.0, information gain is not the only splitting criterion that can be used to build decision trees. Other commonly used criteria are Gini index, Chi-Squared statistic, and gain ratio. For a review of these (and many more) criteria, refer to Mingers J. An Empirical Comparison of Selection Measures for Decision-Tree Induction. Machine Learning. 1989; 3:319-342. Pruning the decision tree A decision tree can continue to grow indefinitely, choosing splitting features and dividing the data into smaller and smaller partitions until each example is perfectly classified or the algorithm runs out of features to split on. However, if the tree grows overly large, many of the decisions it makes will be overly specific and the model will be overfitted to the training data. The process of pruning a decision tree involves reducing its size such that it generalizes better to unseen data. 
One solution to this problem is to stop the tree from growing once it reaches a certain number of decisions or when the decision nodes contain only a small number of examples. This is called early stopping or pre-pruning the decision tree. As the tree avoids doing needless work, this is an appealing strategy. However, one downside to this approach is that there is no way to know whether the tree will miss subtle, but important patterns that it would have learned had it grown to a larger size. An alternative, called post-pruning, involves growing a tree that is intentionally too large and pruning leaf nodes to reduce the size of the tree to a more appropriate level. This is often a more effective approach than pre-pruning, because it is quite difficult to determine the optimal depth of a decision tree without growing it first. Pruning the tree later on allows the algorithm to be certain that all the important data structures were discovered. The implementation details of pruning operations are very technical and beyond the scope of this article. For a comparison of some of the available methods, see Esposito F, Malerba D, Semeraro G. A Comparative Analysis of Methods for Pruning Decision Trees. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1997;19: 476-491. One of the benefits of the C5.0 algorithm is that it is opinionated about pruning—it takes care of many decisions automatically using fairly reasonable defaults. Its overall strategy is to post-prune the tree. It first grows a large tree that overfits the training data. Later, the nodes and branches that have little effect on the classification errors are removed. In some cases, entire branches are moved further up the tree or replaced by simpler decisions. These processes of grafting branches are known as subtree raising and subtree replacement, respectively. Balancing overfitting and underfitting a decision tree is a bit of an art, but if model accuracy is vital, it may be worth investing some time with various pruning options to see if it improves the performance on test data. As you will soon see, one of the strengths of the C5.0 algorithm is that it is very easy to adjust the training options. Summary This article covered two classification methods that use so-called "greedy" algorithms to partition the data according to feature values. Decision trees use a divide and conquer strategy to create flowchart-like structures, while rule learners separate and conquer data to identify logical if-else rules. Both methods produce models that can be interpreted without a statistical background. One popular and highly configurable decision tree algorithm is C5.0. We used the C5.0 algorithm to create a tree to predict whether a loan applicant will default. This article merely scratched the surface of how trees and rules can be used. Resources for Article: Further resources on this subject: Introduction to S4 Classes [article] First steps with R [article] Supervised learning [article]

Matrix and Pixel Manipulation along with Handling Files

Packt
11 Aug 2015
14 min read
In this article, by Daniel Lélis Baggio, author of the book OpenCV 3.0 Computer Vision with Java, you will learn to perform basic operations required in computer vision, such as dealing with matrices, pixels, and opening files for prototype applications. In this article, the following topics will be covered: Basic matrix manipulation Pixel manipulation How to load and display images from files (For more resources related to this topic, see here.) Basic matrix manipulation From a computer vision background, we can see an image as a matrix of numerical values, which represents its pixels. For a gray-level image, we usually assign values ranging from 0 (black) to 255 (white) and the numbers in between show a mixture of both. These are generally 8-bit images. So, each element of the matrix refers to each pixel on the gray-level image, the number of columns refers to the image width, as well as the number of rows refers to the image's height. In order to represent a color image, we usually adopt each pixel as a combination of three basic colors: red, green, and blue. So, each pixel in the matrix is represented by a triplet of colors. It is important to observe that with 8 bits, we get 2 to the power of eight (28), which is 256. So, we can represent the range from 0 to 255, which includes, respectively the values used for black and white levels in 8-bit grayscale images. Besides this, we can also represent these levels as floating points and use 0.0 for black and 1.0 for white. OpenCV has a variety of ways to represent images, so you are able to customize the intensity level through the number of bits considering whether one wants signed, unsigned, or floating point data types, as well as the number of channels. OpenCV's convention is seen through the following expression: CV_<bit_depth>{U|S|F}C(<number_of_channels>) Here, U stands for unsigned, S for signed, and F stands for floating point. For instance, if an 8-bit unsigned single-channel image is required, the data type representation would be CV_8UC1, while a colored image represented by 32-bit floating point numbers would have the data type defined as CV_32FC3. If the number of channels is omitted, it evaluates to 1. We can see the ranges according to each bit depth and data type in the following list: CV_8U: These are the 8-bit unsigned integers that range from 0 to 255 CV_8S: These are the 8-bit signed integers that range from -128 to 127 CV_16U: These are the 16-bit unsigned integers that range from 0 to 65,535 CV_16S: These are the 16-bit signed integers that range from -32,768 to 32,767 CV_32S: These are the 32-bit signed integers that range from -2,147,483,648 to 2,147,483,647 CV_32F: These are the 32-bit floating-point numbers that range from -FLT_MAX to FLT_MAX and include INF and NAN values CV_64F: These are the 64-bit floating-point numbers that range from -DBL_MAX to DBL_MAX and include INF and NAN values You will generally start the project from loading an image, but it is important to know how to deal with these values. Make sure you import org.opencv.core.CvType and org.opencv.core.Mat. Several constructors are available for matrices as well, for instance: Mat image2 = new Mat(480,640,CvType.CV_8UC3); Mat image3 = new Mat(new Size(640,480), CvType.CV_8UC3); Both of the preceding constructors will construct a matrix suitable to fit an image with 640 pixels of width and 480 pixels of height. Note that width is to columns as height is to rows. 
Also pay attention to the constructor with the Size parameter, which expects the width and height order. In case you want to check some of the matrix properties, the methods rows(), cols(), and elemSize() are available: System.out.println(image2 + "rows " + image2.rows() + " cols " + image2.cols() + " elementsize " + image2.elemSize()); The output of the preceding line is: Mat [ 480*640*CV_8UC3, isCont=true, isSubmat=false, nativeObj=0xceeec70, dataAddr=0xeb50090 ]rows 480 cols 640 elementsize 3 The isCont property tells us whether this matrix uses extra padding when representing the image, so that it can be hardware-accelerated in some platforms; however, we won't cover it in detail right now. The isSubmat property refers to fact whether this matrix was created from another matrix and also whether it refers to the data from another matrix. The nativeObj object refers to the native object address, which is a Java Native Interface (JNI) detail, while dataAddr points to an internal data address. The element size is measured in the number of bytes. Another matrix constructor is the one that passes a scalar to be filled as one of its elements. The syntax for this looks like the following: Mat image = new Mat(new Size(3,3), CvType.CV_8UC3, new Scalar(new double[]{128,3,4})); This constructor will initialize each element of the matrix with the triple {128, 3, 4}. A very useful way to print a matrix's contents is using the auxiliary method dump() from Mat. Its output will look similar to the following: [128, 3, 4, 128, 3, 4, 128, 3, 4; 128, 3, 4, 128, 3, 4, 128, 3, 4; 128, 3, 4, 128, 3, 4, 128, 3, 4] It is important to note that while creating the matrix with a specified size and type, it will also immediately allocate memory for its contents. Pixel manipulation Pixel manipulation is often required for one to access pixels in an image. There are several ways to do this and each one has its advantages and disadvantages. A straightforward method to do this is the put(row, col, value) method. For instance, in order to fill our preceding matrix with values {1, 2, 3}, we will use the following code: for(int i=0;i<image.rows();i++){ for(int j=0;j<image.cols();j++){    image.put(i, j, new byte[]{1,2,3}); } } Note that in the array of bytes {1, 2, 3}, for our matrix, 1 stands for the blue channel, 2 for the green, and 3 for the red channel, as OpenCV stores its matrix internally in the BGR (blue, green, and red) format. It is okay to access pixels this way for small matrices. The only problem is the overhead of JNI calls for big images. Remember that even a small 640 x 480 pixel image has 307,200 pixels and if we think about a colored image, it has 921,600 values in a matrix. Imagine that it might take around 50 ms to make an overloaded call for each of the 307,200 pixels. On the other hand, if we manipulate the whole matrix on the Java side and then copy it to the native side in a single call, it will take around 13 ms. If you want to manipulate the pixels on the Java side, perform the following steps: Allocate memory with the same size as the matrix in a byte array. Put the image contents into that array (optional). Manipulate the byte array contents. Make a single put call, copying the whole byte array to the matrix. 
A simple example that will iterate all image pixels and set the blue channel to zero, which means that we will set to zero every element whose modulo is 3 equals zero, that is {0, 3, 6, 9, …}, as shown in the following piece of code: public void filter(Mat image){ int totalBytes = (int)(image.total() * image.elemSize()); byte buffer[] = new byte[totalBytes]; image.get(0, 0,buffer); for(int i=0;i<totalBytes;i++){    if(i%3==0) buffer[i]=0; } image.put(0, 0, buffer); } First, we find out the number of bytes in the image by multiplying the total number of pixels (image.total) with the element size in bytes (image.elemenSize). Then, we build a byte array with that size. We use the get(row, col, byte[]) method to copy the matrix contents in our recently created byte array. Then, we iterate all bytes and check the condition that refers to the blue channel (i%3==0). Remember that OpenCV stores colors internally as {Blue, Green, Red}. We finally make another JNI call to image.put, which copies the whole byte array to OpenCV's native storage. An example of this filter can be seen in the following screenshot, which was uploaded by Mromanchenko, licensed under CC BY-SA 3.0: Be aware that Java does not have any unsigned byte data type, so be careful when working with it. The safe procedure is to cast it to an integer and use the And operator (&) with 0xff. A simple example of this would be int unsignedValue = myUnsignedByte & 0xff;. Now, unsignedValue can be checked in the range of 0 to 255. Loading and displaying images from files Most computer vision applications need to retrieve images from some where. In case you need to get them from files, OpenCV comes with several image file loaders. Unfortunately, some loaders depend on codecs that sometimes aren't shipped with the operating system, which might cause them not to load. From the documentation, we see that the following files are supported with some caveats: Windows bitmaps: *.bmp, *.dib JPEG files: *.jpeg, *.jpg, *.jpe JPEG 2000 files: *.jp2 Portable Network Graphics: *.png Portable image format: *.pbm, *.pgm, *.ppm Sun rasters: *.sr, *.ras TIFF files: *.tiff, *.tif Note that Windows bitmaps, the portable image format, and sun raster formats are supported by all platforms, but the other formats depend on a few details. In Microsoft Windows and Mac OS X, OpenCV can always read the jpeg, png, and tiff formats. In Linux, OpenCV will look for codecs supplied with the OS, as stated by the documentation, so remember to install the relevant packages (do not forget the development files, for example, "libjpeg-dev" in Debian* and Ubuntu*) to get the codec support or turn on the OPENCV_BUILD_3RDPARTY_LIBS flag in CMake, as pointed out in Imread's official documentation. The imread method is supplied to get access to images through files. Use Imgcodecs.imread (name of the file) and check whether dataAddr() from the read image is different from zero to make sure the image has been loaded correctly, that is, the filename has been typed correctly and its format is supported. A simple method to open a file could look like the one shown in the following code. 
Make sure you import org.opencv.imgcodecs.Imgcodecs and org.opencv.core.Mat: public Mat openFile(String fileName) throws Exception{ Mat newImage = Imgcodecs.imread(fileName);    if(newImage.dataAddr()==0){      throw new Exception ("Couldn't open file "+fileName);    } return newImage; } Displaying an image with Swing OpenCV developers are used to a simple cross-platform GUI by OpenCV, which was called as HighGUI, and a handy method called imshow. It constructs a window easily and displays an image within it, which is nice to create quick prototypes. As Java comes with a popular GUI API called Swing, we had better use it. Besides, no imshow method was available for Java until its 2.4.7.0 version was released. On the other hand, it is pretty simple to create such functionality. Let's break down the work in to two classes: App and ImageViewer. The App class will be responsible for loading the file, while ImageViewer will display it. The application's work is simple and will only need to use Imgcodecs's imread method, which is shown as follows: package org.javaopencvbook;   import java.io.File; … import org.opencv.imgcodecs.Imgcodecs;   public class App { static{ System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }   public static void main(String[] args) throws Exception { String filePath = "src/main/resources/images/cathedral.jpg"; Mat newImage = Imgcodecs.imread(filePath); if(newImage.dataAddr()==0){    System.out.println("Couldn't open file " + filePath); } else{    ImageViewer imageViewer = new ImageViewer();    imageViewer.show(newImage, "Loaded image"); } } } Note that the App class will only read an example image file in the Mat object and it will call the ImageViewer method to display it. Now, let's see how the ImageViewer class's show method works: package org.javaopencvbook.util;   import java.awt.BorderLayout; import java.awt.Dimension; import java.awt.Image; import java.awt.image.BufferedImage;   import javax.swing.ImageIcon; import javax.swing.JFrame; import javax.swing.JLabel; import javax.swing.JScrollPane; import javax.swing.UIManager; import javax.swing.UnsupportedLookAndFeelException; import javax.swing.WindowConstants;   import org.opencv.core.Mat; import org.opencv.imgproc.Imgproc;   public class ImageViewer { private JLabel imageView; public void show(Mat image){    show(image, ""); }   public void show(Mat image,String windowName){    setSystemLookAndFeel();       JFrame frame = createJFrame(windowName);               Image loadedImage = toBufferedImage(image);        imageView.setIcon(new ImageIcon(loadedImage));               frame.pack();        frame.setLocationRelativeTo(null);        frame.setVisible(true);    }   private JFrame createJFrame(String windowName) {    JFrame frame = new JFrame(windowName);    imageView = new JLabel();    final JScrollPane imageScrollPane = new JScrollPane(imageView);        imageScrollPane.setPreferredSize(new Dimension(640, 480));        frame.add(imageScrollPane, BorderLayout.CENTER);        frame.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);    return frame; }   private void setSystemLookAndFeel() {    try {      UIManager.setLookAndFeel (UIManager.getSystemLookAndFeelClassName());    } catch (ClassNotFoundException e) {      e.printStackTrace();    } catch (InstantiationException e) {      e.printStackTrace();    } catch (IllegalAccessException e) {      e.printStackTrace();    } catch (UnsupportedLookAndFeelException e) {      e.printStackTrace();    } }   public Image toBufferedImage(Mat matrix){    int type = 
BufferedImage.TYPE_BYTE_GRAY;    if ( matrix.channels() > 1 ) {      type = BufferedImage.TYPE_3BYTE_BGR;    }    int bufferSize = matrix.channels()*matrix.cols()*matrix.rows();    byte [] buffer = new byte[bufferSize];    matrix.get(0,0,buffer); // get all the pixels    BufferedImage image = new BufferedImage(matrix.cols(),matrix.rows(), type);    final byte[] targetPixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();    System.arraycopy(buffer, 0, targetPixels, 0, buffer.length);    return image; }   } Pay attention to the show and toBufferedImage methods. Show will try to set Swing's look and feel to the default native look, which is cosmetic. Then, it will create JFrame with JScrollPane and JLabel inside it. It will then call toBufferedImage, which will convert an OpenCV Mat object to a BufferedImage AWT. This conversion is made through the creation of a byte array that will store matrix contents. The appropriate size is allocated through the multiplication of the number of channels by the number of columns and rows. The matrix.get method puts all the elements into the byte array. Finally, the image's raster data buffer is accessed through the getDataBuffer() and getData() methods. It is then filled with a fast system call to the System.arraycopy method. The resulting image is then assigned to JLabel and then it is easily displayed. Note that this method expects a matrix that is either stored as one channel's unsigned 8-bit or three channel's unsigned 8-bit. In case your image is stored as a floating point, you should convert it using the following code before calling this method, supposing that the image you need to convert is a Mat object called originalImage: Mat byteImage = new Mat(); originalImage.convertTo(byteImage, CvType.CV_8UC3); This way, you can call toBufferedImage from your converted byteImage property. The image viewer can be easily installed in any Java OpenCV project and it will help you to show your images for debugging purposes. The output of this program can be seen in the next screenshot: Summary In this article, we learned dealing with matrices, pixels, and opening files for GUI prototype applications. Resources for Article: Further resources on this subject: Wrapping OpenCV [article] Making subtle color shifts with curves [article] Linking OpenCV to an iOS project [article]

Getting Started with Java Driver for MongoDB

Packt
11 Aug 2015
8 min read
In this article by Francesco Marchioni, author of the book MongoDB for Java Developers, you will learn how to perform all the create/read/update/delete (CRUD) operations that we have so far accomplished using the mongo shell. (For more resources related to this topic, see here.) Querying data We will now see how to use the Java API to query for your documents. Querying for documents with MongoDB resembles JDBC queries; the main difference is that the returned object is a com.mongodb.DBCursor instance, which is an iterator over the database result. In the following example, we are iterating over the javastuff collection that should contain the documents inserted so far: package com.packtpub.mongo.chapter2;   import com.mongodb.DB; import com.mongodb.DBCollection; import com.mongodb.DBCursor; import com.mongodb.DBObject; import com.mongodb.MongoClient;   public class SampleQuery{   private final static String HOST = "localhost"; private final static int PORT = 27017;   public static void main( String args[] ){    try{      MongoClient mongoClient = new MongoClient( HOST,PORT );           DB db = mongoClient.getDB( "sampledb" );           DBCollection coll = db.getCollection("javastuff");      DBCursor cursor = coll.find();      try {        while(cursor.hasNext()) {          DBObject object = cursor.next();          System.out.println(object);        }      }      finally {        cursor.close();      }       }    catch(Exception e) {      System.err.println( e.getClass().getName() + ": " +       e.getMessage() );    } } } Depending on the documents you have inserted, the output could be something like this: { "_id" : { "$oid" : "5513f8836c0df1301685315b"} , "name" : "john" , "age" : 35 , "kids" : [ { "name" : "mike"} , { "name" : "faye"}] , "info" : { "email" : "john@mail.com" , "phone" : "876-134-667"}} . . . . Restricting the search to the first document The find operator executed without any parameter returns the full cursor of a collection, pretty much like the SELECT * query in relational DB terms. If you are interested in reading just the first document in the collection, you could use the findOne() operation instead. This method directly returns a single document as a DBObject, instead of the DBCursor that the find() operation returns: DBObject myDoc = coll.findOne(); System.out.println(myDoc); Querying the number of documents in a collection Another typical construct that you probably know from SQL is the SELECT count(*) query that is useful to retrieve the number of records in a table. In MongoDB terms, you can get this value simply by invoking getCount on a DBCollection instance: DBCollection coll = db.getCollection("javastuff"); System.out.println(coll.getCount()); As an alternative, you could execute the count() method over the DBCursor object: DBCursor cursor = coll.find(); System.out.println(cursor.count());
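It is worth noting that getCount also accepts a query object, which counts only the matching documents, the equivalent of a SELECT count(*) ... WHERE statement in SQL. The following is a minimal sketch that reuses the coll handle and the imports from the previous examples; the filter value is just an example:

DBObject countFilter = new BasicDBObject("name", "john");
long matchingDocuments = coll.getCount(countFilter);
System.out.println(matchingDocuments);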
Eager fetching of data using DBCursor When find is executed and a DBCursor is returned, you have a pointer to a database document. This means that the documents are fetched into memory as you call the next() method on the DBCursor. On the other hand, you can eagerly load all the data into memory by executing the toArray() method, which returns a java.util.List structure: List list = collection.find( query ).toArray(); The problem with this approach is that you could potentially fill up the memory with lots of documents, which are eagerly loaded. You are therefore advised to include some operators such as skip() and limit() to control the amount of data to be loaded into memory: List list = collection.find( query ).skip( 100 ). limit( 10 ).toArray(); Just like you learned from the mongo shell, the skip operator can be used as an initial offset of your cursor, whilst the limit construct loads at most the first n occurrences from the cursor. Filtering through the records Typically, you will not need to fetch the whole set of documents in a collection. So, just like SQL uses WHERE conditions to filter records, in MongoDB you can restrict searches by creating a BasicDBObject and passing it to the find function as an argument. See the following example: DBCollection coll = db.getCollection("javastuff");   DBObject query = new BasicDBObject("name", "owen");   DBCursor cursor = coll.find(query);   try {    while(cursor.hasNext()) {        System.out.println(cursor.next());    } } finally {    cursor.close(); } In the preceding example, we retrieve the documents in the javastuff collection whose name key equals owen. That's the equivalent of an SQL query like this: SELECT * FROM javastuff WHERE name='owen' Building more complex searches As your collections keep growing, you will need to be more selective with your searches. For example, you could include multiple keys in your BasicDBObject that will eventually be passed to find. We can then apply the same functions in our queries. For example, here is how to find documents whose name does not equal ($ne) frank and whose age is greater than 10: DBCollection coll = db.getCollection("javastuff");   DBObject query = new      BasicDBObject("name", new BasicDBObject("$ne",        "frank")).append("age", new BasicDBObject("$gt", 10));   DBCursor cursor = coll.find(query);
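You can also chain modifiers on the returned DBCursor to order and trim the result set before iterating over it. The following sketch reuses the same coll handle; the descending sort on age and the limit of five documents are purely illustrative choices:

DBObject query = new BasicDBObject("age", new BasicDBObject("$gt", 10));
DBObject orderBy = new BasicDBObject("age", -1);
DBCursor cursor = coll.find(query).sort(orderBy).limit(5);
try {
    while (cursor.hasNext()) {
        System.out.println(cursor.next());
    }
} finally {
    cursor.close();
}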
Updating documents Having learned about create and read, we are half way through our CRUD track. The next operation you will learn is update. The DBCollection class contains an update method that can be used for this purpose. Let's say we have the following document: > db.javastuff.find({"name":"frank"}).pretty() { "_id" : ObjectId("55142c27627b27560bd365b1"), "name" : "frank", "age" : 31, "info" : { "email" : "frank@mail.com", "phone" : "222-111-444" } } Now we want to change the age value for this document by setting it to 23: DBCollection coll = db.getCollection("javastuff");   DBObject newDocument = new BasicDBObject(); newDocument.put("age", 23);   DBObject searchQuery = new BasicDBObject().append("name", "frank");   coll.update(searchQuery, newDocument); You might think that would do the trick, but wait! Let's have a look at our document using the mongo shell: > db.javastuff.find({"age":23}).pretty()   { "_id" : ObjectId("55142c27627b27560bd365b1"), "age" : 23 } As you can see, the update statement has replaced the original document with another one, including only the keys and values we have passed to the update. In most cases, this is not what we want to achieve. If we want to update a particular value, we have to use the $set update modifier. 
DBCollection coll = db.getCollection("javastuff");   BasicDBObject newDocument = new BasicDBObject(); newDocument.append("$set", new BasicDBObject().append("age",   23));   BasicDBObject searchQuery = new BasicDBObject().append("name", "frank");   coll.update(searchQuery, newDocument); So, supposing we restored the initial document with all the fields, this is the outcome of the update using the $set update modifier: > db.javastuff.find({"age":23}).pretty() { "_id" : ObjectId("5514326e627b383428c2ccd8"), "name" : "frank", "age" : 23, "info" : {    "email" : "frank@mail.com",    "phone" : "222-111-444" } } Please note that the DBCollection class overloads the update method with update(DBObject q, DBObject o, boolean upsert, boolean multi). The upsert flag determines whether the database should create the element if it does not exist. The multi flag causes the update to be applied to all matching objects. Deleting documents The operation to be used for deleting documents is remove. As for other operators, it includes several variants. In its simplest form, when executed on a single returned document, it will remove it: MongoClient mongoClient = new MongoClient("localhost", 27017);   DB db = mongoClient.getDB("sampledb"); DBCollection coll = db.getCollection("javastuff"); DBObject doc = coll.findOne();   coll.remove(doc); Most of the time you will need to filter the documents to be deleted. Here is how to delete the document whose name key is frank: DBObject document = new BasicDBObject(); document.put("name", "frank");   coll.remove(document); Deleting a set of documents Bulk deletion of documents can be achieved by including the keys in a List and building an $in modifier expression that uses this list. Let's see, for example, how to delete all records whose age ranges from 0 to 49: BasicDBObject deleteQuery = new BasicDBObject(); List<Integer> list = new ArrayList<Integer>();   for (int i=0;i<50;i++) list.add(i);   deleteQuery.put("age", new BasicDBObject("$in", list));   coll.remove(deleteQuery); Summary In this article, we covered how to perform, from Java, the same operations that are available in the mongo shell. Resources for Article: Further resources on this subject: MongoDB data modeling [article] Apache Solr and Big Data – integration with MongoDB [article] About MongoDB [article]

Neo4j – Modeling Bookings and Users

Packt
11 Aug 2015
14 min read
In this article, by Mahesh Lal, author of the book Neo4j Graph Data Modeling, we will explore how graphs can be used to solve problems that are dominantly solved using RDBMS, for example, bookings. We will discuss the following topics in this article: Modeling bookings in an RDBMS Modeling bookings in a graph Adding bookings to graphs Using Cypher to find bookings and journeys (For more resources related to this topic, see here.) Building a data model for booking flights We have a graph that allows people to search flights. At this point, a logical extension to the problem statement could be to allow users to book flights online after they decide the route on which they wish to travel. We were only concerned with flights and the cities. However, we need to tweak the model to include users, bookings, dates, and capacity of the flight in order to make bookings. Most teams choose to use an RDBMS for sensitive data such as user information and bookings. Let's understand how we can translate a model from an RDBMS to a graph. A flight booking generally has many moving parts. While it would be great to model all of the parts of a flight booking, a smaller subset would be more feasible, to demonstrate how to model data that is normally stored in a RDBMS. A flight booking will contain information about the user who booked it along with the date of booking. It's not uncommon to change multiple flights to get from one city to another. We can call these journey legs or journeys, and model them separately from the booking that has these journeys. It is also possible that the person booking the flight might be booking for some other people. Because of this, it is advisable to model passengers with their basic details separately from the user. We have intentionally skipped details such as payment and costs in order to keep the model simple. A simple model of the bookings ecosystem A booking generally contains information such as the date of booking, the user who booked it, and a date of commencement of the travel. A journey contains information about the flight code. Other information about the journey such as the departure and arrival time, and the source and destination cities can be evaluated on the basis of the flight which the journey is being undertaken. Both booking and journey will have their own specific IDs to identify them uniquely. Passenger information related to the booking must have the name of the passengers at the very least, but more commonly will have more information such as the age, gender, and e-mail. A rough model of the Booking, Journey, Passenger, and User looks like this: Figure 4.1: Bookings ecosystem Modeling bookings in an RDBMS To model data shown in Figure 4.1 in an RDBMS, we will have to create tables for bookings, journeys, passengers, and users. In the previous model, we have intentionally added booking_id to Journeys and user_id to Bookings. In an RDBMS, these will be used as foreign keys. We also need an additional table Bookings_Passengers_Relationships so that we can depict the many relationships between Bookings and Passengers. The multiple relationships between Bookings and Passengers help us to ensure that we capture passenger details for two purposes. The first is that a user can have a master list of travelers they have travelled with and the second use is to ensure that all the journeys taken by a person can be fetched when the passenger logs into their account or creates an account in the future. 
We are naming the foreign key references with a prefix fk_ in adherence to the popular convention. Figure 4.2: Modeling bookings in an RDBMS In an RDBMS, every record is a representation of an entity (or a relationship in case of relationship tables). In our case, we tried to represent a single booking record as a single block. This applies to all other entities in the system, such as the journeys, passengers, users, and flights. Each of the records has its own ID by which it can be uniquely identified. The properties starting with fk_ are foreign keys, which should be present in the tables to which the key points. In our model, passengers may or may not be the users of our application. Hence, we don't add a foreign key constraint to the Passengers table. To infer whether the passenger is one of the users or not, we will have to use other means of inferences, for example, the e-mail ID. Given the relationships of the data, which are inferred using the foreign key relationships and other indirect means, we can draw the logical graph of bookings as shown in the following diagram: Figure 4.3: Visualizing related entities in an RDBMS Figure 4.3 shows us the logical graph of how entities are connected in our domain. We can translate this into a Bookings subgraph. From the related entities of Figure 4.3, we can create a specification of the Bookings subgraph, which is as follows: Figure 4.4: Specification of subgraph of bookings Comparing Figure 5.3 and Figure 5.4, we observe that all the fk_ properties are removed from the nodes that represent the entities. Since we have explicit relationships that can now be used to traverse the graph, we don't need implicit relationships that rely on foreign keys to be enforced. We put the date of booking on the booking itself rather than on the relationship between User and Bookings. The date of booking can be captured either in the booking node or in the :MADE_BOOKING relationship. The advantage of capturing it in the booking node is that we can further run queries efficiently on it rather than relying on crude filtering methods to extract information from the subgraph. An important addition to the Bookings object is adding the properties year, month, and day. Since date is not a datatype supported by Neo4j, range queries become difficult. Timestamps solve this problem to some extent, for example, if we want to find all bookings made between June 01, 2015 and July 01, 2015, we can convert them into timestamps and search for all bookings that have timestamps between these two timestamps. This, however, is a very expensive process, and would need a store scan of bookings. To alleviate these problems, we can capture the year, day, and month on the booking. While adapting to the changing needs of the system, remodeling the data model is encouraged. It is also important that we build a data model with enough data captured for our needs—both current and future. It is a judgment-based decision, without any correct answer. As long as the data might be easily derived from existing data in the node, we recommend not to add it until needed. In this case, converting a timestamp to its corresponding date with its components might require additional programming effort. To avoid that, we can begin capturing the data right away. There might be other cases, for example, we want to introduce a property Name on a node with First name and Last name as properties. The derivation of Name from First name and Last name is straightforward. 
In this case, we advise not to capture the data till the need arises. Creating bookings and users in Neo4j For bookings to exist, we should create users in our data model. Creating users To create users, we create a constraint on the e-mail of the user, which we will use as a unique identifier, as shown in the following query: neo4j-sh (?)$ CREATE CONSTRAINT ON (user:User)   ASSERT user.email IS UNIQUE; The output of the preceding query is as follows: +-------------------+ | No data returned. | +-------------------+ Constraints added: 1 With the constraint added, let's create a few users in our system: neo4j-sh (?)$ CREATE (:User{name:"Mahesh Lal",   email:"mahesh.lal@gmail.com"}), (:User{name:"John Doe", email:"john.doe@gmail.com"}), (:User{name:"Vishal P", email:"vishal.p@gmail.com"}), (:User{name:"Dave Coeburg", email:"dave.coeburg@gmail.com"}), (:User{name:"Brian Heritage",     email:"brian.heritage@hotmail.com"}), (:User{name:"Amit Kumar", email:"amit.kumar@hotmail.com"}), (:User{name:"Pramod Bansal",     email:"pramod.bansal@hotmail.com"}), (:User{name:"Deepali T", email:"deepali.t@gmail.com"}), (:User{name:"Hari Seldon", email:"hari.seldon@gmail.com"}), (:User{name:"Elijah", email:"elijah.b@gmail.com"}); The output of the preceding query is as follows: +-------------------+ | No data returned. | +-------------------+ Nodes created: 10 Properties set: 20 Labels added: 10 Please add more users from users.cqy. Creating bookings in Neo4j As discussed earlier, a booking has multiple journey legs, and a booking is only complete when all its journey legs are booked. Bookings in our application aren't a single standalone entity. They involve multiple journeys and passengers. To create a booking, we need to ensure that journeys are created and information about passengers is captured. This results in a multistep process. To ensure that booking IDs remain unique and no two nodes have the same ID, we should add a constraint on the _id property of booking, the same property we set when creating booking nodes: neo4j-sh (?)$ CREATE CONSTRAINT ON (b:Booking)   ASSERT b._id IS UNIQUE; The output will be as follows: +-------------------+ | No data returned. | +-------------------+ Constraints added: 1 We will create similar constraints for Journey as shown here: neo4j-sh (?)$ CREATE CONSTRAINT ON (journey:Journey)   ASSERT journey._id IS UNIQUE; The output is as follows: +-------------------+ | No data returned. | +-------------------+ Constraints added: 1 Add a constraint for the e-mail of passengers to be unique, as shown here: neo4j-sh (?)$ CREATE CONSTRAINT ON (p:Passenger)   ASSERT p.email IS UNIQUE; The output is as shown: +-------------------+ | No data returned. | +-------------------+ Constraints added: 1 With constraint creation, we can now focus on how bookings can be created. 
We will be running this query in the Neo4j browser, as shown: //Get all flights and users MATCH (user:User{email:"john.doe@gmail.com"}) MATCH (f1:Flight{code:"VS9"}), (f2:Flight{code:"AA9"}) //Create a booking for a date MERGE (user)-[m:MADE_BOOKING]->(booking:Booking {_id:"0f64711c-7e22-11e4-a1af-14109fda6b71", booking_date:1417790677.274862, year: 2014, month: 12, day: 5}) //Create or get passengers MERGE (p1:Passenger{email:"vishal.p@gmail.com"}) ON CREATE SET p1.name = "Vishal Punyani", p1.age= 30 MERGE (p2:Passenger{email:"john.doe@gmail.com"}) ON CREATE SET p2.name = "John Doe", p2.age= 25 //Create journeys to be taken by flights MERGE (j1:Journey{_id: "712785b8-1aff-11e5-abd4-6c40089a9424", date_of_journey:1422210600.0, year:2015, month: 1, day: 26})-[:BY_FLIGHT]-> (f1) MERGE (j2:Journey{_id:"843de08c-1aff-11e5-8643-6c40089a9424", date_of_journey:1422210600.0, year:2015, month: 1, day: 26})-[:BY_FLIGHT]-> (f2) WITH user, booking, j1, j2, f1, f2, p1, p2 //Merge journeys and booking, Create and Merge passengers with bookings, and return data MERGE (booking)-[:HAS_PASSENGER]->(p1) MERGE (booking)-[:HAS_PASSENGER]->(p2) MERGE (booking)-[:HAS_JOURNEY]->(j1) MERGE (booking)-[:HAS_JOURNEY]->(j2) RETURN user, p1, p2, j1, j2, f1, f2, booking The output is as shown in the following screenshot: Figure 4.5: Booking that was just created We have added comments to the query to explain the different parts of the query. The query can be divided into the following parts: Finding flights and user Creating bookings Creating journeys Creating passengers and link to booking Linking journey to booking We have the same start date for both journeys, but in general, the start dates of journeys in the same booking will differ if: The traveler is flying across time zones. For example, if a traveler is flying from New York to Istanbul, the journeys from New York to London and from London to Istanbul will be on different dates. The traveler is booking multiple journeys in which they will be spending some time at a destination. Let's use bookings.cqy to add a few more bookings to the graph. We will use them to run further queries. Queries to find journeys and bookings With the data on bookings added in, we can now explore some interesting queries that can help us. Finding all journeys of a user All journeys that a user has undertaken will be all journeys that they have been a passenger on. We can use the user's e-mail to search for journeys on which the user has been a passenger. To find all the journeys that the user has been a passenger on, we should find the journeys via the bookings, and then using the bookings, we can find the journeys, flights, and cities as shown: neo4j-sh (?)$ MATCH (b:Booking)-[:HAS_PASSENGER]->(p:Passenger{email:"vishal.p@gmail.com"}) WITH b MATCH (b)-[:HAS_JOURNEY]->(j:Journey)-[:BY_FLIGHT]->(f:Flight) WITH b._id as booking_id, j.date_of_journey as date_of_journey, COLLECT(f) as flights ORDER BY date_of_journey DESC MATCH (source:City)-[:HAS_FLIGHT]->(f)-[:FLYING_TO]->(destination:City) WHERE f in flights RETURN booking_id, date_of_journey, source.name as from, f.code as by_flight, destination.name as to; The output of this query is as follows: While this query is useful to get all the journeys of the user, it can also be used to map all the locations the user has travelled to. 
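The same Cypher can also be issued from application code. The following Java sketch is purely illustrative: it assumes an embedded Neo4j 2.2 database at a hypothetical data/graph.db store location and the Neo4j Java libraries on the classpath, and it runs a simplified version of the journeys query for a given passenger e-mail:

import java.util.HashMap;
import java.util.Map;

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Result;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;

public class JourneyLookup {
    public static void main(String[] args) {
        // Hypothetical embedded store location; a server deployment would use a driver instead
        GraphDatabaseService db = new GraphDatabaseFactory().newEmbeddedDatabase("data/graph.db");
        String cypher = "MATCH (b:Booking)-[:HAS_PASSENGER]->(p:Passenger {email: {email}}) "
                + "MATCH (b)-[:HAS_JOURNEY]->(j:Journey)-[:BY_FLIGHT]->(f:Flight) "
                + "RETURN b._id as booking_id, j.date_of_journey as date_of_journey, f.code as by_flight";
        Map<String, Object> params = new HashMap<String, Object>();
        params.put("email", "vishal.p@gmail.com");
        try (Transaction tx = db.beginTx()) {
            Result result = db.execute(cypher, params);
            while (result.hasNext()) {
                Map<String, Object> row = result.next();
                System.out.println(row.get("booking_id") + " " + row.get("by_flight") + " " + row.get("date_of_journey"));
            }
            tx.success();
        }
        db.shutdown();
    }
}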
Queries for finding the booking history of a user The query for finding all bookings by a user is straightforward, as shown here: neo4j-sh (?)$ MATCH (user:User{email:"mahesh.lal@gmail.com"})-[:MADE_BOOKING]->(b:Booking) RETURN b._id as booking_id; The output of the preceding query is as follows: +----------------------------------------+ | booking_id                             | +----------------------------------------+ | "251679be-1b3f-11e5-820e-6c40089a9424" | | "ff3dd694-7e7f-11e4-bb93-14109fda6b71" | | "7c63cc35-7e7f-11e4-8ffe-14109fda6b71" | | "f5f15252-1b62-11e5-8252-6c40089a9424" | | "d45de0c2-1b62-11e5-98a2-6c40089a9424" | | "fef04c30-7e2d-11e4-8842-14109fda6b71" | | "f87a515e-7e2d-11e4-b170-14109fda6b71" | | "75b3e78c-7e2b-11e4-a162-14109fda6b71" | +----------------------------------------+ 8 rows Upcoming journeys of a user Upcoming journeys of a user is straightforward. We can construct it by simply comparing today's date to the journey date as shown: neo4j-sh (?)$ MATCH (user:User{email:"mahesh.lal@gmail.com"})-[:MADE_BOOKING]->(:Booking)-[:HAS_JOURNEY]-(j:Journey) WHERE j.date_of_journey >=1418055307 WITH COLLECT(j) as journeys MATCH (j:Journey)-[:BY_FLIGHT]->(f:Flight) WHERE j in journeys WITH j.date_of_journey as date_of_journey, COLLECT(f) as flights MATCH (source:City)-[:HAS_FLIGHT]->(f)-[:FLYING_TO]->(destination:City) WHERE f in flights RETURN date_of_journey, source.name as from, f.code as by_flight, destination.name as to; The output of the preceding query is as follows: +-------------------------------------------------------------+ | date_of_journey | from         | by_flight | to           | +-------------------------------------------------------------+ | 1.4226426E9     | "New York"   | "VS8"     | "London"     | | 1.4212602E9     | "Los Angeles" | "UA1262" | "New York"   | | 1.4212602E9     | "Melbourne"   | "QF94"   | "Los Angeles" | | 1.4304186E9     | "New York"   | "UA1507" | "Los Angeles" | | 1.4311962E9     | "Los Angeles" | "AA920"   | "New York"   | +-------------------------------------------------------------+ 5 rows Summary In this article, you learned how you can model a domain that has traditionally been implemented using RDBMS. We saw how tables can be changed to nodes and relationships, and we explored what happened to relationship tables. You also learned about transactions in Cypher and wrote Cypher to manipulate the database. Resources for Article: Further resources on this subject: Selecting the Layout [article] Managing Alerts [article] Working with a Neo4j Embedded Database [article]

The Splunk Interface

Packt
10 Aug 2015
17 min read
In this article by Vincent Bumgarner & James D. Miller, author of the book, Implementing Splunk - Second Edition, we will walk through the most common elements in the Splunk interface, and will touch upon concepts that will be covered in greater detail. You may want to dive right into the search section, but an overview of the user interface elements might save you some frustration later. We will cover the following topics: Logging in and app selection A detailed explanation of the search interface widgets A quick overview of the admin interface (For more resources related to this topic, see here.) Logging into Splunk The Splunk GUI interface (Splunk is also accessible through its command-line interface [CLI] and REST API) is web-based, which means that no client needs to be installed. Newer browsers with fast JavaScript engines, such as Chrome, Firefox, and Safari, work better with the interface. As of Splunk Version 6.2.0, no browser extensions are required. Splunk Versions 4.2 and earlier require Flash to render graphs. Flash can still be used by older browsers, or for older apps that reference Flash explicitly. The default port for a Splunk installation is 8000. The address will look like: http://mysplunkserver:8000 or http://mysplunkserver.mycompany.com:8000. The Splunk interface If you have installed Splunk on your local machine, the address can be some variant of http://localhost:8000, http://127.0.0.1:8000, http://machinename:8000, or http://machinename.local:8000. Once you determine the address, the first page you will see is the login screen. The default username is admin with the password changeme. The first time you log in, you will be prompted to change the password for the admin user. It is a good idea to change this password to prevent unwanted changes to your deployment. By default, accounts are configured and stored within Splunk. Authentication can be configured to use another system, for instance Lightweight Directory Access Protocol (LDAP). By default, Splunk authenticates locally. If LDAP is set up, the order is as follows: LDAP / Local. The home app After logging in, the default app is the Launcher app (some may refer to this as Home). This app is a launching pad for apps and tutorials. In earlier versions of Splunk, the Welcome tab provided two important shortcuts, Add data and the Launch search app. In version 6.2.0, the Home app is divided into distinct areas, or panes, that provide easy access to Explore Splunk Enterprise (Add Data, Splunk Apps, Splunk Docs, and Splunk Answers) as well as Apps (the App management page) Search & Reporting (the link to the Search app), and an area where you can set your default dashboard (choose a home dashboard).                 The Explore Splunk Enterprise pane shows links to: Add data: This links Add Data to the Splunk page. This interface is a great start for getting local data flowing into Splunk (making it available to Splunk users). The Preview data interface takes an enormous amount of complexity out of configuring dates and line breaking. Splunk Apps: This allows you to find and install more apps from the Splunk Apps Marketplace (http://apps.splunk.com). This marketplace is a useful resource where Splunk users and employees post Splunk apps, mostly free but some premium ones as well. 
Splunk Answers: This is one of your links to the wide amount of Splunk documentation available, specifically http://answers.splunk.com, where you can engage with the Splunk community on Splunkbase (https://splunkbase.splunk.com/) and learn how to get the most out of your Splunk deployment. The Apps section shows the apps that have GUI elements on your instance of Splunk. App is an overloaded term in Splunk. An app doesn't necessarily have a GUI at all; it is simply a collection of configurations wrapped into a directory structure that means something to Splunk. Search & Reporting is the link to the Splunk Search & Reporting app. Beneath the Search & Reporting link, Splunk provides an outline which, when you hover over it, displays a Find More Apps balloon tip. Clicking on the link opens the same Browse more apps page as the Splunk Apps link mentioned earlier. Choose a home dashboard provides an intuitive way to select an existing (simple XML) dashboard and set it as part of your Splunk Welcome or Home page. This sets you at a familiar starting point each time you enter Splunk. The following image displays the Choose Default Dashboard dialog: Once you select an existing dashboard from the dropdown list, it will be part of your welcome screen every time you log into Splunk – until you change it. There are no dashboards installed by default after installing Splunk, except the Search & Reporting app. Once you have created additional dashboards, they can be selected as the default. The top bar The bar across the top of the window contains information about where you are, as well as quick links to preferences, other apps, and administration. The current app is specified in the upper-left corner. The following image shows the upper-left Splunk bar when using the Search & Reporting app: Clicking on the text takes you to the default page for that app. In most apps, the text next to the logo is simply changed, but the whole block can be customized with logos and alternate text by modifying the app's CSS. The upper-right corner of the window, as seen in the previous image, contains action links that are almost always available: The name of the user who is currently logged in appears first. In this case, the user is Administrator. Clicking on the username allows you to select Edit Account (which will take you to the Your account page) or to Logout (of Splunk). Logout ends the session and forces the user to login again. The following screenshot shows what the Your account page looks like: This form presents the global preferences that a user is allowed to change. Other settings that affect users are configured through permissions on objects and settings on roles. (Note: preferences can also be configured using the CLI or by modifying specific Splunk configuration files). Full name and Email address are stored for the administrator's convenience. Time zone can be changed for the logged-in user. This is a new feature in Splunk 4.3. Setting the time zone only affects the time zone used to display the data. It is very important that the date is parsed properly when events are indexed. Default app controls the starting page after login. Most users will want to change this to search. Restart backgrounded jobs controls whether unfinished queries should run again if Splunk is restarted. Set password allows you to change your password. This is only relevant if Splunk is configured to use internal authentication. 
For instance, if the system is configured to use Windows Active Directory via LDAP (a very common configuration), users must change their password in Windows. Messages allows you to view any system-level error messages you may have pending. When there is a new message for you to review, a notification displays as a count next to the Messages menu. You can click the X to remove a message. The Settings link presents the user with the configuration pages for all Splunk Knowledge objects, Distributed Environment settings, System and Licensing, Data, and Users and Authentication settings. If you do not see some of these options, you do not have the permissions to view or edit them. The Activity menu lists shortcuts to Splunk Jobs, Triggered Alerts, and System Activity views. You can click Jobs (to open the search jobs manager window, where you can view and manage currently running searches), click Triggered Alerts (to view scheduled alerts that are triggered) or click System Activity (to see dashboards about user activity and the status of the system). Help lists links to video Tutorials, Splunk Answers, the Splunk Contact Support portal, and online Documentation. Find can be used to search for objects within your Splunk Enterprise instance. For example, if you type in error, it returns the saved objects that contain the term error. These saved objects include Reports, Dashboards, Alerts, and so on. You can also search for error in the Search & Reporting app by clicking Open error in search. The search & reporting app The Search & Reporting app (or just the search app) is where most actions in Splunk start. This app is a dashboard where you will begin your searching. The summary view Within the Search & Reporting app, the user is presented with the Summary view, which contains information about the data which that user searches for by default. This is an important distinction—in a mature Splunk installation, not all users will always search all data by default. But at first, if this is your first trip into Search & Reporting, you'll see the following: From the screen depicted in the previous screenshot, you can access the Splunk documentation related to What to Search and How to Search. Once you have at least some data indexed, Splunk will provide some statistics on the available data under What to Search (remember that this reflects only the indexes that this particular user searches by default; there are other events that are indexed by Splunk, including events that Splunk indexes about itself.) This is seen in the following image: In previous versions of Splunk, panels such as the All indexed data panel provided statistics for a user's indexed data. Other panels gave a breakdown of data using three important pieces of metadata—Source, Sourcetype, and Hosts. In the current version—6.2.0—you access this information by clicking on the button labeled Data Summary, which presents the following to the user: This dialog splits the information into three tabs—Hosts, Sources and Sourcetypes. A host is a captured hostname for an event. In the majority of cases, the host field is set to the name of the machine where the data originated. There are cases where this is not known, so the host can also be configured arbitrarily. A source in Splunk is a unique path or name. In a large installation, there may be thousands of machines submitting data, but all data on the same path across these machines counts as one source. 
When the data source is not a file, the value of the source can be arbitrary, for instance, the name of a script or network port. A source type is an arbitrary categorization of events. There may be many sources across many hosts, in the same source type. For instance, given the sources /var/log/access.2012-03-01.log and /var/log/access.2012-03-02.log on the hosts fred and wilma, you could reference all these logs with source type access or any other name that you like. Let's move on now and discuss each of the Splunk widgets (just below the app name). The first widget is the navigation bar. As a general rule, within Splunk, items with downward triangles are menus. Items without a downward triangle are links. Next we find the Search bar. This is where the magic starts. We'll go into great detail shortly. Search Okay, we've finally made it to search. This is where the real power of Splunk lies. For our first search, we will search for the word (not case specific); error. Click in the search bar, type the word error, and then either press Enter or click on the magnifying glass to the right of the bar. Upon initiating the search, we are taken to the search results page. Note that the search we just executed was across All time (by default); to change the search time, you can utilize the Splunk time picker. Actions Let's inspect the elements on this page. Below the Search bar, we have the event count, action icons, and menus. Starting from the left, we have the following: The number of events matched by the base search. Technically, this may not be the number of results pulled from disk, depending on your search. Also, if your query uses commands, this number may not match what is shown in the event listing. Job: This opens the Search job inspector window, which provides very detailed information about the query that was run. Pause: This causes the current search to stop locating events but keeps the job open. This is useful if you want to inspect the current results to determine whether you want to continue a long running search. Stop: This stops the execution of the current search but keeps the results generated so far. This is useful when you have found enough and want to inspect or share the results found so far. Share: This shares the search job. This option extends the job's lifetime to seven days and sets the read permissions to everyone. Export: This exports the results. Select this option to output to CSV, raw events, XML, or JavaScript Object Notation (JSON) and specify the number of results to export. Print: This formats the page for printing and instructs the browser to print. Smart Mode: This controls the search experience. You can set it to speed up searches by cutting down on the event data it returns and, additionally, by reducing the number of fields that Splunk will extract by default from the data (Fast mode). You can, otherwise, set it to return as much event information as possible (Verbose mode). In Smart mode (the default setting) it toggles search behavior based on the type of search you're running. Timeline Now we'll skip to the timeline below the action icons. Along with providing a quick overview of the event distribution over a period of time, the timeline is also a very useful tool for selecting sections of time. Placing the pointer over the timeline displays a pop-up for the number of events in that slice of time. Clicking on the timeline selects the events for a particular slice of time. Clicking and dragging selects a range of time. 
Once you have selected a period of time, clicking on Zoom to selection changes the time frame and reruns the search for that specific slice of time. Repeating this process is an effective way to drill down to specific events. Deselect shows all events for the time range selected in the time picker. Zoom out changes the window of time to a larger period around the events in the current time frame The field picker To the left of the search results, we find the field picker. This is a great tool for discovering patterns and filtering search results. Fields The field list contains two lists: Selected Fields, which have their values displayed under the search event in the search results Interesting Fields, which are other fields that Splunk has picked out for you Above the field list are two links: Hide Fields and All Fields. Hide Fields: Hides the field list area from view. All Fields: Takes you to the Selected Fields window. Search results We are almost through with all the widgets on the page. We still have a number of items to cover in the search results section though, just to be thorough. As you can see in the previous screenshot, at the top of this section, we have the number of events displayed. When viewing all results in their raw form, this number will match the number above the timeline. This value can be changed either by making a selection on the timeline or by using other search commands. Next, we have the action icons (described earlier) that affect these particular results. Under the action icons, we have four results tabs: Events list, which will show the raw events. This is the default view when running a simple search, as we have done so far. Patterns streamlines the event pattern detection. It displays a list of the most common patterns among the set of events returned by your search. Each of these patterns represents the number of events that share a similar structure. Statistics populates when you run a search with transforming commands such as stats, top, chart, and so on. The previous keyword search for error does not display any results in this tab because it does not have any transforming commands. Visualization transforms searches and also populates the Visualization tab. The results area of the Visualization tab includes a chart and the statistics table used to generate the chart. Not all searches are eligible for visualization. Under the tabs described just now, is the timeline. Options Beneath the timeline, (starting at the left) is a row of option links that include: Show Fields: shows the Selected Fields screen List: allows you to select an output option (Raw, List, or Table) for displaying the search results Format: provides the ability to set Result display options, such as Show row numbers, Wrap results, the Max lines (to display) and Drilldown as on or off. NN Per Page: is where you can indicate the number of results to show per page (10, 20, or 50). To the right are options that you can use to choose a page of results, and to change the number of events per page. In prior versions of Splunk, these options were available from the Results display options popup dialog. The events viewer Finally, we make it to the actual events. Let's examine a single event. Starting at the left, we have: Event Details: Clicking here (indicated by the right facing arrow) opens the selected event, providing specific information about the event by type, field, and value, and allows you the ability to perform specific actions on a particular event field. 
In addition, Splunk version 6.2.0 offers a button labeled Event Actions to access workflow actions, a few of which are always available. Build Eventtype: Event types are a way to name events that match a certain query. Extract Fields: This launches an interface for creating custom field extractions. Show Source: This pops up a window with a simulated view of the original source. The event number: Raw search results are always returned in the order most recent first. Next to appear are any workflow actions that have been configured. Workflow actions let you create new searches or links to other sites, using data from an event. Next comes the parsed date from this event, displayed in the time zone selected by the user. This is an important and often confusing distinction. In most installations, everything is in one time zone—the servers, the user, and the events. When one of these three things is not in the same time zone as the others, things can get confusing. Next, we see the raw event itself. This is what Splunk saw as an event. With no help, Splunk can do a good job finding the date and breaking lines appropriately, but as we will see later, with a little help, event parsing can be more reliable and more efficient. Below the event are the fields that were selected in the field picker. Clicking on the value adds the field value to the search. Summary As you have seen, the Splunk GUI provides a rich interface for working with search results. We have really only scratched the surface and will cover more elements. Resources for Article: Further resources on this subject: The Splunk Web Framework [Article] Loading data, creating an app, and adding dashboards and reports in Splunk [Article] Working with Apps in Splunk [Article]

Oracle GoldenGate 12c — An Overview

Packt
10 Aug 2015
21 min read
In this article by John P Jeffries, author of the book Oracle GoldenGate 12c Implementer's Guide, he provides an introduction to Oracle GoldenGate by describing the key components, processes, and considerations required to build and implement a GoldenGate solution. John tells you how to address some of the issues that influence the decision-making process when you design a GoldenGate solution. He focuses on the additional configuration options available in Oracle GoldenGate 12c (For more resources related to this topic, see here.) 12c new features Oracle has provided some exciting new features in their 12c version of GoldenGate, some of which we have already touched upon. Following the official desupport of Oracle Streams in Oracle Database 12c, Oracle has essentially migrated some of the key features to its strategic product. You will find that GoldenGate now has a tighter integration with the Oracle database, enabling enhanced functionality. Let's explore some of the new features available in Oracle GoldenGate 12c. Integrated capture Integrated capture has been available since Oracle GoldenGate 11gR2 with Oracle Database 11g (11.2.0.3). Originally decoupled from the database, GoldenGate's new architecture provides the option to integrate its Extract process(es) with the Oracle database. This enables GoldenGate to access the database's data dictionary and undo tablespace, providing replication support for advanced features and data types. Oracle GoldenGate 12c still supports the original Extract configuration, known as Classic Capture. Integrated Replicat Integrated Replicat is a new feature in Oracle GoldenGate 12c for the delivery of data to Oracle Database 11g (11.2.0.4) or 12c. The performance enhancement provides better scalability and load balancing that leverages the database parallel apply servers for automatic, dependency-aware parallel Replicat processes. With Integrated Replicat, there is no need for users to manually split the delivery process into multiple threads and manage multiple parameter files. GoldenGate now uses a lightweight streaming API to prepare, coordinate, and apply the data to the downstream database. Oracle GoldenGate 12c still supports the original Replicat configuration, known as Classic Delivery. Downstream capture Downstream capture was one of my favorite Oracle Stream features. It allows for a combined in-memory capture and apply process that achieves very low latency even in heavy data load situations. Like Streams, GoldenGate builds on this feature by employing a real-time downstream capture process. This method uses Oracle Data Guard's log transportation mechanism, which writes changed data to standby redo logs. It provides a best-of-both-worlds approach, enabling a real-time mine configuration that falls back to archive log mining when the apply process cannot keep up. In addition, the real-time mine process is re-enabled automatically when the data throughput is less. Installation One of the major changes in Oracle GoldenGate 12c is the installation method. Like other Oracle products, Oracle GoldenGate 12c is now installed using the Java-based Oracle Universal Installer (OUI) in either the interactive or silent mode. OUI reads the Oracle Inventory on your system to discover existing installations (Oracle Homes), allowing you to install, deinstall, or clone software products. Upgrading to 12c Whether you wish to upgrade your current GoldenGate installation from Oracle GoldenGate 11g Release 2 or from an earlier version, the steps are the same. 
Simply stop all the GoldenGate running processes on your database server, backup the GoldenGate home, and then use OUI to perform the fresh installation. It is important to note, however, while restarting replication, ensure the capture process begins from the point at which it was gracefully stopped to guarantee against lost synchronization data. Multitenant database replication As the version suggests, Oracle GoldenGate 12c now supports data replication for Oracle Database 12c. Those familiar with the 12c database features will be aware of the multitenant container database (CDB) that provides database consolidation. Each CDB consists of a root container and one or more pluggable databases (PDB). The PDB can contain multiple schemas and objects, just like a conventional database that GoldenGate replicates data to and from. The GoldenGate Extract process pulls data from multiple PDBs or containers in the source, combining the changed data into a single trail file. Replicat, however, splits the data into multiple process groups in order to apply the changes to a target PDB. Coordinated Delivery The Coordinated Delivery option applies to the GoldenGate Replicat process when configured in the classic mode. It provides a performance gain by automatically splitting the delivered data from a remote trail file into multiple threads that are then applied to the target database in parallel. GoldenGate manages the coordination across selected events that require ordering, including DDL, primary key updates, event marker interface (EMI), and SQLEXEC. Coordinated Delivery can be used with both Oracle (from version 11.2.0.4) and non-Oracle databases. Event-based processing In GoldenGate 12c, event-based processing has been enhanced to allow specific events to be captured and acted upon automatically through an EMI. SQLEXEC provides the API to EMI, enabling programmatic execution of tasks following an event. Now it is possible, for example, to detect the start of a batch job or large transaction, trap the SQL statement(s), and ignore the subsequent multiple change records until the end of the source system transaction. The original DML can then be replayed on the target database as one transaction. This is a major step forward in the performance tuning for data replication. Enhanced security Recent versions of GoldenGate have included security features such as the encryption of passwords and data. Oracle GoldenGate 12c now supports a credential store, better known as an Oracle wallet, that securely stores an alias associated with a username and password. The alias is then referenced in the GoldenGate parameter files rather than the actual username and password. Conflict Detection and Resolution In earlier versions of GoldenGate, Conflict Detection and Resolution (CDR) has been somewhat lightweight and was not readily available out of the box. Although available in Oracle Streams, the GoldenGate administrator would have to programmatically resolve any data conflict in the replication process using GoldenGate built-in tools. In the 12c version, the feature has emerged as an easily configurable option through Extract and Replicat parameters. Dynamic Rollback Selective data back out of applied transactions is now possible using the Dynamic Rollback feature. The feature operates at table and record-level and supports point-in-time recovery. 
This potentially eliminates the need for a full database restore, following data corruption, erroneous deletions, or perhaps the removal of test data, thus avoiding hours of system downtime. Streams to GoldenGate migration Oracle Streams users can now migrate their data replication solution to Oracle GoldenGate 12c using a purpose-built utility. This is a welcomed feature given that Streams is no longer supported in Oracle Database 12c. The Streams2ogg tool auto generates Oracle GoldenGate configuration files that greatly simplify the effort required in the migration process. Performance In today's demand for real-time access to real-time data, high performance is the key. For example, businesses will no longer wait for information to arrive on their DSS to make decisions and users will expect the latest information to be available in the public cloud. Data has value and must be delivered in real time to meet the demand. So, how long does it take to replicate a transaction from the source database to its target? This is known as end-to-end latency, which typically has a threshold that must not be breeched in order to satisfy a predefined Service Level Agreement (SLA). GoldenGate refers to latency as lag, which can be measured at different intervals in the replication process. They are as follows: Source to Extract: The time taken for a record to be processed by the Extract compared to the commit timestamp on the database Replicat to target: The time taken for the last record to be processed by the Replicat process compared to the record creation time in the trail file A well-designed system may still encounter spikes in the latency, but it should never be continuous or growing. Peaks are typically caused by load on the source database system, where the latency increases with the number of transactions per second. Lag should be measured as an average over a specified period. Trying to tune GoldenGate when the design is poor is a difficult situation to be in. For the system to perform well, you may need to revisit the design. Availability Another important NFR is availability. Normally quoted as a percentage, the system must be available for the specified length of time. For example, NFR of 99.9 percent availability equates to a downtime of 8.76 hours in a year, which sounds quite a lot, especially if it were to occur all at once. Oracle's maximum availability architecture (MAA) offers enhanced availability through products such as Real Application Clusters (RAC) and Active Data Guard (ADG). However, as we previously described, the network plays a major role in data replication. The NFR relates to the whole system, so you need to be sure your design covers redundancy for all components. Event-based processing It is important in any data replication environment to capture and manage events, such as trail records containing specific data or operations or maybe the occurrence of a certain error. These are known as Event Markers. GoldenGate provides a mechanism to perform an action on a given event or condition. These are known as Event Actions and are triggered by Event Records. If you are familiar with Oracle Streams, Event Actions are like rules. The Event Marker System GoldenGate's Event Marker System, also known as event marker interface (EMI), allows custom DML-driven processing on an event. This comprises of an Event Record to trigger a given action. 
An Event Record can be either a trail record that satisfies a condition evaluated by a WHERE or FILTER clause or a record written to an event table that enables an action to occur. Typical actions are writing status information, reporting errors, ignoring certain records in a trail, invoking a shell script, or performing an administrative task. The following Replicat code describes the process of capturing an event and performing an action by logging DELETE operations made against the CREDITCARD_ACCOUNTS table using the EVENTACTIONS parameter: MAP SRC.CREDITCARD_ACCOUNTS, TARGET TGT.CREDITCARD_ACCOUNTS_DIM;TABLE SRC.CREDITCARD_ACCOUNTS, &FILTER (@GETENV ('GGHEADER', 'OPTYPE') = 'DELETE'), &EVENTACTIONS (LOG INFO); By default, all logged information is written to the process group report file, the GoldenGate error log, and the system messages file. On Linux, this is the /var/log/messages file. Note that the TABLE parameter is also used in the Replicat's parameter file. This is a means of triggering an Event Action to be executed by the Replicat when it encounters an Event Marker. The following code shows the use of the IGNORE option that prevents certain records from being extracted or replicated, which is particularly useful to filter out system type data. When used with the TRANSACTION option, the whole transaction and not just the Event Record is ignored: TABLE SRC.CREDITCARD_ACCOUNTS, &FILTER (@GETENV ('GGHEADER', 'OPTYPE') = 'DELETE'), &EVENTACTIONS (IGNORE TRANSACTION); The preceding code extends the previous code by stopping the Event Record itself from being replicated. Using Event Actions to improve batch performance All replication technologies typically suffer from one flaw that is the way in which the data is replicated. Consider a table that is populated with a million rows as part of a batch process. This may be a bulk insert operation that Oracle completes on the source database as one transaction. However, Oracle will write each change to its redo logs as Logical Change Records (LCRs). GoldenGate will subsequently mine the logs, write the LCRs to a remote trail, convert each one back to DML, and apply them to the target database, one row at a time. The single source transaction becomes one million transactions, which causes a huge performance overhead. To overcome this issue, we can use Event Actions to: Detect the DML statement (INSERT INTO TABLE SELECT ..) Ignore the data resulting from the SELECT part of the statement Replicate just the DML statement as an Event Record Execute just the DML statement on the target database The solution requires a statement table on both source and target databases to trigger the event. Also, both databases must be perfectly synchronized to avoid data integrity issues. User tokens User tokens are GoldenGate environment variables that are captured and stored in the trail record for replication. They can be accessed via the @GETENV function. We can use token data in column maps, stored procedures called by SQLEXEC, and, of course, in macros. Using user tokens to populate a heartbeat table A vast array of user tokens exist in GoldenGate. Let's start by looking at a common method of replicating system information to populate a heartbeat table that can be used to monitor performance. We can use the TOKENS option of the Extract TABLE parameter to define a user token and associate it with the GoldenGate environment data. 
The following Extract configuration code shows the token declarations for the heartbeat table:

TABLE GGADMIN.GG_HB_OUT, &
TOKENS (EXTGROUP = @GETENV ("GGENVIRONMENT","GROUPNAME"), &
EXTTIME = @DATE ("YYYY-MM-DD HH:MI:SS.FFFFFF","JTS",@GETENV("JULIANTIMESTAMP")), &
EXTLAG = @GETENV ("LAG","SEC"), &
EXTSTAT_TOTAL = @GETENV ("DELTASTATS","DML"), &
), FILTER (@STREQ (EXTGROUP, @GETENV("GGENVIRONMENT","GROUPNAME")));

For the data pump, the example Extract configuration is shown here:

TABLE GGADMIN.GG_HB_OUT, &
TOKENS (PMPGROUP = @GETENV ("GGENVIRONMENT","GROUPNAME"), &
PMPTIME = @DATE ("YYYY-MM-DD HH:MI:SS.FFFFFF","JTS",@GETENV("JULIANTIMESTAMP")), &
PMPLAG = @GETENV ("LAG","SEC"));

Also, for the Replicat, the following configuration populates the heartbeat table on the target database with the token data derived from the Extract, data pump, and Replicat, containing system details and replication lag:

MAP GGADMIN.GG_HB_OUT_SRC, TARGET GGADMIN.GG_HB_IN_TGT, &
KEYCOLS (DB_NAME, EXTGROUP, PMPGROUP, REPGROUP), &
INSERTMISSINGUPDATES, &
COLMAP (USEDEFAULTS, &
ID = 0, &
SOURCE_COMMIT = @GETENV ("GGHEADER", "COMMITTIMESTAMP"), &
EXTGROUP = @TOKEN ("EXTGROUP"), &
EXTTIME = @TOKEN ("EXTTIME"), &
PMPGROUP = @TOKEN ("PMPGROUP"), &
PMPTIME = @TOKEN ("PMPTIME"), &
REPGROUP = @TOKEN ("REPGROUP"), &
REPTIME = @DATE ("YYYY-MM-DD HH:MI:SS.FFFFFF","JTS",@GETENV("JULIANTIMESTAMP")), &
EXTLAG = @TOKEN ("EXTLAG"), &
PMPLAG = @TOKEN ("PMPLAG"), &
REPLAG = @GETENV ("LAG","SEC"), &
EXTSTAT_TOTAL = @TOKEN ("EXTSTAT_TOTAL"));

As in the heartbeat table example, the defined user tokens can be called in a MAP statement using the @TOKEN function. The SOURCE_COMMIT and LAG metrics are self-explanatory. However, EXTSTAT_TOTAL, which is derived from DELTASTATS, is particularly useful to measure the load on the source system when you evaluate latency peaks.

For applications, user tokens are useful to audit data and trap exceptions within the replicated data stream. Common user tokens are shown in the following code, which replicates the token data to five columns of an audit table:

MAP SRC.AUDIT_LOG, TARGET TGT.AUDIT_LOG, &
COLMAP (USEDEFAULTS, &
OSUSER = @TOKEN ("TKN_OSUSER"), &
DBNAME = @TOKEN ("TKN_DBNAME"), &
HOSTNAME = @TOKEN ("TKN_HOSTNAME"), &
TIMESTAMP = @TOKEN ("TKN_COMMITTIME"), &
BEFOREAFTERINDICATOR = @TOKEN ("TKN_BEFOREAFTERINDICATOR"));

The BEFOREAFTERINDICATOR environment variable is particularly useful to provide a status flag in order to check whether the data came from a Before or After image of an UPDATE or DELETE operation. By default, GoldenGate provides After images. To enable Before image extraction, the GETUPDATEBEFORES Extract parameter must be used on the source database.

Using logic in the data replication

GoldenGate has a number of functions that enable the administrator to program logic into the Extract and Replicat process configuration. These provide generic conditional functions similar to the IF and CASE constructs found in programming languages. In addition, the @COLTEST function enables conditional calculations by testing for one or more column conditions. This is typically used with the @IF function, as shown in the following code:

MAP SRC.CREDITCARD_PAYMENTS, TARGET TGT.CREDITCARD_PAYMENTS_FACT, &
COLMAP (USEDEFAULTS, &
AMOUNT = @IF(@COLTEST(AMOUNT, MISSING, INVALID), 0, AMOUNT));

Here, the @COLTEST function tests the AMOUNT column in the source data to check whether it is MISSING or INVALID. The @IF function returns 0 if @COLTEST returns TRUE and returns the value of AMOUNT if FALSE.
The target AMOUNT column is therefore set to 0 when the equivalent source column is found to be missing or invalid; otherwise, a direct mapping occurs.

The @CASE function tests a list of values for a match and then returns a specified value. If no match is found, @CASE will return a default value. There is no limit to the number of cases to test; however, if the list is very large, a database lookup may be more appropriate. The following code shows the simplicity of the @CASE statement. Here, the country name is returned from the country code:

MAP SRC.CREDITCARD_STATEMENT, TARGET TGT.CREDITCARD_STATEMENT_DIM, &
COLMAP (USEDEFAULTS, &
COUNTRY = @CASE(COUNTRY_CODE, "UK", "United Kingdom", "USA", "United States of America"));

Other GoldenGate functions that perform tests exist: @EVAL and @VALONEOF. Similar to @CASE, @VALONEOF compares a column or string to a list of values. The difference is that it evaluates more than one value against a single column or string. When the following code is used with @IF, it returns "EUROPE" when TRUE and "UNKNOWN" when FALSE:

MAP SRC.CREDITCARD_STATEMENT, TARGET TGT.CREDITCARD_STATEMENT_DIM, &
COLMAP (USEDEFAULTS, &
REGION = @IF(@VALONEOF(COUNTRY_CODE, "UK", "E", "D"), "EUROPE", "UNKNOWN"));

The @EVAL function evaluates a list of conditions and returns a specified value. Optionally, if none are satisfied, it returns a default value. There is no limit to the number of evaluations you can list. However, it is best to list the most common evaluations at the beginning to enhance performance. The following code includes the BEFORE option that compares the before value of the replicated source column to the current value of the target column. Depending on the evaluation, @EVAL will return "PAID MORE", "PAID LESS", or "PAID SAME":

MAP SRC.CREDITCARD_PAYMENTS, TARGET TGT.CREDITCARD_PAYMENTS, &
COLMAP (USEDEFAULTS, &
STATUS = @EVAL(AMOUNT < BEFORE.AMOUNT, "PAID LESS", AMOUNT > BEFORE.AMOUNT, "PAID MORE", AMOUNT = BEFORE.AMOUNT, "PAID SAME"));

The BEFORE option can be used with other GoldenGate functions, including the WHERE and FILTER clauses. However, for the Before image to be written to the trail and to be available, the GETUPDATEBEFORES parameter must be enabled in the source database's Extract parameter file or the target database's Replicat parameter file, but not both. The GETUPDATEBEFORES parameter can be set globally for all tables defined in the Extract or individually per table using GETUPDATEBEFORES and IGNOREUPDATEBEFORES, as seen in the following code:

EXTRACT EOLTP01
USERIDALIAS srcdb DOMAIN admin
SOURCECATALOG PDB1
EXTTRAIL ./dirdat/aa
GETAPPLOPS
IGNOREREPLICATES
GETUPDATEBEFORES
TABLE SRC.CHECK_PAYMENTS;
IGNOREUPDATEBEFORES
TABLE SRC.CHECK_PAYMENTS_STATUS;
TABLE SRC.CREDITCARD_ACCOUNTS;
TABLE SRC.CREDITCARD_PAYMENTS;

Tracing processes to find wait events

If you have worked with Oracle software, particularly in the performance tuning space, you will be familiar with tracing. Tracing enables additional information to be gathered from a given process or function to diagnose performance problems or even bugs. One example is the SQL trace that can be enabled at the database session or system level to provide key information, such as wait events and parse, fetch, and execute times. Oracle GoldenGate 12c offers a similar tracing mechanism through the trace and trace2 options of the SEND GGSCI command. This is like a session-level SQL trace.
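For comparison, this is roughly what enabling a session-level SQL trace looks like on the Oracle database side. It is a sketch for illustration only; the session ID, serial number, and schema name are placeholders, and your DBA may prefer a different tracing method:

-- Identify the session to trace (the username and the values returned are placeholders)
SELECT sid, serial# FROM v$session WHERE username = 'SRC_USER';

-- Enable tracing with wait events and bind values for that session
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 101, serial_num => 202, waits => TRUE, binds => TRUE);

-- ... reproduce the workload, then switch tracing off again
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 101, serial_num => 202);

GoldenGate's trace2 output, shown next, serves the same purpose for a Replicat: it reveals where the process spends its time.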
Also, in a similar fashion to performing a database system trace, tracing can be enabled in the GoldenGate process parameter files, which makes it permanent until the Extract or Replicat is stopped. trace provides processing information, whereas trace2 identifies the processes with wait events. The following commands show tracing being dynamically enabled for 2 minutes on a running Replicat process:

GGSCI (db12server02) 1> send ROLAP01 trace2 ./dirrpt/ROLAP01.trc

Wait for 2 minutes, then turn tracing off:

GGSCI (db12server02) 2> send ROLAP01 trace2 off
GGSCI (db12server02) 3> exit

To view the contents of the Replicat trace file, we can execute the following command. In the case of a coordinated Replicat, the trace file will contain information from all of its threads:

$ view dirrpt/ROLAP01.trc
statistics between 2015-08-08 Wed HKT 11:55:27 and 2015-08-08 Wed HKT 11:57:28
RPT_PROD_01.LIMIT_TP_RESP : n=2 : op=Insert; total=3; avg=1.5000; max=3msec
RPT_PROD_01.SUP_POOL_SMRY_HIST : n=1 : op=Insert; total=2; avg=2.0000; max=2msec
RPT_PROD_01.EVENTS : n=1 : op=Insert; total=2; avg=2.0000; max=2msec
RPT_PROD_01.DOC_SHIP_DTLS : n=17880 : op=FieldComp; total=22003; avg=1.2306; max=42msec
RPT_PROD_01.BUY_POOL_SMRY_HIST : n=1 : op=Insert; total=2; avg=2.0000; max=2msec
RPT_PROD_01.LIMIT_TP_LOG : n=2 : op=Insert; total=2; avg=1.0000; max=2msec
RPT_PROD_01.POOL_SMRY : n=1 : op=FieldComp; total=2; avg=2.0000; max=2msec
..
=============================================== summary ==============
Delete : n=2; total=2; avg=1.00;
Insert : n=78; total=356; avg=4.56;
FieldComp : n=85728; total=123018; avg=1.43;
total_op_num=85808 : total_op_time=123376 ms : total_avg_time=1.44ms/op
total commit number=1

The trace file provides the following information:

- The table name
- The operation type (FieldComp is for a compressed field)
- The number of operations
- The average wait
- The maximum wait
- A summary

Armed with the preceding information, we can quickly see which operations against which tables are taking the longest time.

Exception handling

Oracle GoldenGate 12c now supports Conflict Detection and Resolution (CDR). However, out of the box, GoldenGate takes a catch-all approach to exception handling. For example, by default, should any operational failure occur, a Replicat process will ABEND and roll back the transaction to the last known checkpoint. This may not be ideal in a production environment. The HANDLECOLLISIONS and NOHANDLECOLLISIONS parameters can be used to control whether or not a Replicat process tries to resolve the duplicate record error and the missing record error. The way to determine what error occurred and on which Replicat is to create an exceptions handler.

Exception handling differs from CDR by trapping and reporting Oracle errors suffered by the data replication (DML and DDL). On the other hand, CDR detects and resolves inconsistencies in the replicated data, such as mismatches with before and after images. Exceptions can always be trapped by the Oracle error they produce. GoldenGate provides an exception handler parameter called REPERROR that allows the Replicat to continue processing data after a predefined error. For example, we can include the following configuration in our Replicat parameter file to treat ORA-00001 "unique constraint (%s.%s) violated" as an exception:

REPERROR (DEFAULT, EXCEPTION)
REPERROR (DEFAULT2, ABEND)
REPERROR (-1, EXCEPTION)

Cloud computing

Cloud computing has grown enormously in recent years. Oracle has named its latest version of products 12c, with the c standing for Cloud, of course.
The architecture of Oracle Database 12c allows a multitenant container database to support multiple pluggable databases (a key feature of cloud computing), rather than the inefficient schema consolidation typical of the previous Oracle database architecture, which is known to cause contention on shared resources during high load. The Oracle 12c architecture supports a database consolidation approach through its efficient memory management and dedicated background processes.

Online computing companies such as Amazon have leveraged the cloud concept by offering the Relational Database Service (RDS), which is becoming very popular for its speed of readiness, support, and low cost. Cloud environments are often huge, containing hundreds of servers, petabytes of storage, terabytes of memory, and countless CPU cores. The cloud has to support multiple applications in a multi-tiered, shared environment, often through virtualization technologies, where storage and CPUs are typically the driving factors for cost-effective options. Customers choose the hardware footprint that best suits their budget and system requirements, an offering commonly known as Platform as a Service (PaaS). Cloud computing is an extension of grid computing that offers both public and private clouds.

GoldenGate and Big Data

It is increasingly evident that organizations need to quickly access, analyze, and report on their data across the enterprise in order to be agile in a competitive market. Data is becoming more of an asset to companies; it adds value to a business, but it may be stored in any number of current and legacy systems, making it difficult to realize its full potential. This data is known as big data, and until recently it has been nearly impossible to perform real-time business analysis on the combined data from multiple sources. Nowadays, the ability to access all transactional data with low latency is essential. With the introduction of products such as Apache Hadoop, the integration of structured data from an RDBMS with semi-structured and unstructured data offers a common playing field to support business intelligence. When coupled with ODI, GoldenGate for Big Data provides real-time delivery to a suite of Apache products, such as Flume, HDFS, Hive, and HBase, to support big data analytics.

Summary

In this article, we introduced Oracle GoldenGate by describing the key components, processes, and considerations required to build and implement a GoldenGate solution.

Resources for Article:

Further resources on this subject:

- What is Oracle Public Cloud? [Article]
- Oracle GoldenGate - Advanced Administration Tasks - I [Article]
- Oracle B2B Overview [Article]


Setting Up Synchronous Replication

Packt
10 Aug 2015
17 min read
In this article by Hans-Jürgen Schönig, author of the book PostgreSQL Replication, Second Edition, we learn how to set up synchronous replication. In asynchronous replication, data is submitted and received by the slave (or slaves) after the transaction has been committed on the master. During the time between the master's commit and the point when the slave has actually received all the data, it can still be lost.

Here, you will learn about the following topics:

- Making sure that no single transaction can be lost
- Configuring PostgreSQL for synchronous replication
- Understanding and using application_name
- The performance impact of synchronous replication
- Optimizing replication for speed

Synchronous replication can be the cornerstone of your replication setup, providing a system that ensures zero data loss.

(For more resources related to this topic, see here.)

Synchronous replication setup

Synchronous replication has been designed to protect your data at all costs. The core idea of synchronous replication is that a transaction must be on at least two servers before the master returns success to the client. Making sure that data is on at least two nodes is a key requirement for ensuring no data loss in the event of a crash.

Setting up synchronous replication works just like setting up asynchronous replication. Just a handful of parameters discussed here have to be changed to enjoy the blessings of synchronous replication. However, if you are about to create a setup based on synchronous replication, we recommend getting started with an asynchronous setup and gradually extending your configuration, turning it into synchronous replication. This will allow you to debug things more easily and avoid problems down the road.

Understanding the downside to synchronous replication

The most important thing you have to know about synchronous replication is that it is simply expensive. Synchronous replication and its downsides are two of the core reasons why we have decided to include all this background information in this book. It is essential to understand the physical limitations of synchronous replication; otherwise, you could end up in deep trouble. When setting up synchronous replication, try to keep the following things in mind:

- Minimize the latency
- Make sure you have redundant connections
- Synchronous replication is more expensive than asynchronous replication
- Always cross-check twice whether there is a real need for synchronous replication

In many cases, it is perfectly fine to lose a couple of rows in the event of a crash. Synchronous replication can safely be skipped in this case. However, if there is zero tolerance for data loss, synchronous replication is the tool to use.

Understanding the application_name parameter

In order to understand a synchronous setup, a config variable called application_name is essential; it plays an important role in such a setup. In a typical application, people use the application_name parameter for debugging purposes, as it allows users to assign a name to their database connection. It can help track bugs, identify what an application is doing, and so on:

test=# SHOW application_name;
 application_name
------------------
 psql
(1 row)

test=# SET application_name TO 'whatever';
SET
test=# SHOW application_name;
 application_name
------------------
 whatever
(1 row)

As you can see, it is possible to set the application_name parameter freely. The setting is valid for the session we are in, and it will be gone as soon as we disconnect.
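The name you assign is also visible to other sessions through the pg_stat_activity system view. The following is only a quick sketch; the process ID shown is, of course, illustrative:

test=# SELECT pid, application_name, state FROM pg_stat_activity;
  pid  | application_name | state
-------+------------------+--------
 12345 | whatever         | active
(1 row)

Client libraries based on libpq can also pass application_name directly in the connection string, which is exactly how the slave will identify itself to the master later in this article.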
The question now is: what does application_name have to do with synchronous replication? Well, the story goes like this: if this application_name value happens to be part of synchronous_standby_names, the slave will be a synchronous one. In addition to that, to be a synchronous standby, it has to be:

- connected
- streaming data in real time (that is, not fetching old WAL records)

Once a standby becomes synced, it remains in that position until disconnection. In the case of cascaded replication (which means that a slave is again connected to a slave), the cascaded slave is not treated synchronously anymore. Only the first server is considered to be synchronous.

With all of this information in mind, we can move forward and configure our first synchronous replication.

Making synchronous replication work

To show you how synchronous replication works, this article will include a full, working example outlining all the relevant configuration parameters. A couple of changes have to be made to the master. The following settings will be needed in postgresql.conf on the master:

wal_level = hot_standby
max_wal_senders = 5    # or any number
synchronous_standby_names = 'book_sample'
hot_standby = on       # on the slave to make it readable

Then we have to adapt pg_hba.conf. After that, the server can be restarted and the master is ready for action. We also recommend setting wal_keep_segments to keep more transaction logs on the master database. This makes the entire setup much more robust. It is also possible to utilize replication slots.

In the next step, we can perform a base backup just as we have done before. We have to call pg_basebackup on the slave. Ideally, we already include the transaction log when doing the base backup. The --xlog-method=stream parameter allows us to fire things up quickly and without any greater risks. The --xlog-method=stream and wal_keep_segments parameters are a good combination and, in our opinion, should be used in most cases to ensure that a setup works flawlessly and safely.

We have already recommended setting hot_standby on the master. The config file will be replicated anyway, so you save yourself one trip to postgresql.conf to change this setting. Of course, this is not fine art but an easy and pragmatic approach.

Once the base backup has been performed, we can move ahead and write a simple recovery.conf file suitable for synchronous replication, as follows:

iMac:slavehs$ cat recovery.conf
primary_conninfo = 'host=localhost
                    application_name=book_sample
                    port=5432'
standby_mode = on

The config file looks just like before. The only difference is that we have added application_name to the scenery. Note that the application_name parameter must be identical to the synchronous_standby_names setting on the master.

Once we have finished writing recovery.conf, we can fire up the slave. In our example, the slave is on the same server as the master. In this case, you have to ensure that the two instances use different TCP ports; otherwise, the instance that starts second will not be able to fire up. The port can easily be changed in postgresql.conf.

After these steps, the database instance can be started. The slave will check out its connection information and connect to the master. Once it has replayed all the relevant transaction logs, it will be in the synchronous state. The master and the slave will hold exactly the same data from then on.
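To recap the base backup step in command form, a call could look roughly like the following. This is only a sketch: the data directory, connection settings, and user shown here are assumptions that must be adapted to your environment:

iMac:slavehs$ pg_basebackup -D /data/slave \
    -h localhost -p 5432 -U postgres \
    --checkpoint=fast --xlog-method=stream

The --checkpoint=fast option merely avoids waiting for a regular checkpoint on the master; the important part for this setup is --xlog-method=stream, which streams the transaction log alongside the base backup.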
Checking the replication

Now that we have started the database instance, we can connect to the system and see whether things are working properly. To check for replication, we can connect to the master and take a look at pg_stat_replication. For this check, we can connect to any database inside our (master) instance, as follows:

postgres=# \x
Expanded display is on.
postgres=# SELECT * FROM pg_stat_replication;
-[ RECORD 1 ]----+------------------------------
pid              | 62871
usesysid         | 10
usename          | hs
application_name | book_sample
client_addr      | ::1
client_hostname  |
client_port      | 59235
backend_start    | 2013-03-29 14:53:52.352741+01
state            | streaming
sent_location    | 0/30001E8
write_location   | 0/30001E8
flush_location   | 0/30001E8
replay_location  | 0/30001E8
sync_priority    | 1
sync_state       | sync

This system view will show exactly one line per slave attached to your master system. The \x command will make the output more readable for you. If you don't use \x to transpose the output, the lines will be so long that it will be pretty hard for you to comprehend the content of this table. In expanded display mode, each column is shown on its own line instead.

You can see that the application_name parameter has been taken from the connect string passed to the master by the slave (book_sample in our example). As the application_name parameter matches the master's synchronous_standby_names setting, we have convinced the system to replicate synchronously. No transaction can be lost anymore, because every transaction will end up on two servers instantly. The sync_state setting will tell you precisely how data is moving from the master to the slave.

You can also use a list of application names, or simply a * sign, in synchronous_standby_names to indicate that the first slave has to be synchronous.

Understanding performance issues

At various points in this book, we have already pointed out that synchronous replication is an expensive thing to do. Remember that we have to wait for a remote server and not just the local system. The network between those two nodes is definitely not something that is going to speed things up. Writing to more than one node is always more expensive than writing to only one node. Therefore, we definitely have to keep an eye on speed, otherwise we might face some pretty nasty surprises.

Consider what you have learned about the CAP theorem earlier in this book. Synchronous replication sits exactly at the point where physical limitations have a serious impact on performance.

The main question you really have to ask yourself is: do I really want to replicate all transactions synchronously? In many cases, you don't. To prove our point, let's imagine a typical scenario: a bank wants to store accounting-related data as well as some logging data. We definitely don't want to lose a couple of million dollars just because a database node goes down. This kind of data might be worth the effort of replicating synchronously. The logging data is quite different, however. It might be far too expensive to cope with the overhead of synchronous replication. So, we want to replicate this data in an asynchronous way to ensure maximum throughput.

How can we configure a system to handle important as well as not-so-important transactions nicely? The answer lies in a variable you have already seen earlier in the book: the synchronous_commit variable.
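You can inspect the current value of this variable at any time. A quick sketch (the value shown is simply the default):

test=# SHOW synchronous_commit;
 synchronous_commit
--------------------
 on
(1 row)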
Setting synchronous_commit to on

In the default PostgreSQL configuration, synchronous_commit is set to on. In this case, commits will wait until a reply from the current synchronous standby indicates that it has received the commit record of the transaction and has flushed it to the disk. In other words, both servers must report that the data has been written safely. Unless both servers crash at the same time, your data will survive potential problems (crashing of both servers should be pretty unlikely).

Setting synchronous_commit to remote_write

Flushing to both disks can be highly expensive. In many cases, it is enough to know that the remote server has accepted the XLOG and passed it on to the operating system without flushing things to the disk on the slave. As we can be pretty certain that we don't lose two servers at the very same time, this is a reasonable compromise between performance and consistency with respect to data protection.

Setting synchronous_commit to off

The idea is to delay WAL writing to reduce disk flushes. This can be used if performance is more important than durability. In the case of replication, it means that we are not replicating in a fully synchronous way. Keep in mind that this can have a serious impact on your application. Imagine a transaction that commits on the master while you want to query that data instantly on one of the slaves. There would still be a tiny window during which you can actually get outdated data.

Setting synchronous_commit to local

The local value will flush locally but not wait for the replica to respond. In other words, it will turn your transaction into an asynchronous one. Setting synchronous_commit to local can also cause a small time delay window, during which the slave can actually return slightly outdated data. This phenomenon has to be kept in mind when you decide to offload reads to the slave.

In short, if you want to replicate synchronously, you have to ensure that synchronous_commit is set to either on or remote_write.

Changing durability settings on the fly

Changing the way data is replicated on the fly is easy and highly important to many applications, as it allows the user to control durability on the fly. Not all data has been created equal, and therefore, more important data should be written in a safer way than data that is not as important (such as log files). We have already set up a full synchronous replication infrastructure by adjusting synchronous_standby_names (master) along with the application_name (slave) parameter. The good thing about PostgreSQL is that you can change your durability requirements on the fly:

test=# BEGIN;
BEGIN
test=# CREATE TABLE t_test (id int4);
CREATE TABLE
test=# SET synchronous_commit TO local;
SET
test=# \x
Expanded display is on.
test=# SELECT * FROM pg_stat_replication;
-[ RECORD 1 ]----+------------------------------
pid              | 62871
usesysid         | 10
usename          | hs
application_name | book_sample
client_addr      | ::1
client_hostname  |
client_port      | 59235
backend_start    | 2013-03-29 14:53:52.352741+01
state            | streaming
sent_location    | 0/3026258
write_location   | 0/3026258
flush_location   | 0/3026258
replay_location  | 0/3026258
sync_priority    | 1
sync_state       | sync

test=# COMMIT;
COMMIT

In this example, we changed the durability requirements on the fly. This will make sure that this very specific transaction will not wait for the slave to flush to the disk. Note that, as you can see, sync_state has not changed.
Don't be fooled by what you see here; you can completely rely on the behavior outlined in this section. PostgreSQL is perfectly able to handle each transaction separately. This is a unique feature of this wonderful open source database; it puts you in control and lets you decide which kind of durability requirements you want.

Understanding the practical implications and performance

We have already talked about practical implications as well as performance implications. But what good is a theoretical example? Let's do a simple benchmark and see how replication behaves. We are performing this kind of testing to show you that the various levels of durability are not just a minor topic; they are the key to performance.

Let's assume a simple test: in the following scenario, we have connected two equally powerful machines (3 GHz, 8 GB RAM) over a 1 Gbit network. The two machines are next to each other. To demonstrate the impact of synchronous replication, we have left shared_buffers and all other memory parameters at their defaults, and only changed fsync to off to make sure that the effect of disk wait is reduced to practically zero.

The test is simple: we use a one-column table with only one integer field and 10,000 single transactions consisting of just one INSERT statement:

INSERT INTO t_test VALUES (1);

We can try this with full, synchronous replication (synchronous_commit = on):

real  0m6.043s
user  0m0.131s
sys   0m0.169s

As you can see, the test has taken around 6 seconds to complete. The test can now be repeated with synchronous_commit = local (which effectively means asynchronous replication):

real  0m0.909s
user  0m0.101s
sys   0m0.142s

In this simple test, you can see that the speed has gone up by as much as six times. Of course, this is a brute-force example, which does not fully reflect reality (this was not the goal anyway). What is important to understand, however, is that synchronous versus asynchronous replication is not a matter of a couple of percentage points or so. This should stress our point even more: replicate synchronously only if it is really needed, and if you really have to use synchronous replication, make sure that you limit the number of synchronous transactions to an absolute minimum.

Also, please make sure that your network is up to the job. Replicating data synchronously over network connections with high latency will kill your system performance like nothing else. Keep in mind that throwing expensive hardware at the problem will not solve the problem. Doubling the clock speed of your servers will do practically nothing for you, because the real limitation will always come from network latency.

The performance penalty with just one connection is definitely a lot larger than that with many connections. Remember that things can be done in parallel, and network latency does not make us more I/O or CPU bound, so we can reduce the impact of slow transactions by firing up more concurrent work.

When synchronous replication is used, how can you still make sure that performance does not suffer too much? Basically, there are a couple of important suggestions that have proven to be helpful:

- Use longer transactions: Remember that the system must ensure on commit that the data is available on two servers. We don't care what happens in the middle of a transaction, because anybody outside our transaction cannot see the data anyway. A longer transaction will dramatically reduce network communication.
- Run stuff concurrently: If you have more than one transaction going on at the same time, it will be beneficial to performance. The reason for this is that the remote server will return the position inside the XLOG that is considered to be processed safely (flushed or accepted). This method ensures that many transactions can be confirmed at the same time.

Redundancy and stopping replication

When talking about synchronous replication, there is one phenomenon that must not be left out. Imagine we have a two-node cluster replicating synchronously. What happens if the slave dies? The answer is that the master cannot distinguish between a slow and a dead slave easily, so it will start waiting for the slave to come back.

At first glance, this looks like nonsense, but if you think about it more deeply, you will figure out that synchronous replication is actually the only correct thing to do. If somebody decides to go for synchronous replication, the data in the system must be worth something, so it must not be at risk. It is better to refuse data and cry out to the end user than to risk data and silently ignore the requirements of high durability. If you decide to use synchronous replication, you must consider using at least three nodes in your cluster. Otherwise, it will be very risky, and you cannot afford to lose a single node without facing significant downtime or risking data loss.

Summary

Here, we outlined the basic concept of synchronous replication, and showed how data can be replicated synchronously. We also showed how durability requirements can be changed on the fly by modifying PostgreSQL runtime parameters. PostgreSQL gives users the choice of how a transaction should be replicated, and which level of durability is necessary for a certain transaction.

Resources for Article:

Further resources on this subject:

- Introducing PostgreSQL 9 [article]
- PostgreSQL – New Features [article]
- Installing PostgreSQL [article]