In this chapter, we will cover the following recipes:
- Retrieving all file names from hierarchical directories using Java
- Retrieving all file names from hierarchical directories using Apache Commons IO
- Reading contents from text files all at once using Java 8
- Reading contents from text files all at once using Apache Commons IO
- Extracting PDF text using Apache Tika
- Cleaning ASCII text files using Regular Expressions
- Parsing Comma Separated Value files using Univocity
- Parsing Tab Separated Value files using Univocity
- Parsing XML files using JDOM
- Writing JSON files using JSON.simple
- Reading JSON files using JSON.simple
- Extracting web data from a URL using JSoup
- Extracting web data from a website using Selenium Webdriver
- Reading table data from MySQL database
Every data scientist needs to deal with data that is stored on disk in several formats, such as ASCII text, PDF, XML, JSON, and so on. Also, data can be stored in database tables. The first and foremost task for a data scientist, before doing any analysis, is to obtain data from these sources and formats and apply data-cleaning techniques to get rid of the noise present in it. In this chapter, we will see recipes that accomplish this important task.
We will be using external Java libraries (Java archive files, or simply JAR files), not only for this chapter but throughout the book. These libraries are created by developers or organizations to make everybody's life easier. We will be using the Eclipse IDE for code development and execution throughout the book, preferably on the Windows platform. In many recipes, I will instruct you to include external JAR files in your project; here is what you need to do in those cases.
You can add a JAR file to a project in Eclipse by right-clicking on the project and navigating to Build Path | Configure Build Path. Under the Libraries tab, click on Add External JARs... and select the external JAR file(s) that you are going to use for the particular project:

This recipe (and the following) is for the data scientist who wants to retrieve the file paths and names (for some future analysis) from a complex directory structure that contains numerous directories and files inside a root directory.
In order to perform this recipe, we will require the following:
- Create directories within directories (as many layers as you want).
- Create text files in some of these directories, while leaving some directories empty for more excitement.
We are going to create a static method that takes a File argument, which is eventually the root directory or the directory to start with. The method will return the set of files that are found within this root directory (and in all subsequent directories):

```java
public static Set<File> listFiles(File rootDir) {
```
First, create a HashSet that will contain the file information:

```java
Set<File> fileSet = new HashSet<File>();
```
Once the HashSet is created, we need to check whether the root directory or the directories within it are null. In such cases, we do not need to proceed further:

```java
if (rootDir == null || rootDir.listFiles() == null) {
    return fileSet;
}
```
We consider one directory (or file) from the root directory at a time and check whether we are dealing with a file or a directory. In the case of a file, we add it to our HashSet. In the case of a directory, we recursively call this method again, sending the path and name of that directory:

```java
for (File fileOrDir : rootDir.listFiles()) {
    if (fileOrDir.isFile()) {
        fileSet.add(fileOrDir);
    } else {
        fileSet.addAll(listFiles(fileOrDir));
    }
}
```
Finally, we return the HashSet to the caller of this method:

```java
return fileSet;
}
```
The complete method, with the class and the driver method to run it, is as follows:
```java
import java.io.File;
import java.util.HashSet;
import java.util.Set;

public class TestRecursiveDirectoryTraversal {
    public static void main(String[] args) {
        System.out.println(listFiles(new File("Path for root directory")).size());
    }

    public static Set<File> listFiles(File rootDir) {
        Set<File> fileSet = new HashSet<File>();
        if (rootDir == null || rootDir.listFiles() == null) {
            return fileSet;
        }
        for (File fileOrDir : rootDir.listFiles()) {
            if (fileOrDir.isFile()) {
                fileSet.add(fileOrDir);
            } else {
                fileSet.addAll(listFiles(fileOrDir));
            }
        }
        return fileSet;
    }
}
```
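If you are on Java 7 or later, the same traversal can also be delegated to NIO.2 instead of hand-written recursion. The sketch below is ours (class and file names are illustrative, not from the recipe); it builds a small throwaway directory tree so it runs anywhere:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Set;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class WalkFiles {

    // Same result as the recursive listFiles(): every regular file under
    // rootDir, at any depth, collected into a set.
    public static Set<Path> listFiles(Path rootDir) throws IOException {
        try (Stream<Path> paths = Files.walk(rootDir)) {
            return paths.filter(Files::isRegularFile)
                        .collect(Collectors.toSet());
        }
    }

    public static void main(String[] args) throws IOException {
        // Build a throwaway directory tree so the example is self-contained.
        Path root = Files.createTempDirectory("walkdemo");
        Files.createDirectories(root.resolve("a").resolve("b"));
        Files.createFile(root.resolve("a").resolve("one.txt"));
        Files.createFile(root.resolve("a").resolve("b").resolve("two.txt"));
        System.out.println(listFiles(root).size());
    }
}
```

Files.walk streams every entry under the root, so the recursion is handled by the library rather than by our own method calls.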
Listing file names in hierarchical directories can be done recursively, as demonstrated in the previous recipe. However, it can be done in an easier and more convenient way, with less code, using the Apache Commons IO library.
In order to perform this recipe, we will require the following:
In this recipe, we will be using a Java library from Apache named Commons IO. Throughout the book, we will be using version 2.5. Download the JAR file of your choice from here: https://commons.apache.org/proper/commons-io/download_io.cgi
Include the JAR file in your project as an external JAR in Eclipse.
Create a method that takes the root directory in the hierarchy of directories as input:
public void listFiles(String rootDir){
Create a file object with the root directory name:
File dir = new File(rootDir);
The FileUtils class of the Apache Commons IO library contains a method named listFiles(). Use this method to retrieve all the file names, and put the names in a list variable with the <File> generic type. Use TrueFileFilter.INSTANCE to match all directories:

```java
List<File> files = (List<File>) FileUtils.listFiles(dir,
        TrueFileFilter.INSTANCE, TrueFileFilter.INSTANCE);
```
The file names can be displayed on the standard output as follows. As we now have the names in a list, we have a means to process the data in these files further:
```java
for (File file : files) {
    System.out.println("file: " + file.getAbsolutePath());
}
```
Close the method:
}
The method in this recipe, the class for it, and the driver method to run it are as follows:
```java
import java.io.File;
import java.util.List;
import org.apache.commons.io.FileUtils;
import org.apache.commons.io.filefilter.TrueFileFilter;

public class FileListing {
    public static void main(String[] args) {
        FileListing fileListing = new FileListing();
        fileListing.listFiles("Path for the root directory here");
    }

    public void listFiles(String rootDir) {
        File dir = new File(rootDir);
        List<File> files = (List<File>) FileUtils.listFiles(dir,
                TrueFileFilter.INSTANCE, TrueFileFilter.INSTANCE);
        for (File file : files) {
            System.out.println("file: " + file.getAbsolutePath());
        }
    }
}
```
Tip
If you want to list files with particular extensions only, the Apache Commons IO library has an overload of listFiles() for that, too. However, the parameters are different: that method takes three parameters, namely a File directory, a String[] of extensions, and a boolean recursive flag. Another interesting method in this library is listFilesAndDirs(File directory, IOFileFilter fileFilter, IOFileFilter dirFilter), for those interested in listing not only files but also directories. Detailed information can be found at https://commons.apache.org/proper/commons-io/javadocs/.
On many occasions, data scientists have their data in text format. There are many different ways to read text file contents, and they each have their own pros and cons: some of them consume time and memory, while some are fast and do not require much computer memory; some read the text contents all at once, while some read text files line by line. The choice depends on the task at hand and a data scientist's approach to that task.
This recipe demonstrates how to read text file contents all at once using Java 8.
First, create a String object to hold the path and name of the text file you are going to read:

```java
String file = "C:/dummy.txt";
```
Using the get() method of the Paths class, we get to the path of the file we are trying to read. The parameter of this method is the String object that points to the name of the file. The output of this method is fed to another method named lines(), which is in the Files class. This method reads all lines from a file as a Stream, and therefore the output of this method is assigned to a Stream variable. Because our dummy.txt file contains string data, the generic type of the Stream variable is set to String.
The entire process of reading needs a try...catch block to guard against attempts such as reading a file that does not exist or is damaged, and so on.
The following code segment displays the contents of our dummy.txt file. The stream variable contains the lines of the text file, and therefore the forEach() method of the variable is used to display the content of each line:
```java
try (Stream<String> stream = Files.lines(Paths.get(file))) {
    stream.forEach(System.out::println);
} catch (IOException e) {
    System.out.println("Error reading " + file);
}
```
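If you would rather have the lines in a collection than in a Stream, Files.readAllLines() reads the whole file in one call. Below is a small self-contained sketch of ours; the temporary file stands in for dummy.txt, and the class name is illustrative:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.List;

public class ReadAllLinesDemo {
    public static void main(String[] args) throws IOException {
        // Write a stand-in for dummy.txt so the example runs anywhere.
        Path file = Files.createTempFile("dummy", ".txt");
        Files.write(file, Arrays.asList("first line", "second line"),
                StandardCharsets.UTF_8);

        // readAllLines loads the whole file eagerly into a List<String>.
        List<String> lines = Files.readAllLines(file, StandardCharsets.UTF_8);
        lines.forEach(System.out::println);
    }
}
```

Unlike Files.lines(), this loads everything into memory at once, which is fine for small files but worth remembering for large ones.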
The same functionality described in the previous recipe can be achieved using Apache Commons IO API.
In order to perform this recipe, we will require the following:
In this recipe, we will be using a Java library from Apache named Commons IO. Download the version of your choice from here: https://commons.apache.org/proper/commons-io/download_io.cgi
Include the JAR file in your project as an external JAR in Eclipse.
Say you are trying to read the contents of a file named dummy.txt located on your C:/ drive. First, you need to create a File object to access this file, as follows:

```java
File file = new File("C:/dummy.txt");
```
Next, create a String object to hold the text contents of your file. The method we will be using from the Apache Commons IO library is called readFileToString, which is a member of the class named FileUtils. There are many different ways you can call this method, but for now, just know that we need to send two arguments to it: first, the file object, which is the file we will be reading, and then the encoding of the file, which in this example is UTF-8:

```java
String text = FileUtils.readFileToString(file, "UTF-8");
```
The preceding two lines are enough to read the text file content and put it in a variable. However, you are not only a data scientist, you are a smart data scientist. Therefore, you need to add a few lines around this code to handle the exceptions thrown by Java methods when you try to read a file that does not exist, is corrupted, and so on. The code can be completed by introducing a try...catch block as follows:

```java
File file = new File("C:/dummy.txt");
try {
    String text = FileUtils.readFileToString(file, "UTF-8");
} catch (IOException e) {
    System.out.println("Error reading " + file.getAbsolutePath());
}
```
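For comparison, plain Java can mimic readFileToString in one line with Files.readAllBytes. The sketch below is ours (the temporary file stands in for C:/dummy.txt, and the class name is illustrative):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class ReadToStringDemo {
    public static void main(String[] args) throws IOException {
        // A stand-in for C:/dummy.txt so the example is self-contained.
        Path file = Files.createTempFile("dummy", ".txt");
        Files.write(file, "hello from a file".getBytes(StandardCharsets.UTF_8));

        // One-call equivalent of FileUtils.readFileToString(file, "UTF-8"):
        // read every byte, then decode with an explicit charset.
        String text = new String(Files.readAllBytes(file), StandardCharsets.UTF_8);
        System.out.println(text);
    }
}
```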
One of the most difficult file types for parsing and extracting data is PDF. Some PDFs cannot even be parsed because they are password-protected, while others contain scanned text and images. This dynamic file type, therefore, sometimes becomes a data scientist's worst nightmare. This recipe demonstrates how to extract text from PDF files using Apache Tika, given that the file is not encrypted or password-protected and contains text that is not scanned.
In order to perform this recipe we will require the following:
Download Apache Tika 1.10 JAR file from http://archive.apache.org/dist/tika/tika-app-1.10.jar, and include it in your Eclipse project as an external Java library.
Have any unlocked PDF file saved as testPDF.pdf on your C: drive.
Create a method named convertPdf(String), which takes the name of the PDF file to be converted as its parameter:

```java
public void convertPdf(String fileName) {
```
Create an input stream that will contain the PDF data as a stream of bytes:
InputStream stream = null;
Create a try block as follows:

```java
try {
```
Assign the file to the stream you have just created:

```java
stream = new FileInputStream(fileName);
```
There are many different parsers offered in the Apache Tika package. If you do not know which parser you are going to use, or if you have not only PDFs but also other types of documents to convert, you should use an AutoDetectParser as follows:

```java
AutoDetectParser parser = new AutoDetectParser();
```
Create a handler to handle the body content of the file. Note the -1 parameter of the constructor. By default, Apache Tika is limited to handling files with at most 100,000 characters; the -1 value lifts this limitation for the body handler:

```java
BodyContentHandler handler = new BodyContentHandler(-1);
```
Create a metadata object:
Metadata metadata = new Metadata();
Call the parse() method of the parser object with all the objects you just created:

```java
parser.parse(stream, handler, metadata, new ParseContext());
```
Use the toString() method of the handler object to get the body text extracted from the file:

```java
System.out.println(handler.toString());
```
Close the try block, complement it with a catch block and a finally block, and close the method as follows:

```java
} catch (Exception e) {
    e.printStackTrace();
} finally {
    if (stream != null) {
        try {
            stream.close();
        } catch (IOException e) {
            System.out.println("Error closing stream");
        }
    }
}
}
```
The full method, with the driver method, in a class will be as follows. The method you have just created can be called by sending it the path and name of the PDF file you need to convert, which is saved as testPDF.pdf on your C: drive:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import org.apache.tika.metadata.Metadata;
import org.apache.tika.parser.AutoDetectParser;
import org.apache.tika.parser.ParseContext;
import org.apache.tika.sax.BodyContentHandler;

public class TestTika {
    public static void main(String args[]) throws Exception {
        TestTika tika = new TestTika();
        tika.convertPdf("C:/testPDF.pdf");
    }

    public void convertPdf(String fileName) {
        InputStream stream = null;
        try {
            stream = new FileInputStream(fileName);
            AutoDetectParser parser = new AutoDetectParser();
            BodyContentHandler handler = new BodyContentHandler(-1);
            Metadata metadata = new Metadata();
            parser.parse(stream, handler, metadata, new ParseContext());
            System.out.println(handler.toString());
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (stream != null) {
                try {
                    stream.close();
                } catch (IOException e) {
                    System.out.println("Error closing stream");
                }
            }
        }
    }
}
```
ASCII text files can contain unnecessary characters that are introduced during a conversion process, such as PDF-to-text or HTML-to-text conversion. These characters are often seen as noise because they are one of the major roadblocks for data processing. This recipe removes several kinds of noise from ASCII text data using regular expressions.
Create a method named cleanText(String) that takes the text to be cleaned as a String:

```java
public String cleanText(String text) {
```
Add the following lines to your method, return the cleaned text, and close the method. The first line strips off non-ASCII characters. The next line replaces continuous white spaces with a single white space. The third line erases all the ASCII control characters. The fourth line strips off the ASCII non-printable characters. The last line removes non-printable characters from Unicode:

```java
text = text.replaceAll("[^\\p{ASCII}]", "");
text = text.replaceAll("\\s+", " ");
text = text.replaceAll("\\p{Cntrl}", "");
text = text.replaceAll("[^\\p{Print}]", "");
text = text.replaceAll("\\p{C}", "");
return text;
}
```
The full method with the driver method in a class will look as follows:
```java
public class CleaningData {
    public static void main(String[] args) throws Exception {
        CleaningData clean = new CleaningData();
        String text = "Your text here you have got from some file";
        String cleanedText = clean.cleanText(text);
        // Process cleanedText
    }

    public String cleanText(String text) {
        text = text.replaceAll("[^\\p{ASCII}]", "");
        text = text.replaceAll("\\s+", " ");
        text = text.replaceAll("\\p{Cntrl}", "");
        text = text.replaceAll("[^\\p{Print}]", "");
        text = text.replaceAll("\\p{C}", "");
        return text;
    }
}
```
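To see the pipeline in action, the method can be exercised on a short sample string. The input string below is ours: \u00e9 is a non-ASCII character and \u0007 is an ASCII control character, so both should disappear and the run of spaces should collapse:

```java
public class CleanTextDemo {
    // The recipe's regex pipeline, applied to a sample string.
    public static String cleanText(String text) {
        text = text.replaceAll("[^\\p{ASCII}]", "");   // drop non-ASCII characters
        text = text.replaceAll("\\s+", " ");           // collapse runs of whitespace
        text = text.replaceAll("\\p{Cntrl}", "");      // drop ASCII control characters
        text = text.replaceAll("[^\\p{Print}]", "");   // drop non-printable characters
        text = text.replaceAll("\\p{C}", "");          // drop invisible Unicode characters
        return text;
    }

    public static void main(String[] args) {
        // "Caf\u00e9" contains a non-ASCII letter; \u0007 is the bell control code.
        System.out.println(cleanText("Caf\u00e9   noise\u0007 here"));
    }
}
```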
Another very common file type that data scientists handle is the Comma Separated Value (CSV) file, where data is separated by commas. CSV files are very popular because they can be read by most spreadsheet applications, such as MS Excel.
In this recipe, we will see how we can parse CSV files and handle data points retrieved from them.
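To see why a dedicated parser is worth the extra dependency, consider what a naive split on commas does to one record of the sample data used in this recipe (the class name below is ours):

```java
public class NaiveSplitPitfall {
    public static void main(String[] args) {
        // One record from the sample data: the quoted field contains commas.
        String row = "1997,Ford,E350,\"ac, abs, moon\",3000.00";

        // A naive split on ',' ignores the quoting and breaks the record apart:
        // the quoted description field is shattered into several pieces.
        String[] fields = row.split(",");
        System.out.println(fields.length); // 5 columns expected, but we get more
    }
}
```

A proper CSV parser such as Univocity respects the quoting rules and returns the expected five fields.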
In order to perform this recipe, we will require the following:
Download the Univocity JAR file from http://oss.sonatype.org/content/repositories/releases/com/univocity/univocity-parsers/2.2.1/univocity-parsers-2.2.1.jar. Include the JAR file in your project in Eclipse as an external library.
Create a CSV file from the following data using Notepad. The extension of the file should be .csv. Save the file as C:/testCSV.csv:

```
Year,Make,Model,Description,Price
1997,Ford,E350,"ac, abs, moon",3000.00
1999,Chevy,"Venture ""Extended Edition""","",4900.00
1996,Jeep,Grand Cherokee,"MUST SELL! air, moon roof, loaded",4799.00
1999,Chevy,"Venture ""Extended Edition, Very Large""",,5000.00
,,"Venture ""Extended Edition""","",4900.00
```
Create a method named parseCsv(String) that takes the name of the file as a String argument:

```java
public void parseCsv(String fileName) {
```
Then create a settings object. This object provides many configuration settings options:
CsvParserSettings parserSettings = new CsvParserSettings();
You can configure the parser to automatically detect the line separator sequence used in the input:

```java
parserSettings.setLineSeparatorDetectionEnabled(true);
```
Create a RowListProcessor that stores each parsed row in a list:

```java
RowListProcessor rowProcessor = new RowListProcessor();
```
You can configure the parser to use a RowProcessor to process the values of each parsed row. You will find more RowProcessors in the com.univocity.parsers.common.processor package, but you can also create your own:

```java
parserSettings.setRowProcessor(rowProcessor);
```
If the CSV file that you are going to parse contains headers, you can consider the first parsed row as the headers of each column in the file:
parserSettings.setHeaderExtractionEnabled(true);
Now, create a parser instance with the given settings:

```java
CsvParser parser = new CsvParser(parserSettings);
```
The parse() method will parse the file and delegate each parsed row to the RowProcessor you defined:

```java
parser.parse(new File(fileName));
```
If you have parsed the headers, they can be retrieved as follows:

```java
String[] headers = rowProcessor.getHeaders();
```
You can then easily process this String array to get the header values.
On the other hand, the row values can be found in a list. The list can be printed using a for loop as follows:
```java
List<String[]> rows = rowProcessor.getRows();
for (int i = 0; i < rows.size(); i++) {
    System.out.println(Arrays.asList(rows.get(i)));
}
```
Finally, close the method:
}
The entire method can be written as follows:
```java
import java.io.File;
import java.util.Arrays;
import java.util.List;
import com.univocity.parsers.common.processor.RowListProcessor;
import com.univocity.parsers.csv.CsvParser;
import com.univocity.parsers.csv.CsvParserSettings;

public class TestUnivocity {
    public void parseCsv(String fileName) {
        CsvParserSettings parserSettings = new CsvParserSettings();
        parserSettings.setLineSeparatorDetectionEnabled(true);
        RowListProcessor rowProcessor = new RowListProcessor();
        parserSettings.setRowProcessor(rowProcessor);
        parserSettings.setHeaderExtractionEnabled(true);
        CsvParser parser = new CsvParser(parserSettings);
        parser.parse(new File(fileName));
        String[] headers = rowProcessor.getHeaders();
        List<String[]> rows = rowProcessor.getRows();
        for (int i = 0; i < rows.size(); i++) {
            System.out.println(Arrays.asList(rows.get(i)));
        }
    }

    public static void main(String[] args) {
        TestUnivocity test = new TestUnivocity();
        test.parseCsv("C:/testCSV.csv");
    }
}
```
Note
There are many CSV parsers that are written in Java. However, in a comparison, Univocity is found to be the fastest one. See the detailed comparison results here: https://github.com/uniVocity/csv-parsers-comparison
Unlike CSV files, Tab Separated Value (TSV) files contain data that is separated by tab delimiters. This recipe shows you how to retrieve data points from TSV files.
In order to perform this recipe, we will require the following:
Download the Univocity JAR file from http://oss.sonatype.org/content/repositories/releases/com/univocity/univocity-parsers/2.2.1/univocity-parsers-2.2.1.jar. Include the JAR file in your project in Eclipse as an external library.
Create a TSV file from the following data using Notepad, with the values on each line separated by single tab characters. The extension of the file should be .tsv. Save the file as C:/testTSV.tsv:

```
Year	Make	Model	Description	Price
1997	Ford	E350	ac, abs, moon	3000.00
1999	Chevy	Venture "Extended Edition"		4900.00
1996	Jeep	Grand Cherokee	MUST SELL!\nair, moon roof, loaded	4799.00
1999	Chevy	Venture "Extended Edition, Very Large"		5000.00
			Venture "Extended Edition"		4900.00
```
Create a method named parseTsv(String) that takes the name of the file as a String argument:

```java
public void parseTsv(String fileName) {
```
Create a settings object for the TSV parser:

```java
TsvParserSettings settings = new TsvParserSettings();
```

The line separator for the TSV file in this recipe is the newline character, \n. To set this character as the line separator, modify the settings:

```java
settings.getFormat().setLineSeparator("\n");
```
Using these settings, create a TSV parser:
TsvParser parser = new TsvParser(settings);
Parse all rows of the TSV file at once as follows:
List<String[]> allRows = parser.parseAll(new File(fileName));
Iterate over the list object to print/process the rows as follows:
```java
for (int i = 0; i < allRows.size(); i++) {
    System.out.println(Arrays.asList(allRows.get(i)));
}
```
Finally, close the method:
}
The full method with the driver method in a class will look like the following:
```java
import java.io.File;
import java.util.Arrays;
import java.util.List;
import com.univocity.parsers.tsv.TsvParser;
import com.univocity.parsers.tsv.TsvParserSettings;

public class TestTsv {
    public static void main(String[] args) {
        TestTsv test = new TestTsv();
        test.parseTsv("C:/testTSV.tsv");
    }

    public void parseTsv(String fileName) {
        TsvParserSettings settings = new TsvParserSettings();
        settings.getFormat().setLineSeparator("\n");
        TsvParser parser = new TsvParser(settings);
        List<String[]> allRows = parser.parseAll(new File(fileName));
        for (int i = 0; i < allRows.size(); i++) {
            System.out.println(Arrays.asList(allRows.get(i)));
        }
    }
}
```
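For TSV data that contains no escape sequences at all, a plain String.split on the tab character can be enough; the parser in this recipe earns its keep when fields contain escaped characters such as \n. A minimal sketch of ours (the sample rows are illustrative):

```java
import java.util.Arrays;

public class SimpleTsvSplit {
    public static void main(String[] args) {
        // Two rows of tab-separated data with no escape sequences.
        String[] rows = { "Year\tMake\tModel", "1997\tFord\tE350" };
        for (String row : rows) {
            // The -1 limit keeps trailing empty fields, which split() drops by default.
            System.out.println(Arrays.asList(row.split("\t", -1)));
        }
    }
}
```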
Unlike text data, which is often unstructured, organizing data in XML files is a popular method to prepare, convey, and exploit data in a structured way. There are several ways to parse contents of XML files. In this book, we will limit our recipes to an external Java library for XML parsing named JDOM.
In order to perform this recipe, we will require the following:
Download version 2.06 of the JAR file for JDOM from http://www.jdom.org/downloads/index.html.
In Eclipse, create a project and include the JAR file as an external JAR.
Open up Notepad and create a new file named dummyxml with the .xml extension. The content of the file will be as simple as the following:
```xml
<?xml version="1.0"?>
<book>
  <author>
    <firstname>Alice</firstname>
    <lastname>Peterson</lastname>
  </author>
  <author>
    <firstname>John</firstname>
    <lastname>Doe</lastname>
  </author>
</book>
```
Create a SAXBuilder object named builder:

```java
SAXBuilder builder = new SAXBuilder();
```
Now you need to create a File object pointing to the XML file that you will be parsing. If you have saved your XML file on the C:/ drive, type in the following code segment:

```java
File file = new File("c:/dummyxml.xml");
```
In a try block, you are going to create a Document object, which will be your XML file:

```java
try {
    Document document = (Document) builder.build(file);
```
When you are parsing XML, as it is tree-structured, you need to know the root element of the file to start traversing the tree (in other words, to start parsing systematically). So, you create a rootNode object of type Element to hold the root element, which in our example is the <book> node:

```java
Element rootNode = document.getRootElement();
```
Then, you will retrieve all the children of your root node that have the name author. The children come as a list, and therefore you will be using a list variable to hold them:

```java
List list = rootNode.getChildren("author");
```
Next, you will iterate over this list using a for loop to get the entries in it. Each element is kept in an Element-type variable named node. This variable has a method named getChildText(), which takes the name of a child as its parameter; the method returns the textual content of the named child element, or null if there is no such child. This method is convenient because calling getChild().getText() can throw a NullPointerException:

```java
for (int i = 0; i < list.size(); i++) {
    Element node = (Element) list.get(i);
    System.out.println("First Name : " + node.getChildText("firstname"));
    System.out.println("Last Name : " + node.getChildText("lastname"));
}
```
Finally, close the try block, and put the following catch blocks in place to handle exceptions:

```java
} catch (IOException io) {
    System.out.println(io.getMessage());
} catch (JDOMException jdomex) {
    System.out.println(jdomex.getMessage());
}
```
The complete code for the recipe is as follows:
```java
import java.io.File;
import java.io.IOException;
import java.util.List;
import org.jdom2.Document;
import org.jdom2.Element;
import org.jdom2.JDOMException;
import org.jdom2.input.SAXBuilder;

public class TestJdom {
    public static void main(String[] args) {
        TestJdom test = new TestJdom();
        test.parseXml("C:/dummyxml.xml");
    }

    public void parseXml(String fileName) {
        SAXBuilder builder = new SAXBuilder();
        File file = new File(fileName);
        try {
            Document document = (Document) builder.build(file);
            Element rootNode = document.getRootElement();
            List list = rootNode.getChildren("author");
            for (int i = 0; i < list.size(); i++) {
                Element node = (Element) list.get(i);
                System.out.println("First Name : " + node.getChildText("firstname"));
                System.out.println("Last Name : " + node.getChildText("lastname"));
            }
        } catch (IOException io) {
            System.out.println(io.getMessage());
        } catch (JDOMException jdomex) {
            System.out.println(jdomex.getMessage());
        }
    }
}
```
Note
There are many different types of XML parsers, and each has its own benefits:
- DOM Parser: These parsers load the complete content of the document into memory and create its complete hierarchical tree in memory.
- SAX Parser: These parsers do not load the complete document into memory and parse the document on event-based triggers.
- JDOM Parser: JDOM parsers parse the document in a similar fashion to DOM parsers, but in a more convenient way.
- StAX Parser: These parsers handle the document in a similar fashion to SAX parsers, but in a more efficient way.
- XPath Parser: These parsers parse the document based on expressions and are used extensively with XSLT.
- DOM4J Parser: This is a Java library to parse XML, XPath, and XSLT using the Java Collections Framework, providing support for DOM, SAX, and JAXP.
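As a point of comparison with JDOM, the JDK's built-in DOM parser can read the same document without any external JAR. The sketch below is ours; the XML is inlined as a string so the example is self-contained:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class DomSketch {
    public static void main(String[] args) throws Exception {
        // The same document the recipe saves as dummyxml.xml, inlined here.
        String xml = "<?xml version=\"1.0\"?><book>"
                + "<author><firstname>Alice</firstname><lastname>Peterson</lastname></author>"
                + "<author><firstname>John</firstname><lastname>Doe</lastname></author>"
                + "</book>";

        // The whole tree is loaded into memory, as described for DOM parsers above.
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));

        NodeList names = doc.getElementsByTagName("firstname");
        for (int i = 0; i < names.getLength(); i++) {
            System.out.println(names.item(i).getTextContent());
        }
    }
}
```

JDOM remains more convenient for child-by-name access, but for simple lookups the standard library is often enough.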
Just like XML, JSON is also a lightweight, human-readable data interchange format. JSON stands for JavaScript Object Notation, and it is becoming a popular format generated and parsed by modern web applications. In this recipe, you will see how you can write JSON files.
In order to perform this recipe, we will require the following:
Download json-simple-1.1.1.jar from https://code.google.com/archive/p/json-simple/downloads and include the JAR file as an external library in your Eclipse project.
Create a method named writeJson(String outFileName) that takes the name of the JSON output file we will generate with the JSON information in this recipe.

Create a JSON object and use the object's put() method to populate a few fields. For instance, say your fields are books and their authors. The following code creates a JSON object and populates it with a book name from the Harry Potter series and its author's name:

```java
JSONObject obj = new JSONObject();
obj.put("book", "Harry Potter and the Philosopher's Stone");
obj.put("author", "J. K. Rowling");
```
Next, say that we have three reviewer comments for this book. They can be put together in a JSON array. First, we use the add() method of the array object to add the reviews. When all the reviews have been added to the array, we put the array into the JSON object we created in the previous step:

```java
JSONArray list = new JSONArray();
list.add("There are characters in this book that will remind us of all the people we have met. Everybody knows or knew a spoilt, overweight boy like Dudley or a bossy and interfering (yet kind-hearted) girl like Hermione");
list.add("Hogwarts is a truly magical place, not only in the most obvious way but also in all the detail that the author has gone to describe it so vibrantly.");
list.add("Parents need to know that this thrill-a-minute story, the first in the Harry Potter series, respects kids' intelligence and motivates them to tackle its greater length and complexity, play imaginative games, and try to solve its logic puzzles. ");
obj.put("messages", list);
```
We will write the information in the JSON object to an output file, because this file will be used to demonstrate how we can read/parse a JSON file. The following try...catch block writes the information to the JSON file:

```java
try {
    FileWriter file = new FileWriter(outFileName);
    file.write(obj.toJSONString());
    file.flush();
    file.close();
} catch (IOException e) {
    //your message for exception goes here.
}
```
The content of the JSON object can also be displayed on the standard output as follows:
System.out.print(obj);
Finally, close the method:
}
The entire class, the method described in this recipe, and the driver method to call the method with an output JSON file name are as follows:
```java
import java.io.FileWriter;
import java.io.IOException;
import org.json.simple.JSONArray;
import org.json.simple.JSONObject;

public class JsonWriting {
    public static void main(String[] args) {
        JsonWriting jsonWriting = new JsonWriting();
        jsonWriting.writeJson("C:/testJSON.json");
    }

    public void writeJson(String outFileName) {
        JSONObject obj = new JSONObject();
        obj.put("book", "Harry Potter and the Philosopher's Stone");
        obj.put("author", "J. K. Rowling");

        JSONArray list = new JSONArray();
        list.add("There are characters in this book that will remind us of all the people we have met. Everybody knows or knew a spoilt, overweight boy like Dudley or a bossy and interfering (yet kind-hearted) girl like Hermione");
        list.add("Hogwarts is a truly magical place, not only in the most obvious way but also in all the detail that the author has gone to describe it so vibrantly.");
        list.add("Parents need to know that this thrill-a-minute story, the first in the Harry Potter series, respects kids' intelligence and motivates them to tackle its greater length and complexity, play imaginative games, and try to solve its logic puzzles. ");
        obj.put("messages", list);

        try {
            FileWriter file = new FileWriter(outFileName);
            file.write(obj.toJSONString());
            file.flush();
            file.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
        System.out.print(obj);
    }
}
```
The output file will contain data as follows. Note that the output shown here has been modified to increase readability; the actual output is one big, flat piece of text:
```json
{
  "author": "J. K. Rowling",
  "book": "Harry Potter and the Philosopher's Stone",
  "messages": [
    "There are characters in this book that will remind us of all the people we have met. Everybody knows or knew a spoilt, overweight boy like Dudley or a bossy and interfering (yet kind-hearted) girl like Hermione",
    "Hogwarts is a truly magical place, not only in the most obvious way but also in all the detail that the author has gone to describe it so vibrantly.",
    "Parents need to know that this thrill-a-minute story, the first in the Harry Potter series, respects kids' intelligence and motivates them to tackle its greater length and complexity, play imaginative games, and try to solve its logic puzzles."
  ]
}
```
In this recipe, we will see how we can read or parse a JSON file. As our sample input file, we will be using the JSON file we created in the previous recipe.
In order to perform this recipe, we will require the following:
Use the previous recipe to create a JSON file with book, author, and reviewer comments information. This file will be used as an input for parsing/reading in this recipe.
As we will be reading or parsing a JSON file, first, we will be creating a JSON parser:
JSONParser parser = new JSONParser();
Then, in a try block, we retrieve the values of the book and author fields. To do that, we first use the parse() method of the parser to read the input JSON file. The parse() method returns the content of the file as an Object, so we need an Object variable to hold the content. The Object is then assigned to a JSONObject for further processing. Notice the type cast of the Object variable during the assignment:

try {
    Object obj = parser.parse(new FileReader("C:/testJSON.json"));
    JSONObject jsonObject = (JSONObject) obj;
    String name = (String) jsonObject.get("book");
    System.out.println(name);
    String author = (String) jsonObject.get("author");
    System.out.println(author);
The next field to retrieve from the input JSON file is the messages field, which holds the reviews as an array. We iterate over this field as follows:
JSONArray reviews = (JSONArray) jsonObject.get("messages");
Iterator<String> iterator = reviews.iterator();
while (iterator.hasNext()) {
    System.out.println(iterator.next());
}
Finally, we create catch blocks to handle three types of exceptions that may occur during the parsing, and then close the method:
} catch (FileNotFoundException e) {
    //Your exception handling here
} catch (IOException e) {
    //Your exception handling here
} catch (ParseException e) {
    //Your exception handling here
}
}
The entire class, the method described in this recipe, and the driver method to run the method are as follows:
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import java.util.Iterator;
import org.json.simple.JSONArray;
import org.json.simple.JSONObject;
import org.json.simple.parser.JSONParser;
import org.json.simple.parser.ParseException;

public class JsonReading {
    public static void main(String[] args) {
        JsonReading jsonReading = new JsonReading();
        jsonReading.readJson("C:/testJSON.json");
    }

    public void readJson(String inFileName) {
        JSONParser parser = new JSONParser();
        try {
            Object obj = parser.parse(new FileReader(inFileName));
            JSONObject jsonObject = (JSONObject) obj;
            String name = (String) jsonObject.get("book");
            System.out.println(name);
            String author = (String) jsonObject.get("author");
            System.out.println(author);
            JSONArray reviews = (JSONArray) jsonObject.get("messages");
            Iterator<String> iterator = reviews.iterator();
            while (iterator.hasNext()) {
                System.out.println(iterator.next());
            }
        } catch (FileNotFoundException e) {
            //Your exception handling here
        } catch (IOException e) {
            //Your exception handling here
        } catch (ParseException e) {
            //Your exception handling here
        }
    }
}
On successful execution of the code, you will be able to see the contents of the input file on the standard output.
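JSON.simple's parse() hands back a plain Object that you downcast, which is the same pattern as working with untyped collections. As a stdlib-only sketch of that pattern (the class and method names here are illustrative, not part of JSON.simple), a Map can stand in for the parsed JSONObject and a List for the JSONArray:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

public class CastPatternDemo {

    // Stands in for parser.parse(...): returns the whole document as a plain Object
    static Object parseLike() {
        Map<String, Object> doc = new HashMap<>();
        doc.put("book", "Harry Potter and the Philosopher's Stone");
        doc.put("author", "J. K. Rowling");
        List<String> messages = new ArrayList<>();
        messages.add("review one");
        messages.add("review two");
        doc.put("messages", messages);
        return doc;
    }

    @SuppressWarnings("unchecked")
    public static void main(String[] args) {
        Object obj = parseLike();
        // Downcast the Object, just as (JSONObject) obj does in the recipe
        Map<String, Object> jsonObject = (Map<String, Object>) obj;
        String name = (String) jsonObject.get("book");
        System.out.println(name);
        // The array-valued field needs its own cast, like (JSONArray) in the recipe
        List<String> reviews = (List<String>) jsonObject.get("messages");
        Iterator<String> iterator = reviews.iterator();
        while (iterator.hasNext()) {
            System.out.println(iterator.next());
        }
    }
}
```

The casts are unchecked, which is why a wrong assumption about a field's type surfaces only at runtime as a ClassCastException; the same applies to the JSON.simple version.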
A large amount of data, nowadays, can be found on the Web. This data can be structured, semi-structured, or even unstructured, so very different techniques are needed to extract it. One of the easiest and handiest ways is to use an external Java library named JSoup. This recipe uses a number of methods offered by JSoup to extract web data.
In order to perform this recipe, we will require the following:
Go to https://jsoup.org/download, and download the jsoup-1.9.2.jar file. Add the JAR file to your Eclipse project as an external library. If you are a Maven fan, please follow the instructions on the download page to include the JAR file in your Eclipse project.
Create a method named extractDataWithJsoup(String url). The parameter is the URL of the webpage from which we will extract data:

public void extractDataWithJsoup(String href){
Use the connect() method, passing the URL we want to connect to (and extract data from). Then, we chain a few more methods onto it. First, we chain the timeout() method, which takes milliseconds as its parameter. The next methods define the user-agent name for this connection and whether attempts will be made to ignore connection errors. The last method to chain is the get() method, which eventually returns a Document object. Therefore, we hold the returned object in doc of the Document class:

doc = Jsoup.connect(href).timeout(10*1000).userAgent("Mozilla")
        .ignoreHttpErrors(true).get();
As this code throws IOException, we wrap it in a try...catch block as follows:

Document doc = null;
try {
    doc = Jsoup.connect(href).timeout(10*1000).userAgent("Mozilla")
            .ignoreHttpErrors(true).get();
} catch (IOException e) {
    //Your exception handling here
}
A large number of methods are available on a Document object. If you want to extract the title of the page, you can use the title() method as follows:

if (doc != null) {
    String title = doc.title();
To extract only the textual part of the web page, we can chain the body() method with the text() method of the Document object, as follows:

String text = doc.body().text();
If you want to extract all the hyperlinks on the page, use the select() method of the Document object with the a[href] selector. This gives you all the links at once:

Elements links = doc.select("a[href]");
Perhaps you want to process the links in a web page individually? That is easy, too: iterate over all the links to get each individual link:
for (Element link : links) {
    String linkHref = link.attr("href");
    String linkText = link.text();
    String linkOuterHtml = link.outerHtml();
    String linkInnerHtml = link.html();
    System.out.println(linkHref + "\t" + linkText + "\t"
            + linkOuterHtml + "\t" + linkInnerHtml);
}
Finally, close the if-condition with a brace. Close the method with a brace:
} }
The complete method, its class, and the driver method are as follows:
import java.io.IOException;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

public class JsoupTesting {
    public static void main(String[] args) {
        JsoupTesting test = new JsoupTesting();
        test.extractDataWithJsoup("Website address preceded by http://");
    }

    public void extractDataWithJsoup(String href) {
        Document doc = null;
        try {
            doc = Jsoup.connect(href).timeout(10 * 1000).userAgent("Mozilla")
                    .ignoreHttpErrors(true).get();
        } catch (IOException e) {
            //Your exception handling here
        }
        if (doc != null) {
            String title = doc.title();
            String text = doc.body().text();
            Elements links = doc.select("a[href]");
            for (Element link : links) {
                String linkHref = link.attr("href");
                String linkText = link.text();
                String linkOuterHtml = link.outerHtml();
                String linkInnerHtml = link.html();
                System.out.println(linkHref + "\t" + linkText + "\t"
                        + linkOuterHtml + "\t" + linkInnerHtml);
            }
        }
    }
}
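For a rough intuition of what select("a[href]") returns, here is a crude stdlib-only sketch that pulls href values out of an HTML string with a regular expression. It handles only double-quoted attributes and is far too fragile for production, which is exactly why a real parser such as JSoup is worth the dependency:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CrudeLinkExtractor {

    // Naive pattern: an <a ...> tag carrying a double-quoted href attribute
    private static final Pattern HREF =
            Pattern.compile("<a\\s[^>]*href=\"([^\"]*)\"", Pattern.CASE_INSENSITIVE);

    public static List<String> extractHrefs(String html) {
        List<String> hrefs = new ArrayList<>();
        Matcher m = HREF.matcher(html);
        while (m.find()) {
            hrefs.add(m.group(1)); // the captured href value
        }
        return hrefs;
    }

    public static void main(String[] args) {
        String html = "<p><a href=\"http://example.com\">one</a>"
                + " <a class=\"x\" href=\"/two.html\">two</a></p>";
        System.out.println(extractHrefs(html)); // [http://example.com, /two.html]
    }
}
```

Unlike JSoup, this sketch cannot cope with single quotes, unquoted attributes, comments, or malformed markup, and it cannot give you the link text or surrounding HTML.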
Selenium is a Java-based tool that helps automate software testing and quality assurance. Interestingly enough, Selenium can also be used to retrieve and use web data automatically. This recipe shows you how.
In order to perform this recipe, we will require the following:
Download selenium-server-standalone-2.53.1.jar and selenium-java-2.53.1.zip from http://selenium-release.storage.googleapis.com/index.html?path=2.53/. From the latter, extract the selenium-java-2.53.1.jar file. Include these two JAR files in your Eclipse project as external Java libraries. Download and install Firefox 47.0.1 from https://ftp.mozilla.org/pub/firefox/releases/47.0.1/ by selecting the version appropriate for your operating system.
Create a method named extractDataWithSelenium(String) that takes a String as its parameter: the URL from which we are going to extract data. Many different types of data can be extracted from a URL, such as the title, the headers, and the values in a selection drop-down box. This recipe concentrates only on extracting the text of the webpage:

public String extractDataWithSelenium(String url){
Next, create a Firefox web driver using the following code:
WebDriver driver = new FirefoxDriver();
Use the get() method of the WebDriver object, passing the URL that was supplied as the parameter:

driver.get(url);
The text of the webpage can be found using XPath; here, the element we want has an id attribute with the value content. Find this particular element with the findElement() method. This method returns a WebElement object. Create a WebElement object named webElement to hold the returned value:

WebElement webElement = driver.findElement(By.xpath("//*[@id='content']"));
The WebElement object has a method named getText(). Call this method to retrieve the text of the web page, and put the text into a String variable as follows:

String text = webElement.getText();
Finally, return the String variable and close the method:

    return text;
}
The complete code segment with the driver main() method for the recipe looks like the following:
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;

public class TestSelenium {
    public String extractDataWithSelenium(String url) {
        WebDriver driver = new FirefoxDriver();
        driver.get(url);
        WebElement webElement = driver.findElement(By.xpath("//*[@id='content']"));
        String text = webElement.getText();
        System.out.println(text);
        return text;
    }

    public static void main(String[] args) {
        TestSelenium test = new TestSelenium();
        String webData = test.extractDataWithSelenium("http://cogenglab.csd.uwo.ca/rushdi.htm");
        //process webData
    }
}
Note
Selenium and Firefox have compatibility issues: some Selenium versions do not work with some Firefox versions. The recipe provided here works with the versions mentioned above, but there is no guarantee that it will work with other Selenium or Firefox versions.
Because of these version conflicts, once you have the code running with a particular combination of the two, turn off the automatic update download and installation option in Firefox.
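The //*[@id='content'] expression used in this recipe is ordinary XPath, and you can experiment with such expressions without a browser: the JDK ships an XPath engine that evaluates them against any well-formed XML string. The markup below is a made-up stand-in for a real page:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;

import org.w3c.dom.Document;

public class XPathDemo {

    // Parses the XML string and returns the text content of the first
    // element matched by //*[@id='content'], as in the Selenium recipe.
    public static String textOfContent(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        XPath xpath = XPathFactory.newInstance().newXPath();
        return xpath.evaluate("//*[@id='content']", doc);
    }

    public static void main(String[] args) throws Exception {
        String page = "<html><body><div id=\"content\">Hello, XPath!</div></body></html>";
        System.out.println(textOfContent(page)); // Hello, XPath!
    }
}
```

Keep in mind that real web pages are HTML, not necessarily well-formed XML, which is why Selenium (or JSoup) does the parsing in the recipes; this sketch is only for testing XPath expressions in isolation.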
Data can also be stored in database tables. This recipe demonstrates how to read data from a table in MySQL.
In order to perform this recipe, we will require the following:
Download and install MySQL community server from http://dev.mysql.com/downloads/mysql/. The version used in this recipe is 5.7.15.
Create a database named data_science. In this database, create a table named books with the fields id, book_name, author_name, and date_created, and insert a few rows of sample data. The choice of the field types does not matter much for this recipe, but the field names need to match exactly the ones used in the code.
Download the platform-independent MySQL JAR file from http://dev.mysql.com/downloads/connector/j/, and add it as an external library to your Java project. The version used in this recipe is 5.1.39.
Create a method, public void readTable(String user, String password, String server), that takes the user name, password, and server name of your MySQL database as parameters:

public void readTable(String user, String password, String server){
Create a MySQL data source, and using the data source, set the user name, password, and server name:
MysqlDataSource dataSource = new MysqlDataSource();
dataSource.setUser(user);
dataSource.setPassword(password);
dataSource.setServerName(server);
In a try block, create a connection to the database. Using the connection, create a statement that will be used to execute a SELECT query to get information from the table. The results of the query are stored in a result set:

try {
    Connection conn = dataSource.getConnection();
    Statement stmt = conn.createStatement();
    ResultSet rs = stmt.executeQuery("SELECT * FROM data_science.books");
Now, iterate over the result set, and retrieve each column's data by its column name. Note that you need to know each field's type to pick the right getter method. For instance, as we know that the id field is an integer, we use the getInt() method:

while (rs.next()) {
    int id = rs.getInt("id");
    String book = rs.getString("book_name");
    String author = rs.getString("author_name");
    Date dateCreated = rs.getDate("date_created");
    System.out.format("%s, %s, %s, %s\n", id, book, author, dateCreated);
}
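One detail worth calling out in the format string: a newline must be written as %n (the platform line separator) or \n; a bare letter n prints literally. A quick stdlib check:

```java
public class FormatNewlineDemo {
    public static void main(String[] args) {
        // %n expands to the platform line separator
        String good = String.format("%s, %s%n", 1, "book");
        // a bare "n" is just the character n, no newline at all
        String bad = String.format("%s, %sn", 1, "book");
        System.out.println(good.endsWith(System.lineSeparator())); // true
        System.out.println(bad.endsWith("bookn"));                 // true
    }
}
```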
Close the result set, the statement, and the connection after the iteration:

rs.close();
stmt.close();
conn.close();
Catch the exceptions that can occur while reading data from the table, and close the method:
} catch (Exception e) {
    //Your exception handling mechanism goes here.
}
}
The complete method, the class, and the driver method to execute the method are as follows:
import java.sql.*;
import com.mysql.jdbc.jdbc2.optional.MysqlDataSource;

public class TestDB {
    public static void main(String[] args) {
        TestDB test = new TestDB();
        test.readTable("your user name", "your password", "your MySQL server name");
    }

    public void readTable(String user, String password, String server) {
        MysqlDataSource dataSource = new MysqlDataSource();
        dataSource.setUser(user);
        dataSource.setPassword(password);
        dataSource.setServerName(server);
        try {
            Connection conn = dataSource.getConnection();
            Statement stmt = conn.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT * FROM data_science.books");
            while (rs.next()) {
                int id = rs.getInt("id");
                String book = rs.getString("book_name");
                String author = rs.getString("author_name");
                Date dateCreated = rs.getDate("date_created");
                System.out.format("%s, %s, %s, %s\n", id, book, author, dateCreated);
            }
            rs.close();
            stmt.close();
            conn.close();
        } catch (Exception e) {
            //Your exception handling mechanism goes here.
        }
    }
}
This code displays the data in the table that you created.
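As a side note, the explicit close() calls in this recipe are skipped if an exception is thrown mid-iteration. Since Java 7, a try-with-resources block closes resources automatically, in reverse order of creation. Here is a stdlib-only sketch of that pattern, with stand-in resources rather than real JDBC objects:

```java
import java.util.ArrayList;
import java.util.List;

public class TryWithResourcesDemo {
    static List<String> closed = new ArrayList<>();

    // Stand-in for Connection, Statement, and ResultSet: anything
    // implementing AutoCloseable works in a try-with-resources header.
    static class Resource implements AutoCloseable {
        final String name;
        Resource(String name) { this.name = name; }
        @Override public void close() { closed.add(name); }
    }

    public static void main(String[] args) {
        try (Resource conn = new Resource("conn");
             Resource stmt = new Resource("stmt");
             Resource rs = new Resource("rs")) {
            // read rows here; all three close even if this block throws
        }
        System.out.println(closed); // closed in reverse order: [rs, stmt, conn]
    }
}
```

Connection, Statement, and ResultSet all implement AutoCloseable, so the same shape applies directly to the readTable() method if you prefer it over explicit close() calls.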