
You're reading from Hands-On Data Science with the Command Line

Product type: Book
Published in: Jan 2019
Reading level: Intermediate
Publisher: Packt
ISBN-13: 9781789132984
Edition: 1st
Authors (3):
Jason Morris

Jason Morris is a systems and research engineer with over 19 years of experience in system architecture, research engineering, and large data analysis. His primary focus is machine learning with TensorFlow, CUDA, and Apache Spark. Jason is also a speaker and a consultant for designing large-scale architectures, implementing best security practices on the cloud, creating near real-time image detection analytics with deep learning, and developing serverless architectures to aid in ETL. His most recent roles include solution architect, big data engineer, big data specialist, and instructor at Amazon Web Services. He is currently the Chief Technology Officer of Next Rev Technologies, and his favorite command line program is netcat.

Chris McCubbin

Chris McCubbin is a data scientist and software developer with 20 years of experience in developing complex systems and analytics. He co-founded the successful big data security startup Sqrrl, since acquired by Amazon. He has also developed smart swarming systems for drones, social network analysis systems in MapReduce, and big data security analytics platforms using the Apache projects Accumulo and Spark. He has been using the Unix command line since college, starting on IRIX platforms, and his favorite command line program is find.

Raymond Page

Raymond Page is a computer engineer specializing in site reliability. His experience with embedded development engendered a passion for removing the pervasive bloat from web technologies and cloud computing. His favorite command is cat.


SQL, Math, and Wrapping it up

Databases are attractive solutions for storing and accessing data. They supply the developer with an API that allows the structured organization of data, the ability to search that data in flexible ways, and the ability to store new data. When a database's capabilities are a requirement, there's often little room left for negotiation; the question is which database and not whether we should use one.

Despite this fact, the Unix command line provides a suite of tools that lets a developer view streams or files in many of the same ways as they would view a database. Given one or more files with data in them, we can use these tools to query that data without ever having to maintain a database or any of the things that go along with one, such as fixed schemas. Often, we can use this method for processing data instead of standing up a database server...

cut and viewing data as columnar

The first thing you will likely need to do is partition the data in your files into rows and columns. We saw some transformations in the previous chapters that let us manipulate data one row at a time. For this chapter, we'll assume the rows of your data correspond to the lines of your files; if that isn't the case, transforming the data so that they do may be the first step in your pipeline.

Given that we have some rows of data in our file or stream, we would like to view those rows in a columnar fashion, as in a traditional database. We can do this with the help of the cut command. cut allows us to chop each line of the file into columns by a delimiter and to select which of those columns are passed through to the output.

If your data is a comma-separated or tab-separated file, cut is quite simple:

zcat amazon_reviews_us_Digital_Ebook_Purchase_v1_01...
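
For instance, on a hypothetical tab-separated file named reviews.tsv, we might keep only the second and eighth columns like this (a quick sketch; adjust the delimiter and field numbers to your own data):

cut -d$'\t' -f2,8 reviews.tsv | head

For a comma-separated file, only the delimiter changes:

cut -d',' -f1,3 reviews.csv | head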

Simulating selects

In the previous sections, we saw how to SELECT data, inner JOIN data, and even do GROUP BY and ORDER BY operations on flat files or streams of data. Rounding out the commonly used operations, we can also create sub-selected tables of data by simply wrapping a set of calls into a stream and processing it further. This is what we've been doing with the piping model, but to illustrate the point, say we wanted to sub-select, out of the grouped-by reviews, only those reviewers who had between 100 and 200 reviews. We can take the command from the preceding example and awk it once more:

zcat amazon_reviews_us_Digital_Ebook_Purchase_v1_01.tsv.gz | cut -d$'\t' -f2,8 | awk '{sum[$1]+=$2;count[$1]+=1} END {for (i in sum) {print i,sum[i],count[i],sum[i]/count[i]}}' | sort -k3 -r -n | awk '$3 >= 100 && $3 <=200' | head 
...
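
For readability, here is the same pipeline sketched across multiple lines, with the intent of each stage noted (assuming field 2 is the reviewer ID and field 8 the star rating):

# cut keeps the reviewer ID and star rating; the first awk groups by reviewer
# and prints total, count, and mean rating; sort orders by review count;
# the second awk sub-selects reviewers with 100-200 reviews.
zcat amazon_reviews_us_Digital_Ebook_Purchase_v1_01.tsv.gz |
  cut -d$'\t' -f2,8 |
  awk '{sum[$1]+=$2; count[$1]+=1}
       END {for (i in sum) print i, sum[i], count[i], sum[i]/count[i]}' |
  sort -k3 -r -n |
  awk '$3 >= 100 && $3 <= 200' |
  head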

Keys to the kingdom

Now that we can explore data with the command line and have mastered transforming text, we'll provide you with the keys to the kingdom. SQLite is a public domain library that implements a SQL engine and provides the sqlite3 command shell for interacting with database files. Unlike Oracle, MySQL, and other database engines that provide a network endpoint, SQLite is offline and locally driven: library calls interact with a single file that is the entire database. This makes backups easy; one can be created with cp database.sq3 backups/`date +%F`-database.sq3. You can even keep the database file under version control, though a binary database file is unlikely to delta-compress well.

Using SQLite

Easy import of CSV files ...
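
The details are abridged here, but as a rough sketch, importing a hypothetical reviews.csv into a fresh table from the sqlite3 shell looks something like this (the file and table names are placeholders; .mode and .import are sqlite3 dot-commands):

# Create (or open) a database file, import the CSV into a new table,
# and run a quick sanity check.
sqlite3 reviews.sq3 <<'EOF'
.mode csv
.import reviews.csv reviews
SELECT COUNT(*) FROM reviews;
EOF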

Math in bash itself

Bash itself is able to do simple integer arithmetic. There are at least three different ways to accomplish this in bash.

Using let

You can use the let command to do simple bash arithmetic:

$ let x=1
$ echo $x
1
$ let x=$x+1
$ echo $x
2

Basic arithmetic

You can use expr to do addition, subtraction, multiplication (be sure to escape the * operator as \*), and integer division:

expr 1 + 2
3
expr 3 \* 10
30

The numbers and operators must be separated by spaces.
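
For example, integer division truncates the result, and if the spaces are omitted, expr sees a single string rather than an expression:

expr 10 / 4
2
expr 1+2
1+2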

Double-parentheses

...
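
As a quick sketch of what this section covers, arithmetic expansion with double parentheses needs no escaping of * and can update shell variables in place:

$ echo $((3 * 10))
30
$ x=5
$ ((x = x + 1))
$ echo $x
6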

Python (pandas, numpy, scikit-learn)

Counting things often gets you to where you need to be, but sometimes more complex tools are required to do the job. Fortunately, we can write our own tools in the UNIX paradigm and use them in our pipelines alongside our other command-line tools if we so desire.

One such tool is python, along with popular data science libraries such as pandas, numpy, and scikit-learn. This isn't a text on all the great things those libraries can do for you; if you'd like to learn, good places to start are the official python tutorial (https://docs.python.org/3/tutorial/) and the basics of pandas data structures in the pandas documentation (https://pandas.pydata.org/pandas-docs/stable/basics.html). Make sure you have Python, pip, and pandas installed before you continue (see Chapter 1, Data Science at the Command Line and Setting It Up)...
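
As a minimal sketch of that idea (the column positions and the pandas calls here are illustrative assumptions, not the chapter's exact code), we can drop a short python script into the middle of a pipe to compute the mean rating per reviewer:

# Skip the header row, keep reviewer ID and star rating, and let pandas
# group and average them; requires python3 and pandas to be installed.
zcat amazon_reviews_us_Digital_Ebook_Purchase_v1_01.tsv.gz |
  tail -n +2 |
  cut -d$'\t' -f2,8 |
  python3 -c '
import sys
import pandas as pd
df = pd.read_csv(sys.stdin, sep="\t", header=None, names=["reviewer", "rating"])
print(df.groupby("reviewer")["rating"].mean().head())
'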

Analyzing weather data in bash

The National Weather Service has an API for weather data: https://forecast-v3.weather.gov/documentation. The API delivers forecast data over a lightweight HTTP interface: pass the correct URL and parameters to the web endpoint, and the service returns JSON-formatted weather data. Let's take a look at an example of some data exploration we can do with this rich dataset.

The NWS provides both current weather data and forecasts. Let's say I'd like to see just how accurate NWS forecasts are over some amount of time, say a week. I'd like to save tomorrow's forecast and then, later on, compare those forecasts to what the temperature really was. For this example, let's look at the forecast highs and the actual high temperatures for a single latitude-longitude point.

Our...
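
The full walkthrough is abridged above, but a rough sketch of the first step, fetching a point forecast and saving it under today's date, might look like the following (the api.weather.gov endpoints and JSON field names are assumptions about the current public service, the coordinates are an arbitrary example point, and jq must be installed):

# Resolve the forecast URL for a lat-lon point, then save each forecast
# period's name and temperature into a dated file for later comparison.
LAT=38.8894; LON=-77.0352
FORECAST_URL=$(curl -s "https://api.weather.gov/points/$LAT,$LON" | jq -r '.properties.forecast')
curl -s "$FORECAST_URL" |
  jq -r '.properties.periods[] | "\(.name)\t\(.temperature) \(.temperatureUnit)"' |
  tee "forecast-$(date +%F).txt"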

Summary

In this chapter, we used cut, grep, awk, and sort to deeply inspect our data, as one would in a more traditional database. We then saw how sqlite can provide a lightweight alternative to other databases. Using these tools together, we were able to mine useful knowledge from our raw files.

We also saw how the command line offers several options for doing arithmetic and other mathematical operations. Simple arithmetic and grouped tallies can be performed using bash itself or awk, while more complex mathematics can be handled by tools such as bc or python, called like any other tool in a command-line workflow.

Finally, we used many of the tools we discussed to produce a useful and interesting result from publicly available data.

We hope that this book broadens your understanding of just how powerful the command line actually is, especially for data science. However...
