
Hands-On Data Science with the Command Line: Automate everyday data science tasks using command-line tools

By Jason Morris, Chris McCubbin, and Raymond Page
Book Jan 2019 124 pages 1st Edition


Product Details


Publication date : Jan 31, 2019
Length : 124 pages
Edition : 1st Edition
Language : English
ISBN-13 : 9781789132984


Data Science at the Command Line and Setting It Up

"In the beginning... was the command line" Years ago, we didn't have fancy frameworks that handled our distributed computing for us, or applications that could read files intelligently and give us accurate results. If we did, it was very expensive or only worked for a small problem set, very few people had access to this technology, and it was mostly proprietary.

If you're a newcomer to the world of data science, you might have used the command line for only a small number of things. Maybe you moved a file from one place to another using mv, or read a file using cat. Or you might never have used the command line at all, or at least not for data science. In this book, we hope to show you a number of tools, and ways to perform everyday tasks locally, without using today's buzzword framework.

We created this book for folks who have little to no experience with the command line, but who do a lot of data extraction, modelling, parsing, and analysis. That doesn't mean that if you do have a lot of command-line experience (as many DevOps and systems folks do), you shouldn't read this book. In fact, you might pick up a couple of commands and techniques that you haven't used before.

In this chapter, we will cover the following topics:

  • The history of the command line
  • Language-focused shells
  • Why use the command line?

We will also walk through the setup and configuration of the command line with the following operating systems:

  • Windows 10
  • Mac OS X
  • Ubuntu Linux

If you are running a different operating system, we suggest obtaining an instance from a cloud provider or using the Docker container that's provided in this book.

History of the command line

Since the very first electronic machines, people have strived to communicate with them the same way that we humans talk to each other. But since natural-language processing was beyond the technological grasp of early computer systems, engineers relatively quickly replaced the punch cards, dials, and knobs of early computing machines with teletypes: typewriter-like machines that enabled keyed input and textual output to a display. Teletypes were in turn replaced fairly quickly by video monitors, enabling a world of graphical displays. Though a novelty of their time, teletypes served a function that was missing in graphical environments, and so terminal emulators were born to serve as the modern interface to the command line. The programs behind the terminals started out as an ingrained part of the computer itself: resident monitor programs that were able to start a job, detect when it was done, and clean up.

As computers grew in complexity, so did the programs controlling them. Resident monitors gave way to operating systems that were able to share time between multiple jobs. In the early 1960s, Louis Pouzin had the brilliant idea to use the commands being fed to the computer as a kind of program, a shell around the operating system.

"After having written dozens of commands for CTSS, I reached the stage where I felt that commands should be usable as building blocks for writing more commands, just like subroutine libraries. Hence, I wrote RUNCOM, a sort of shell that drives the execution of command scripts, with argument substitution. The tool became instantly popular, as it became possible to go home in the evening and leaving long runcoms to execute overnight."

Scripting in this way, and the reuse of tooling, would become an ingrained trope in the exciting new world of programmable computing. Pouzin's concepts for a programmable shell made their way into the design and philosophy of Multics in the 1960s and its Bell Labs successor, Unix.

In the Bell System Technical Journal from 1978, Doug McIlroy wrote the following regarding the Unix system:

"A number of maxims have gained currency among the builders and users of the UNIX system to explain and promote its characteristic style: Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new features."
  • Expect the output of every program to become the input to another, as yet unknown, program. Don't clutter output with extraneous information. Avoid stringently columnar or binary input formats. Don't insist on interactive input.
  • Design and build software, even operating systems, to be tried early, ideally within weeks. Don't hesitate to throw away the clumsy parts and rebuild them.
  • Use tools in preference to unskilled help to lighten a programming task, even if you have to detour to build the tools and expect to throw some of them out after you've finished using them.

This is the core of the Unix philosophy and the key tenets that make the command line not just a way to launch programs or list files, but a powerful group of community-built tools that can work together to process data in a clean, simple manner. In fact, McIlroy follows up with this great example of how this had led to success with data processing, even back in 1978:

"Unexpected uses of files abound: programs may be compiled to be run and also typeset to be published in a book from the same text without human intervention; text intended for publication serves as grist for statistical studies of English to help in data compression or cryptography; mailing lists turn into maps. The prevalence of free-format text, even in "data" files, makes the text-processing utilities useful for many strictly data processing functions such as shuffling fields, counting, or collating."

Having access to simple yet powerful components, programmers needed an easy way to construct, reuse, and execute more complicated commands and scripts to do the processing specific to their needs. Enter the first fully-featured command-line shell: the Bourne shell. Developed by Stephen Bourne (also at Bell Labs) in the late 1970s for Version 7 Unix, the Bourne shell was designed from the start with programmers like us in mind: it had all the scripting tools needed to put the community-developed single-purpose tools to good use. It was the right tool, in the right place, at the right time; almost all Unix systems today are descended from Version 7, and nearly all still include the original Bourne shell as an option. In this book, we will use a descendant of the venerable Bourne shell known as Bash, a rewrite released in 1989 for the GNU project that incorporated the best features of the Bourne shell itself along with several of its earlier spinoffs.

We don't want to BaSH other shells, but...

In this book, we decided to focus on the Bourne-again shell (bash) for multiple reasons. First, it's the most popular shell and you can find it everywhere. In fact, for the majority of Linux distributions, bash is the default shell. It's a great first shell to learn and very easy to work with. There are a number of examples and resources available to help you with bash if you ever get stuck. It's also safe to say that, since it's so popular, you can find it on almost any system available today. From a bare-metal installation in a data center to an instance running in the cloud, bash is there, installed, and waiting for input.

There are a number of other shells you can choose from, such as the Z shell (zsh). The Z shell is fairly new (and by new I mean released in 1990, which is new in shell land) and provides a number of powerful features. Other notable shells are tcsh, ksh, and fish. The TENEX C Shell (tcsh), the Korn Shell (ksh), and the Friendly Interactive Shell (fish) are still widely used today. FreeBSD has made tcsh its default shell for the root user, and ksh is still used on many Solaris systems. fish is also a great starter shell, with a lot of features to help users navigate the shell without feeling lost.

While these shells are all powerful and stable, we will be focusing on bash, as we want consistency across multiple platforms and to help you learn a very active and popular shell that's been around for 30 years.

Language-focused shells

As a data scientist, I'm sure you do a lot of work with Python and Scala, or have at least heard of those two languages. Two of our favorite shell replacements are Xonsh and Ammonite. Xonsh (https://xon.sh/) is a Python-powered shell that uses Python 3.4, and Ammonite (http://ammonite.io/) is a Scala-powered shell that uses Scala 2.11.7 (both versions current at the time of writing). If you find yourself using a lot of Python or Scala in your day-to-day work, we recommend checking out these shell replacements as well, after you've mastered the command line using bash.

So, why the command line?

As the field of data science is still fairly new (it used to be called operations research), the tools and frameworks are also fairly new. The command line, on the other hand, is almost 50 years old and still one of the most powerful tools in use today. If you're familiar with interpreters, the command line will come easily to you. Think of it as a place to experiment and see your results in real time. Every command you enter is executed interactively, and when you run a bash script, its commands execute sequentially (unless you decide otherwise; more on that in later chapters). As we know, experimenting and exploring is most of what data science tries to accomplish (and it's the most fun!).
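To illustrate that interactive-versus-script distinction, here is a minimal sketch; the script name explore.sh and the file data.csv are hypothetical. Each line can be typed at the prompt one at a time, or saved to a file and run with bash explore.sh, in which case the commands execute in order from top to bottom:

#!/bin/bash
# explore.sh - a tiny, hypothetical exploration run as a sequential script.
head -n 5 data.csv                            # peek at the first few records
wc -l data.csv                                # count how many lines we have
cut -d',' -f1 data.csv | sort -u | wc -l      # count distinct values in column 1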

I was having a conversation with a newly-graduated data science student about parsing text, and asked, "How would you take a small file and provide a count of how many times each word appears?" By now, everyone is familiar with the infamous Hadoop word-count example; it's considered the "Hello, World" of data science.

The answer I received was a little shocking, but expected. The student instantly replied that they'd use Hadoop to read the file, tokenize the words to form key/value pairs, reduce all the keys and values that are grouped together, and add up the occurrences. The student isn't wrong; in fact, that's a perfectly acceptable answer. Especially if the file is too large for a single system (big data), you'd already have the code in place to scale.

With that being said, what if I told you there's a quicker way to obtain the results, one that doesn't require programming in Java, setting up a cluster, or running Hadoop locally? In fact, it only takes one line to complete the task. Check out the following code:

# Using cat and a pipeline:
cat file.txt | tr '[:space:]' '[\n*]' | grep -v "^$" | sort | uniq -c | sort -bnr
# The same pipeline, using input redirection instead of cat:
(tr '[:space:]' '[\n*]' | grep -v "^$" | sort | uniq -c | sort -bnr) < file.txt

This may seem like a lot, especially if you've never used the command line before, so let's break it down. The cat command reads files sequentially and writes them to standard output. |, also known as the pipe operator, chains a sequence of commands together by their standard streams, so that the output of each process (stdout) feeds directly into the input (stdin) of the next one. tr (translate) reads the input from cat (via |) and writes the result to standard output, replacing whitespace characters with newlines. The grep command is very powerful and one of the most heavily used tools for data parsing; it searches plain-text data for lines that match a regular expression. In this example, grep trims out the empty lines. sort is used for, well, sorting! You'll notice that a lot of commands are named for what they actually do. sort prints the lines of its input, or of the concatenation of the files listed in its argument list, in sorted order. uniq is a command that, when fed a text file, outputs the file with adjacent identical lines collapsed into one; it usually works well with sort. In this example, uniq -c is called to count occurrences. And finally, sort -bnr sorts in reverse numeric order, ignoring leading whitespace.
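Echoing Pouzin's idea of commands as building blocks, the one-liner can also be wrapped in a small reusable shell function. This is just a sketch; the name wordcount is our own, not something defined by the book or by bash:

# Reusable word-count building block: pass it the file to analyze.
wordcount() {
  tr '[:space:]' '[\n*]' < "$1" | grep -v "^$" | sort | uniq -c | sort -bnr
}

# Example usage:
# wordcount file.txt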

Don't worry if the example looks foreign to you. The command line also comes with manual pages for each command. All you have to do is man the command to view its page. You can even man man to get an idea of what the man command does! Give it a whirl with man tr or man sort. Oh, you don't have the command line set up? It's easier than you think, and we can get you up and running in minutes, so let's get started.

Getting set up with Windows 10

We want readers to keep in mind that PowerShell will not work with the examples listed in this book. However, Microsoft has released its Windows Subsystem for Linux (WSL) for Windows 10 version 1607 and later. It's also easy to install: open the Microsoft Store, search for Ubuntu (a Linux distribution), and install it.

In Windows 10 version 1607 and later, you have the ability to run Linux natively with your choice of distribution. In this example, we will use Ubuntu on top of Windows 10 to set up our workspace. Make sure you have the latest version of Windows installed in order to take advantage of WSL; at a minimum, you need the Windows 10 Fall Creators Update to proceed. Also keep in mind that WSL was in beta at the time of writing. If you don't feel comfortable installing beta software, we recommend finding an alternative, such as an EC2 instance on AWS, or skipping ahead to the Docker section of this book.

  1. Go to the Start menu and search for PowerShell.
  2. Right-click Windows PowerShell and select Run as Administrator.
  3. Type the following command to enable WSL:

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux

  4. When asked to confirm your choice, press Y or Enter.
  5. Press Y to reboot.

Once your system has rebooted, do the following:

  1. Go to the Start menu and search for Store.
  2. Search for Ubuntu.
  3. Click Install.
  4. Click Launch.
  5. When asked to create a username and password, go ahead and create one. Make sure you remember this information, as you'll need it throughout this book.
  6. Success! You have now completed the setup and installation of Linux on Windows 10.

Install the following tools as we will be using them throughout this book:

sudo apt update
sudo apt install jq python-pip gnuplot sqlite3 libsqlite3-dev curl netcat bc
pip install pandas
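To sanity-check the installation, you can ask a few of the tools for their versions. Exact version strings will differ; this is just an illustrative check:

# Each of these should print a version string rather than "command not found".
jq --version
gnuplot --version
sqlite3 --version
curl --version | head -n 1
python -c "import pandas; print(pandas.__version__)"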

Getting set up on OS X

OS X already has a full command-line system installed using bash as the default shell. To access this shell, click the magnifying glass in the upper-right corner and type terminal in the dialog box.

This will open a bash Terminal.

As in other bash shells, this Terminal doesn't have everything installed, so type the following commands to install the requisite installers and command-line tools that we'll be using in this book:

# Install Homebrew (this was the recommended install command at the time of writing):
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
# Install the command-line tools used throughout this book:
brew install jq sqlite gnuplot python netcat bc
# pandas is installed with Python's package manager:
pip3 install pandas

On OS X, these commands install a few installation tools, including pip and Homebrew. They then use these tools to install the commands that we use in this book that aren't natively installed, namely jq, gnuplot, sqlite, and pandas.

One thing to look out for in OS X is that certain standard tools are built a little differently than the ones that come with Debian-based systems like the rest of the systems we talk about in this chapter. In some circumstances, OS X tools work slightly differently or have different options. Where this is the case we have noted it in the text.
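A common example of this divergence is in-place editing with sed: the BSD sed that ships with OS X expects a backup-suffix argument after -i, while the GNU sed on Debian-based systems does not (sample.txt here is a made-up file name):

# GNU sed (Ubuntu, WSL): edit the file in place, no backup copy.
sed -i 's/foo/bar/g' sample.txt

# BSD sed (OS X): -i requires a suffix; an empty string means "no backup file".
sed -i '' 's/foo/bar/g' sample.txt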

Getting set up on Ubuntu Linux

Ubuntu has a full built-in command-line shell and typically uses bash as the default shell. Different window managers have slightly different ways of opening a Terminal window. For example, in Ubuntu 17.10 Artful (the image we used is located at https://www.osboxes.org/ubuntu/), open the Terminal by clicking on Activities in the upper-left corner and typing terminal in the dialog.

This will bring up a command-line prompt.

As in other bash shells, this shell doesn't have everything installed, so type the following command to install the installers and command-line tools that we will use in this book:

sudo apt update
sudo apt install jq python-pip gnuplot sqlite3 libsqlite3-dev curl netcat bc
pip install pandas

On Ubuntu, these commands install a few installation tools, including pip. They then use these tools to install the commands that we use in this book that aren't natively installed, namely jq, gnuplot, sqlite, curl, and pandas.

Getting set up with Docker

What if there were a way to obtain an image with all the commands preinstalled and you were able to run it on most major operating systems without any issues? That's exactly what Docker provides, and you can quickly get up and running in a matter of minutes:

  1. Visit https://www.docker.com/community-edition and install the version of Docker for your operating system
  2. Run the following command to obtain the Docker image:
docker run -it nextrevtech/commandline-book /bin/bash
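If you want files from your machine to be visible inside the container, you can also mount a local directory with -v. This is a hedged variation; the /data mount point is our own choice, not something the image requires:

# Mount the current directory at /data inside the container, then start bash.
docker run -it -v "$PWD":/data nextrevtech/commandline-book /bin/bash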

Summary

The command line has a long history, and it can be quite foreign to newcomers. In this chapter, we covered the environment setup steps so that you can follow along with the examples in this book. Next, essential commands will introduce what you need to succeed, followed by acquiring datasets that we can play with. We will then cover all the shell magic: background processes, writing shell functions, basic shell control-flow constructs, visualizing results, processing strings, simulating database functionality, and simple math constructs, before bringing all of these together in a penultimate chapter of magical fascination.

Everything you need to explore the rest of the book is now installed and configured. As you saw, the command line can run on pretty much anything, which makes it an invaluable tool to have in your toolkit.

In the next chapter, we will use our newly-installed command-line environment to run some essential commands, learn how to customize the shell, and look at how to use the built-in help when we get stuck.


Key benefits

  • Perform string processing, numerical computations, and more using CLI tools
  • Understand the essential components of data science development workflow
  • Automate data pipeline scripts and visualization with the command line

Description

The command line has existed on UNIX-based OSes, in the form of the Bash shell, for over three decades. However, few developers know just how OSEMN (pronounced "awesome", and standing for Obtaining, Scrubbing, Exploring, Modeling, and iNterpreting data) command-line tools can be for carrying out simple-to-advanced data science tasks at speed. This book starts with the requisite concepts and installation steps for carrying out data science tasks using the command line. You will learn to create a data pipeline to solve the problem of working with small- to medium-sized files on a single machine. You will understand the power of the command line and learn how to edit files using a text-based editor. You will not only learn how to automate jobs and scripts, but also how to visualize data using the command line. By the end of this book, you will know how to speed up the process and perform automated tasks using command-line tools.

What you will learn

  • Understand how to set up the command line for data science
  • Use AWK programming language commands to search quickly in large datasets
  • Work with files and APIs using the command line
  • Share and collect data with CLI tools
  • Perform visualization with commands and functions
  • Uncover machine-level programming practices with a modern approach to data science



Table of Contents

8 Chapters
Preface
Data Science at the Command Line and Setting It Up
Essential Commands
Shell Workflows, and Data Acquisition and Massaging
Bash Functions and Data Visualization
Loops, Functions, and String Processing
SQL, Math, and Wrapping it up
Other Books You May Enjoy
