Processing Data

In this chapter, we will cover:

  • Working with CSV and JSON data
  • Storing data using AWS S3
  • Storing data using MySQL
  • Storing data using PostgreSQL
  • Storing data in Elasticsearch
  • How to build robust ETL pipelines with AWS SQS

Introduction

In this chapter, we will introduce the use of data in JSON, CSV, and XML formats. This will include the means of parsing and converting this data to other formats, including storing that data in relational databases, search engines such as Elasticsearch, and cloud storage including AWS S3. We will also discuss the creation of distributed and large-scale scraping tasks through the use of messaging systems including AWS Simple Queue Service (SQS). The goal is to provide both an understanding of the various forms of data you may retrieve and need to parse, and instruction on the various backends where you can store the data you have scraped. Finally, we get a first introduction to a couple of the Amazon Web Services (AWS) offerings. By the end of the book we will be working quite heavily with AWS, and this chapter gives a gentle introduction.

...

Working with CSV and JSON data

Extracting data from HTML pages is done using the techniques from the previous chapter, primarily using XPath through various tools and also with Beautiful Soup. While we will focus primarily on HTML, HTML is a variant of XML (eXtensible Markup Language). XML was once the most popular form of expressing data on the web, but other formats have become popular, and have even exceeded XML in popularity.

Two common formats that you will see are JSON (JavaScript Object Notation) and CSV (Comma Separated Values). CSV is easy to create and a common format for many spreadsheet applications, so many websites provide data in that format, or you will need to convert scraped data to that format for further storage or collaboration. JSON has really become the preferred format, due to its ease of use within programming languages such as JavaScript (and Python), and many databases now support...
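
As a quick taste of both formats, here is a minimal sketch using Python's built-in csv and json modules; the planet records are made-up stand-ins for whatever data you scrape:

import csv
import json

# Hypothetical scraped records standing in for real planet data
planets = [
    {"name": "Mercury", "mass": 0.330, "radius": 2439.7},
    {"name": "Venus", "mass": 4.87, "radius": 6051.8},
]

# Write the records as CSV, with the dictionary keys as the header row
with open("planets.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "mass", "radius"])
    writer.writeheader()
    writer.writerows(planets)

# Write the same records as JSON
with open("planets.json", "w") as f:
    json.dump(planets, f, indent=2)

# Reading back: note that csv returns every field as a string
with open("planets.csv") as f:
    rows = list(csv.DictReader(f))

Note the asymmetry: json.load preserves numbers and nesting, while CSV flattens everything to strings, which is one reason JSON has become the preferred interchange format.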

Storing data using AWS S3

There are many cases where we simply want to save the content we scrape as a local copy, whether for archival purposes, backup, or later bulk analysis. We might also want to save media from those sites for later use. I've built scrapers for advertisement compliance companies, where we would track and download advertisement-based media on websites to ensure proper usage, and also store it for later analysis, compliance checks, and transcoding.

The storage required for these types of systems can be immense, but with the advent of cloud storage services such as AWS S3 (Simple Storage Service), this becomes much easier and more cost-effective than managing a large SAN (Storage Area Network) in your own IT department. Plus, S3 can automatically move data from hot to cold storage, and then to long-term storage such as Glacier, which can save you much more money...
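
To give a flavor of what storing scraped content in S3 looks like, here is a minimal sketch using the boto3 library; the bucket name, key, and file are assumptions for illustration, and AWS credentials are expected to already be configured in your environment or in ~/.aws/credentials:

import boto3

# Create an S3 client using the credentials configured in the environment
s3 = boto3.client("s3")

# Upload a scraped page to the bucket under a key of our choosing
with open("page.html", "rb") as f:
    s3.put_object(Bucket="my-scrape-archive", Key="pages/page.html", Body=f)

# List what is stored under that prefix to verify the upload
response = s3.list_objects_v2(Bucket="my-scrape-archive", Prefix="pages/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])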

Storing data using MySQL

MySQL is a freely available, open source Relational Database Management System (RDBMS). In this example, we will read the planets data from the website and store it in a MySQL database.

Getting ready

You will need to have access to a MySQL database. You can install one locally, run one in a container, or use one in the cloud. I am using a locally installed MySQL server and have the root password set to mypassword. You will also need to install the MySQL Python library. You can do this with pip install mysql-connector-python.

  1. The first thing to do is to connect to the database using the mysql command at the terminal:
# mysql -uroot -pmypassword
mysql: [Warning] Using a password on the command line...
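
Once the database and a planets table exist, inserting scraped rows from Python is straightforward. The following is a minimal sketch using mysql-connector-python; the scraping database name, the planets table schema, and the sample row are assumptions for illustration:

import mysql.connector

# Connect with the same credentials used at the terminal above;
# the "scraping" database name is an assumption
conn = mysql.connector.connect(
    host="localhost", user="root", password="mypassword", database="scraping"
)
cursor = conn.cursor()

# A parameterized insert guards against SQL injection in scraped values
cursor.execute(
    "INSERT INTO planets (name, mass, radius) VALUES (%s, %s, %s)",
    ("Mercury", 0.330, 2439.7),
)
conn.commit()

cursor.close()
conn.close()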

Storing data using PostgreSQL

In this recipe we store our planet data in PostgreSQL. PostgreSQL is an open source relational database management system (RDBMS). It is developed by a worldwide team of volunteers, is not controlled by any corporation or other private entity, and the source code is available free of charge. It has a lot of unique features such as hierarchical data models.

Getting ready

First make sure you have access to a PostgreSQL database instance. Again, you can install one locally, run one in a container, or get an instance in the cloud.

As with MySQL, we need to first create a database. The process is almost identical to that of MySQL but with slightly different commands and parameters.

  1. From the terminal...
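
From Python, the pattern mirrors the MySQL recipe. The following is a minimal sketch using the psycopg2 driver (an assumption; the recipe itself may use a different library), with illustrative credentials and table:

import psycopg2

# Connect to the local server; the database name and credentials
# here are assumptions for illustration
conn = psycopg2.connect(
    host="localhost", dbname="scraping", user="postgres", password="mypassword"
)

# Using the connection as a context manager commits on success
with conn, conn.cursor() as cursor:
    # Parameterized insert, same placeholder style as mysql-connector
    cursor.execute(
        "INSERT INTO planets (name, mass, radius) VALUES (%s, %s, %s)",
        ("Venus", 4.87, 6051.8),
    )

conn.close()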

Storing data in Elasticsearch

Elasticsearch is a search engine based on Lucene. It provides a distributed, multitenant-capable, full-text search engine with an HTTP web interface and schema-free JSON documents. It is a non-relational database (often termed NoSQL), focusing on the storage of documents instead of records. These documents can be in many formats, one of which is useful to us: JSON. This makes using Elasticsearch very simple, as we do not need to convert our data to or from JSON. We will use Elasticsearch much more later in the book.

For now, let's go and store our planets data in Elasticsearch.

Getting ready

We will access a locally installed Elasticsearch server. To do this from Python, we will use the Elasticsearch...
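
Whichever client library that sentence continues with, the basic interaction looks like the following minimal sketch, written against the official elasticsearch Python package; the index name and document are assumptions, and the call style shown matches the pre-8.x client API:

from elasticsearch import Elasticsearch

# Connect to the locally installed server (defaults to localhost:9200)
es = Elasticsearch()

# Index a scraped record; Elasticsearch stores the JSON document as-is
doc = {"name": "Mars", "mass": 0.642, "radius": 3389.5}
es.index(index="planets", id=1, body=doc)

# Retrieve the document back by its id
result = es.get(index="planets", id=1)
print(result["_source"])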

How to build robust ETL pipelines with AWS SQS

Scraping a large quantity of sites and data can be a complicated and slow process. But it is one that can take great advantage of parallel processing, either locally with multiple processor threads, or by distributing scraping requests to remote scrapers using a message queue system. There may also be a need for multiple steps in a process similar to an Extract, Transform, and Load (ETL) pipeline. These pipelines can also be easily built using a message queuing architecture in conjunction with the scraping.

Using a message queuing architecture gives our pipeline two advantages:

  • Robustness
  • Scalability

The processing becomes robust because, if the processing of an individual message fails, the message can be re-queued for processing again. So if the scraper fails, we can restart it and not lose the request for scraping the page, or the...
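
To make the pattern concrete, here is a minimal sketch of a producer and a consumer sharing an SQS queue via boto3; the queue name, region, and message shape are assumptions for illustration:

import json

import boto3

# Create (or look up) the queue; the name and region are assumptions
sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = sqs.create_queue(QueueName="scrape-requests")["QueueUrl"]

# Producer: enqueue a URL to be scraped
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"url": "http://example.com"}),
)

# Consumer: receive, process, and delete only on success, so a failed
# scrape is automatically re-queued after the visibility timeout expires
response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for message in response.get("Messages", []):
    task = json.loads(message["Body"])
    print("scraping", task["url"])  # the actual scraping work would go here
    sqs.delete_message(
        QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"]
    )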
