You're reading from Web Scraping with Python

Product type: Book
Published in: Oct 2015
Reading level: Intermediate
Publisher: Packt
ISBN-13: 9781782164364
Edition: 1st
Author (1)
Richard Penman

Richard Lawson is from Australia and studied Computer Science at the University of Melbourne. Since graduating, he built a business specializing in web scraping while travelling the world, working remotely from over 50 countries. He is a fluent Esperanto speaker, conversational in Mandarin and Korean, and active in contributing to and translating open source software. He is currently undertaking postgraduate studies at Oxford University and in his spare time enjoys developing autonomous drones.

Chapter 4. Concurrent Downloading

In previous chapters, our crawlers downloaded web pages sequentially, waiting for each download to complete before starting the next one. Sequential downloading is fine for the relatively small example website, but it quickly becomes impractical for larger crawls: at an average of one web page per second, crawling a website of 1 million web pages would take more than 11 days of continuous downloading. This time can be significantly reduced by downloading multiple web pages simultaneously.

This chapter will cover downloading web pages with multiple threads and processes, and then compare the performance to sequential downloading.

One million web pages


To test the performance of concurrent downloading, it would be preferable to have a larger target website. For this reason, we will use the Alexa list in this chapter, which tracks the top 1 million most popular websites according to users who have installed the Alexa Toolbar. Only a small percentage of people use this browser plugin, so the data is not authoritative, but is fine for our purposes.

These top 1 million web pages can be browsed on the Alexa website at http://www.alexa.com/topsites. Additionally, a compressed spreadsheet of this list is available at http://s3.amazonaws.com/alexa-static/top-1m.csv.zip, so scraping Alexa is not necessary.

Parsing the Alexa list

The Alexa list is provided in a spreadsheet with columns for the rank and domain, with one row per site, for example `1,google.com`.

Extracting this data requires a number of steps, as follows:

  1. Download the .zip file.

  2. Extract the CSV file from this .zip file.

  3. Parse the CSV file.

  4. Iterate each row of the CSV file to extract the domain.

Here is an implementation...
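The four steps above can be sketched as follows. This is a minimal version, not the book's actual `AlexaCallback` class: the function name `parse_alexa_csv` is introduced here for illustration, and it assumes the zipped bytes have already been downloaded from the top-1m.csv.zip URL given earlier:

```python
import csv
import io
from zipfile import ZipFile

def parse_alexa_csv(zipped_data):
    """Extract domains from the zipped Alexa CSV, whose rows are rank,domain."""
    urls = []
    with ZipFile(io.BytesIO(zipped_data)) as zf:
        # The archive contains a single CSV file
        csv_filename = zf.namelist()[0]
        with zf.open(csv_filename) as f:
            for rank, domain in csv.reader(io.TextIOWrapper(f)):
                # Build an absolute URL for the crawler from each domain
                urls.append('http://' + domain)
    return urls
```

A full implementation would also limit how many of the 1 million URLs are queued for a given crawl.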

Sequential crawler


Here is the code to use AlexaCallback with the link crawler developed earlier to download sequentially:

scrape_callback = AlexaCallback()
link_crawler(seed_url=scrape_callback.seed_url, 
    cache_callback=MongoCache(),
    scrape_callback=scrape_callback)

This code is available at https://bitbucket.org/wswp/code/src/tip/chapter04/sequential_test.py and can be run from the command line as follows:

$ time python sequential_test.py
...
26m41.141s

This time is as expected for sequential downloading, with an average of ~1.6 seconds per URL.

Threaded crawler


Now we will extend the sequential crawler to download the web pages in parallel. Note that if misused, a threaded crawler could request content too fast and overload a web server or cause your IP address to be blocked. To avoid this, our crawlers will have a delay flag to set the minimum number of seconds between requests to the same domain.

The Alexa list example used in this chapter covers 1 million separate domains, so this problem does not apply here. However, a delay of at least one second between downloads should be considered when crawling many web pages from a single domain in future.
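The threaded approach can be sketched as below. This is a simplified stand-in for the book's full threaded crawler: `threaded_crawler` here takes a plain `download` function and omits the per-domain delay and error handling described above:

```python
import threading

def threaded_crawler(urls, download, max_threads=10):
    """Download the given URLs in parallel using up to max_threads threads."""
    queue = list(urls)       # shared work queue
    results = {}
    lock = threading.Lock()  # protects access to the queue

    def worker():
        while True:
            with lock:
                if not queue:
                    return  # no work left, so exit this thread
                url = queue.pop()
            # Perform the slow network operation outside the lock,
            # so other threads can fetch work concurrently
            results[url] = download(url)

    threads = [threading.Thread(target=worker) for _ in range(max_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()  # wait for all downloads to complete
    return results
```

A real crawler would additionally enforce the minimum delay between requests to the same domain before each download.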

How threads and processes work

Here is a diagram of a process containing multiple threads of execution:

When a Python script or other computer program is run, a process is created containing the code and state. These processes are executed by the CPU(s) of a computer. However, each CPU can only execute a single process at a time and will quickly switch between them to give the impression...

Performance


To understand how increasing the number of threads and processes affects the total download time, here are the results for crawling 1000 web pages:

| Script     | Number of threads | Number of processes | Time       | Comparison with sequential |
|------------|-------------------|---------------------|------------|----------------------------|
| Sequential | 1                 | 1                   | 28m59.966s | 1                          |
| Threaded   | 5                 | 1                   | 7m11.634s  | 4.03                       |
| Threaded   | 10                | 1                   | 3m50.455s  | 7.55                       |
| Threaded   | 20                | 1                   | 2m45.412s  | 10.52                      |
| Processes  | 5                 | 2                   | 4m2.624s   | 7.17                       |
| Processes  | 10                | 2                   | 2m1.445s   | 14.33                      |
| Processes  | 20                | 2                   | 1m47.663s  | 16.16                      |

The last column shows the speedup over the base case of sequential downloading. The gain is not linearly proportional to the number of threads and processes, but appears closer to logarithmic: one process with five threads gives roughly a 4X speedup, but 20 threads gives only about a 10.5X speedup. Each extra thread helps, but is less effective than the one added before it. This is to be expected, considering the process...

Summary


This chapter covered why sequential downloading creates a bottleneck. We then looked at how to download large numbers of web pages efficiently across multiple threads and processes.

In the next chapter, we will cover how to scrape content from web pages that load their content dynamically using JavaScript.
