Scrapy is a popular web scraping framework that comes with many high-level functions to make scraping websites easier. In this chapter, we will get to know Scrapy by using it to scrape the example website, just as we did in Chapter 2, Scraping the Data. Then, we will cover Portia, which is an application based on Scrapy that allows you to scrape a website through a point-and-click interface.
Scrapy can be installed with the pip command, as follows:

pip install Scrapy
Scrapy relies on some external libraries, so if you have trouble installing it, there is additional information available on the official website at http://doc.scrapy.org/en/latest/intro/install.html.
Currently, Scrapy only supports Python 2.7, which is more restrictive than the other packages introduced in this book. Previously, Python 2.6 was also supported, but this was dropped in Scrapy 0.20. Also, due to its dependency on Twisted, support for Python 3 is not yet possible, though the Scrapy team assures me they are working to solve this.
If Scrapy is installed correctly, a scrapy command will now be available in the terminal:

$ scrapy -h
Scrapy 0.24.4 - no active project

Usage:
  scrapy <command> [options] [args]

Available commands:
  bench      Run quick benchmark test
  check      Check spider contracts
  crawl      Run a spider
  ...
Now that Scrapy is installed, we can run the startproject command to generate the default structure for this project. To do this, open the terminal, navigate to the directory where you want to store your Scrapy project, and then run scrapy startproject <project name>. Here, we will use example for the project name:

$ scrapy startproject example
$ cd example
Here are the files generated by the scrapy command:

scrapy.cfg
example/
    __init__.py
    items.py
    pipelines.py
    settings.py
    spiders/
        __init__.py
The important files for this chapter are as follows:
Additionally, Scrapy uses scrapy.cfg for project configuration and pipelines.py to process the scraped fields, but they will not...
Portia is an open-source tool built on top of Scrapy that supports building a spider by clicking on the parts of a website that need to be scraped, which can be more convenient than creating the CSS selectors manually.
Portia is a powerful tool, and it depends on multiple external libraries for its functionality. It is also relatively new, so currently, the installation steps are somewhat involved. In case the installation is simplified in the future, the latest documentation can be found at https://github.com/scrapinghub/portia#running-portia.
The recommended first step is to create a virtual Python environment with virtualenv. Here, we name our environment portia_example, which can be replaced with whatever name you choose:

$ pip install virtualenv
$ virtualenv portia_example --no-site-packages
$ source portia_example/bin/activate
(portia_example)$ cd portia_example
For scraping the annotated fields, Portia uses a library called Scrapely, which is a useful open-source tool developed independently of Portia and is available at https://github.com/scrapy/scrapely. Scrapely uses training data to build a model of what to scrape from a web page, and this model can then be applied to scrape other web pages with the same structure in the future. Here is an example to show how it works:
(portia_example)$ python
>>> from scrapely import Scraper
>>> s = Scraper()
>>> train_url = 'http://example.webscraping.com/view/Afghanistan-1'
>>> s.train(train_url, {'name': 'Afghanistan', 'population': '29,121,286'})
>>> test_url = 'http://example.webscraping.com/view/United-Kingdom-239'
>>> s.scrape(test_url)
[{u'name': [u'United Kingdom'], u'population': [u'62,348,447']}]
First, Scrapely is given the data we want to scrape from the Afghanistan web page to train the model, being the country...
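The template-matching idea behind Scrapely can be illustrated with a simplified, self-contained sketch. This is not Scrapely's actual algorithm, and the markup below is hypothetical; the point is only that the text surrounding each training value can serve as landmarks for extracting values from a structurally identical page:

```python
# Simplified illustration of template-based extraction (not Scrapely's
# real algorithm): learn the text surrounding each known value on a
# training page, then reuse those landmarks on a similar page.

def train(html, values, context=10):
    """Record the text immediately before and after each known value."""
    template = {}
    for field, value in values.items():
        i = html.index(value)
        before = html[max(i - context, 0):i]
        after = html[i + len(value):i + len(value) + context]
        template[field] = (before, after)
    return template

def scrape(html, template):
    """Extract whatever text sits between each field's landmarks."""
    result = {}
    for field, (before, after) in template.items():
        start = html.index(before) + len(before)
        end = html.index(after, start)
        result[field] = html[start:end]
    return result

# Two pages sharing the same structure (hypothetical markup)
train_html = '<tr><td>Name:</td><td>Afghanistan</td></tr>'
test_html = '<tr><td>Name:</td><td>United Kingdom</td></tr>'

template = train(train_html, {'name': 'Afghanistan'})
print(scrape(test_html, template))  # {'name': 'United Kingdom'}
```

This is why the approach only works on pages with the same structure as the training page: the extraction depends entirely on the surrounding markup remaining consistent.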
This chapter introduced Scrapy, a web scraping framework with many high-level features to improve efficiency at scraping websites. Additionally, this chapter covered Portia, which provides a visual interface to generate Scrapy spiders. Finally, we tested Scrapely, the library used by Portia to scrape web pages automatically for a given model.
In the next chapter, we will apply the skills learned so far to some real-world websites.