Scraping the Web with Scrapy and Beautiful Soup

In previous chapters, we learned about web scraping-related technologies and data-finding techniques, and how to use various Python libraries to scrape data from the web.

In this chapter, we will explore and learn practically about two popular Python libraries: Scrapy and Beautiful Soup. Scrapy is a web crawling framework for Python that provides a project-oriented scope for web scraping. Beautiful Soup, on the other hand, deals with document or content parsing; a document is normally parsed to traverse and extract content effectively. Beyond this, both libraries offer a rich set of DOM-related features.

In particular, we will learn about the following topics in this chapter:

  • Web parsing using Python
  • Web scraping using Beautiful Soup
  • Web scraping using Scrapy
  • Deploying a web crawler

Technical requirements

A web browser (Google Chrome or Mozilla Firefox) is required, and we will write the code in Python notebooks using JupyterLab.

Please refer to the Setting things up and Creating a virtual environment sections in Chapter 2 to set up and use the environment created there.

The Python libraries that are required for this chapter are as follows:

  • lxml
  • urllib
  • requests
  • html5lib
  • beautifulsoup4
  • scrapy
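
These libraries can be installed with pip; note that urllib ships with Python's standard library, so it needs no separate installation:

pip install lxml requests html5lib beautifulsoup4 scrapy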

The code files for this chapter are available online on GitHub: https://github.com/PacktPublishing/Hands-On-Web-Scraping-with-Python-Second-Edition/tree/main/Chapter05

Web parsing using Python

In earlier chapters (in both the explanations and code examples), we learned that web scraping is a procedure for extracting data from websites, as per our requirements and choices. A few Python libraries make data collection smooth and error-free from a coding perspective, but identifying content and traversing elements (individual or nested) are still required, at a minimum, to carry out the task.
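
For instance, element traversal with lxml might look like the following sketch; the markup and XPath expressions here are illustrative assumptions, not the chapter's own examples:

from lxml import html

# A small nested fragment, similar in shape to real scraped content
doc = html.fromstring("""
<div class="quote">
  <span class="text">Simplicity is the ultimate sophistication.</span>
  <small class="author">Leonardo da Vinci</small>
</div>""")

# XPath expressions locate individual or nested nodes in the DOM tree
print(doc.xpath('//span[@class="text"]/text()')[0])    # the quote text
print(doc.xpath('//small[@class="author"]/text()')[0])  # the author name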

To collect high-quality data, the content on the web must be complete and error-free. We locate content using CSS selectors or XPath expressions against the DOM structure. If the DOM structure is imperfect or contains bugs, such as incomplete tags, missing closing tags, or misspelled tag names, then the expressions and query paths we deploy will not reach the intended nodes or elements of the DOM. This leads to the extraction of incomplete or unwanted content, which might then require extra tasks...
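
As a quick illustration of such imperfections (a sketch, not the chapter's code), Beautiful Soup's pluggable parsers each repair broken markup in their own way; html5lib behaves most like a browser and is usually the most forgiving:

from bs4 import BeautifulSoup

broken = "<ul><li>first<li>second</ul>"  # the <li> tags are never closed

# Each parser rebuilds the missing closing tags differently
for parser in ("html.parser", "lxml", "html5lib"):
    soup = BeautifulSoup(broken, parser)
    print(parser, [li.get_text() for li in soup.find_all("li")])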

Web scraping using Beautiful Soup

In this section, we will build and execute a web crawler using Beautiful Soup. To set things up, we have chosen to scrape quotes from http://quotes.toscrape.com. Specifically, we will be scraping from the page http://quotes.toscrape.com/tag/inspirational, as seen in Figure 5.3:

Figure 5.3: Category “inspirational”

The example we are dealing with is similar to Example 3 – scraping quotes with author details in Chapter 4. Only the links and page composition have changed, along with a few additional logical steps. The code for the example can be found on GitHub: https://github.com/PacktPublishing/Hands-On-Web-Scraping-with-Python-Second-Edition/blob/main/Chapter05/bs4_scraping.ipynb.

The following code declares the paginated link as url, and columns contains the column headers for the CSV file to be generated:

url = "http://quotes.toscrape.com/tag/inspirational/page/"
columns=['id&apos...
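
Since the snippet above is truncated, here is a hedged sketch of how such a crawler could proceed; the column names and loop logic are illustrative assumptions, not the book's exact code:

import csv
import requests
from bs4 import BeautifulSoup

url = "http://quotes.toscrape.com/tag/inspirational/page/"
columns = ['id', 'quote', 'author', 'tags']  # assumed column headers

rows = []
page = 1
while True:
    soup = BeautifulSoup(requests.get(f"{url}{page}/").text, "lxml")
    quotes = soup.find_all("div", class_="quote")
    if not quotes:  # an empty result marks the end of pagination
        break
    for quote in quotes:
        rows.append([
            len(rows) + 1,
            quote.find("span", class_="text").get_text(strip=True),
            quote.find("small", class_="author").get_text(strip=True),
            ", ".join(a.get_text() for a in quote.select("a.tag")),
        ])
    page += 1

with open("quotes.csv", "w", newline="", encoding="utf-8") as file:
    writer = csv.writer(file)
    writer.writerow(columns)
    writer.writerows(rows)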

Web scraping using Scrapy

We have learned about, explored, and used different Python libraries for web scraping in the current and previous chapters. Scrapy is one of the few open source web crawling frameworks written in Python that allow dynamic adaptation, a project-based scope, and modular extensibility for web scraping tasks.

As per Scrapy’s official website, https://scrapy.org/, it is simple, fast, collaborative, and extensible. Scrapy was previously maintained by Scrapinghub, but it is now maintained by Zyte (https://www.zyte.com/) and other contributors.

Listed here are a few important features that make Scrapy popular and make it stand out among the Python web crawling frameworks:

  • Built-in support for parsing, traversing, XPath, CSS selectors, and regex
  • Handles HTTP requests and responses using built-in libraries
  • Modular structure and components allow developers to focus on a specific task and manage coding collaboratively
  • Provides...
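
To make these features concrete, the following is a minimal, self-contained spider (a sketch, not the chapter's project-based example) targeting the same quotes.toscrape.com pages used earlier. Saved as, say, quotes_spider.py, it can be run with scrapy runspider quotes_spider.py -o quotes.json:

import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = ["http://quotes.toscrape.com/tag/inspirational/"]

    def parse(self, response):
        # CSS selectors pick each quote block out of the response
        for quote in response.css("div.quote"):
            yield {
                "quote": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get(),
                "tags": quote.css("a.tag::text").getall(),
            }
        # Follow pagination until no "Next" link remains
        next_page = response.css("li.next a::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)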

Deploying a web crawler

We have successfully implemented a crawler and extracted and exported data to external files using Scrapy (with the help of the scrapy CLI tool). So far, this has been done on a local machine or personal computer (PC). For most developers, the natural next step is deploying the crawler online or on a server. A deployed crawler benefits from the server's features (such as anytime, anywhere access, speed, and ample storage), as well as its dynamic nature.

We can choose any cloud platform, web hosting server, or internet-based service to upload our code and execute it. Most of these services are not completely free; we have to pay a certain amount for the desired configuration and features.

Scrapy has been famous for its architecture from the beginning. Multiple web-based platforms have allowed, and still allow, users to run their Scrapy-based projects. One of these is Scrapinghub (now Zyte). Zyte Scrapy Cloud (https://www.zyte.com/scrapy-cloud...
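
As an illustrative sketch (these steps are an assumption based on Zyte's shub command-line client, not part of the truncated text above), a typical deployment installs shub, authenticates with your API key, and pushes the project; the project ID shown is a placeholder:

pip install shub
shub login            # prompts for your Zyte Scrapy Cloud API key
shub deploy 123456    # replace 123456 with your own project ID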

Summary

In this chapter, we explored and learned about parsing and extracting data from the web using Beautiful Soup and Scrapy.

So far, in this book, we have identified many libraries and techniques that are effective and suitable for web scraping. Beautiful Soup equips developers with a handful of features to parse, traverse, and create a crawler. Scrapy provides the same features as Beautiful Soup and can be used for data extraction, but is more of a project-based framework that uses lots of libraries behind the scenes and enables you to focus only on your tasks. Because of Scrapy’s easy-to-implement and collaborative architecture, it’s quite popular among beginners, professionals, and even web-based service providers such as Zyte, ScrapeOps, and Apify.

In the next chapter, we will learn about and explore more scraping techniques and security-related issues.
