You're reading from Hands-On Web Scraping with Python - Second Edition

Product type: Book
Published in: Oct 2023
Publisher: Packt
ISBN-13: 9781837636211
Edition: 2nd Edition

Author: Anish Chapagain

Anish Chapagain is a software engineer with a passion for data science, its processes, and Python programming, which began around 2007. He has been working with web scraping and analysis-related tasks for more than 5 years, and is currently pursuing freelance projects in the web scraping domain. Anish previously worked as a trainer, web/software developer, and as a banker, where he was exposed to data and gained further insights into topics including data analysis, visualization, data mining, information processing, and knowledge discovery. He has an MSc in computer systems from Bangor University (University of Wales), United Kingdom, and an Executive MBA from Himalayan Whitehouse International College, Kathmandu, Nepal.

Searching and Processing Web Documents

So far, we have learned about web scraping, data-finding techniques, and related technologies that help us with scraping, and we’ve identified a few reasons to select the Python programming language.

Web- or website-based content exists as HTML elements, as predefined documents, or as objects of some kind (such as JSON). For extraction purposes, we need to analyze and identify such content, patterns, and objects. HTML-based elements are generally identified with XML Path Language (XPath) and Cascading Style Sheets (CSS) selectors, which are traversed and processed with scraping logic to extract the desired content. The lxml library will be used in this chapter to process markup documents, and we will use browser-based Developer Tools (DevTools) to find content and identify elements.

In particular, we will learn about the following topics in this chapter:

  • Introducing XPath and CSS selectors to process markup documents
  • Using web browser...

Technical requirements

You will need the Google Chrome or Mozilla Firefox web browser, and we will be using Python notebooks with JupyterLab.

Please refer to the Setting things up and Creating a virtual environment sections of Chapter 2 and continue using the environment we created.

The Python libraries that are required for this chapter are as follows (a quick way to verify them is shown after the list):

  • lxml
  • urllib
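
Before moving on, it can help to confirm that both libraries are importable from the environment. The following is a minimal sanity check; lxml is a third-party package (installable with pip install lxml), while urllib ships with the Python standard library:

    # Verify this chapter's requirements are importable.
    # urllib is part of the Python standard library; lxml must be
    # installed into the virtual environment (pip install lxml).
    import urllib.request
    from lxml import etree

    print(etree.__version__)  # prints the installed lxml version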

The code files for this chapter are available online on GitHub: https://github.com/PacktPublishing/Hands-On-Web-Scraping-with-Python-Second-Edition/tree/main/Chapter03.

Important note

Web- or website-based content refers to the responses or page sources that are received after processing requests to a URL. Content can be of various types, such as PDF, CSV, TXT, XML, HTML, and JSON. In general, in this chapter, we are talking about HTML, page source, or markup documents as our primary content, unless stated otherwise.

Introducing XPath and CSS selectors to process markup documents

In the Understanding the latest web technologies and Data-finding techniques used in web pages sections in Chapter 1, we explored and discussed HTML and XML markup documents and their availability across the web.

Normally, markup is a kind of labeling or tagging of the parts, sections, or entities in a document, which helps to identify the content and even process it using a third-party application. These labels are called tags in HTML (https://www.w3.org/html/) and nodes in XML (https://www.w3.org/standards/xml/). Hence, a markup document has a tree-like structure containing tags or nodes (nested or individual), also known as an element tree.
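
To make the element tree concrete, here is a minimal sketch (the HTML snippet and names are illustrative) that parses a small document with lxml and locates the same elements first with an XPath expression and then with a CSS selector; the CSS route needs the optional cssselect package (pip install cssselect):

    # Parse a small HTML document and query its element tree.
    from lxml import html

    doc = html.fromstring("""
    <html><body>
      <ul id="links">
        <li><a href="/a">First</a></li>
        <li><a href="/b">Second</a></li>
      </ul>
    </body></html>""")

    # XPath: text of every <a> element inside the list
    print(doc.xpath('//ul[@id="links"]//a/text()'))  # ['First', 'Second']

    # CSS selector: the equivalent query (requires cssselect)
    print([a.text for a in doc.cssselect('ul#links a')])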

Important note

XML documents have been popular and common across the web since the early days of the internet era. Readability, encoding support, interoperability, and data exchangeability are a few of XML's core strengths. XML is still supported by the latest web technologies...

Using web browser DevTools to access web content

DevTools are among the most important tools available to us for exploring response content of any type, such as HTML, JSON, XML, or TXT.

In the Developer tools section of Chapter 1, we briefly introduced browser-based DevTools and their various helpful, information-packed panels. In this section, as the heading suggests, we will use DevTools to locate, find, and access the web content we are seeking. Normally, we will search for and find the elements holding content, much as we did with XPath and CSS selector expressions.

We will explore web content using Google Chrome. Chrome's built-in DevTools offer plenty of helpful features: information on cookies and headers, cURL scripts, prettifying the DOM, DOM navigation, displaying line numbers, folding/unfolding code blocks, element selection and identification, content searching, and generating XPath and CSS selector expressions...

Scraping using lxml – a Python library

The lxml library is an XML toolkit with a rich set of APIs for processing XML and HTML. lxml is preferred over other XML-based libraries in Python for its high speed and effective memory management, and it has various other features for handling both small and large XML files.

Python programmers use lxml to process XML and HTML documents. There are plenty of other such libraries in Python; a few are even built on top of lxml with extra add-ons. lxml is also used as a parser engine in Python libraries such as Beautiful Soup (https://www.crummy.com/software/BeautifulSoup/bs4/doc/) and pandas (https://pandas.pydata.org/).

DOM parsing, element-tree traversal, XPath, and CSS selector support are the features that make lxml effective and efficient for tasks such as web scraping. For more details on lxml and its documentation, please visit https://lxml.de/.
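
As a quick illustration of how these pieces fit a scraping task, the following minimal sketch (the target URL is illustrative) fetches a page with urllib, builds an lxml element tree, and traverses it:

    # Fetch a page with urllib and parse it with lxml.
    from urllib.request import urlopen
    from lxml import html

    with urlopen('https://www.python.org/') as response:
        tree = html.fromstring(response.read())

    # Traverse the element tree: page title and the first few link targets
    print(tree.findtext('.//title'))
    print(tree.xpath('//a/@href')[:5])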

Important note

lxml provides native support for XPath and XSLT and is built on the powerful...

Parsing robots.txt and sitemap.xml

In this section, we will introduce information related to robots.txt and sitemap.xml and follow the instructions or resources available in those two files. We mentioned them in the Data-finding techniques used in web pages section of Chapter 1. In general, the robots.txt and sitemap.xml files let us dive deep into a website's pages, or its directories of pages, to find data or track down missing or hidden links.

The robots.txt file

The robots.txt file, which implements the Robots Exclusion Protocol, is a web-based standard or protocol used by websites to exchange information with automated scripts. robots.txt carries instructions regarding site-based links or resources for web robots (crawlers, spiders, web wanderers, or web bots), and uses directives such as Allow, Disallow, Sitemap, Crawl-delay, and User-agent to direct a robot's behavior.

We can find robots.txt by appending robots.txt to a site's main URL. For example, robots.txt for https://www.python...
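
As a brief sketch of acting on these directives programmatically (the URL and user agent here are illustrative), the standard library's urllib.robotparser can read a site's robots.txt and answer whether a given URL may be fetched:

    # Read robots.txt and check whether a URL may be crawled.
    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser('https://www.python.org/robots.txt')
    rp.read()  # fetches and parses the file

    # can_fetch(user_agent, url) applies the Allow/Disallow rules
    print(rp.can_fetch('*', 'https://www.python.org/about/'))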

Summary

In this chapter, we learned about DOM navigation, XPath, and CSS selectors using the page source and DevTools. We also learned about reading and accessing XML and HTML files and defining and using XPath and CSS selector expressions for content extraction.

We also looked at various aspects of content extraction, plus the benefits and restrictions imposed by robots.txt and sitemaps. The main objective of the chapter was to demonstrate core features related to nodes and element identification in the HTTP responses we received, using the lxml and urllib libraries as required, and to deal with XML and HTML files. Finally, web scraping techniques were deployed in an example, and the collected data was written to a CSV file.

In the next chapter, we will learn more about web scraping techniques and about some new Python libraries.

Further reading
