Using Regular Expressions and PDFs

So far, we have learned about and explored some of the core Python libraries for web communication, content reading, and browser automation, in the context of finding and extracting data.

Regular expressions (also referred to as Regex, regex, or RegEx – we will use regex throughout the rest of this chapter) are built using a predefined set of characters to form a pattern that is used for searching and similar activities. In Chapters 3 and 4, when carrying out web scraping, we tested and applied various available features, such as CSS selectors, XPath, and PyQuery, to find and locate specific elements and content. Regex helps us with pattern matching – we are knowingly or unknowingly using regex most of the time when working on documents or any textual content.

In a data-related context, it is very hard to avoid activities such as finding, searching, and matching. Regex provides us with a simple and elegant approach to dealing with such...

Technical requirements

A web browser (Google Chrome or Mozilla Firefox) will be required, and we will be using JupyterLab for the Python code.

Please refer to the Setting things up and Creating a virtual environment sections in Chapter 2 to continue setting up and using the environment created there. Refer to https://pypdf2.readthedocs.io/en/3.0.0/user/installation.html to install PyPDF2.

The Python libraries that are required for this chapter are as follows:

  • requests
  • re
  • pypdf2

The code files for this chapter are available online in this book’s GitHub repository: https://github.com/PacktPublishing/Hands-On-Web-Scraping-with-Python-Second-Edition/tree/main/Chapter09.
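
As a quick sanity check before moving on, the following minimal sketch (illustrative only, not part of the book’s code files) imports the three libraries and prints the versions of the third-party ones, so you can confirm that the environment prepared in Chapter 2 is ready:

import re          # built into the Python standard library
import requests    # third-party library installed via pip
import PyPDF2      # third-party library installed via pip

print("requests version:", requests.__version__)
print("PyPDF2 version:", PyPDF2.__version__)
print("re module loaded:", re.__name__)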

Overview of regex

There are plenty of cases where it’s quite hard or even impossible to locate some web-based content or elements with XPath and CSS selectors. Fortunately, we can overcome such situations using a regex. A regex is an expression, built from strings of characters, that is used to find or search content by identifying an existing pattern.

In web scraping and extraction-related activities, a regex is also used as either a first-choice or a last-resort pattern-matching option. Patterns can be defined in various steps, often accompanied by special notations that represent predefined rules. Writing a regex is much like grouping and writing plain text, and many libraries and text-related features exist that use these expressions, providing us with handy, easy-to-use functions.
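
As a minimal illustration of such notation (an assumed example, not taken from the chapter’s code), the pattern \d+ represents the predefined rule "one or more digits", and re.findall() returns every substring of the input that matches it:

import re

text = "Invoice 66 was issued on day 12 of month 5."
# \d+ matches one or more digit characters
print(re.findall(r"\d+", text))    # prints ['66', '12', '5']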

The latest code editors, document readers, and writing programs all provide facilities such as searching within files, across multiple pages, and inside project folders, as well as find and replace. To use these options, we need to input text...

Regex with Python

Python programming is known for its simple, readable, reusable, and short code. Python is also popular for its scientific computing and text-processing features (natural language processing (NLP), sentiment analysis (SA), and many more). Regex is also one of the core powers of Python, as re (the regex library) is a built-in library that ships with every Python installation.

Let’s dive deep into re and have a look at some of the features using code. For this example, we will use the following famous quote from Sadhguru (available at https://isha.sadhguru.org/us/en/wisdom/type/quotes):

If you do not turn against yourself, the Human Potential is limitless.

We have defined a Python variable named quote that contains the preceding quote as its value:

quote="If you do not turn against yourself, the Human Potential is limitless"

In the coming sections, we will find words that are at least three characters long...
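
As a preview, a minimal sketch of that task (the exact pattern used in the chapter may differ) applies re.findall() to quote with the quantifier {3,}, which matches word characters repeated three or more times:

import re

quote = "If you do not turn against yourself, the Human Potential is limitless"

# \b marks a word boundary; \w{3,} matches three or more word characters
words = re.findall(r"\b\w{3,}\b", quote)
print(words)
# ['you', 'not', 'turn', 'against', 'yourself', 'the', 'Human', 'Potential', 'limitless']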

Using regex to extract data

In the previous sections of this chapter, we explored various aspects of regex, with examples. Regex can be applied to all types of content, and it is worth analyzing aspects such as the content itself, extensibility, and the time and (machine) resources required. This analysis is important for figuring out which extraction-related option to choose, such as XPath, CSS selectors, PyQuery, or regex.

Important note

It’s often mentioned in the literature that regex should only be applied (for data extraction) when the content is unstructured, but this is not the case. Regex can be used with any type of content (structured or unstructured).

To extract data from a scraping point of view, we’ll work through a few examples using regex and explore some of its functionality and properties.

Example 1 – Yamaha dealer information

In this example, we will be collecting information on motor dealers (more precisely, the dealers’ geo-locations) from https://yamaha-moto.cfaomotors...
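
Before looking at the actual page, here is a rough sketch of the general approach; the URL and the data-lat/data-lng attribute names below are placeholders chosen for illustration, not the real markup of the dealer-locator page. We fetch the page with requests and pull coordinate-like values out of the HTML with a regex:

import re
import requests

# Placeholder URL - the real dealer-locator URL is abbreviated in the text above
url = "https://example.com/dealer-locator"
html = requests.get(url, timeout=30).text

# Hypothetical markup such as data-lat="27.7172" data-lng="85.3240";
# adjust the attribute names to match the real page source
pattern = r'data-lat="(-?\d+\.\d+)"\s+data-lng="(-?\d+\.\d+)"'

for lat, lng in re.findall(pattern, html):
    print(lat, lng)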

Data extraction from a PDF

PDF is a rich document format (in terms of the document features and formatting it can contain) that can be created, shared, and accessed on any supporting device. It is no exaggeration to say that PDF files are everywhere, supported by all kinds of electronic devices and systems. It is also quite useful to know that Word documents, PowerPoint presentations, HTML, Jupyter notebooks, analysis reports from various applications, and many more content types can be exported and saved as PDF.

We often find various types of data (such as textual, tabular, and images) in a PDF file. In Chapters 3 and 4, we saw how to extract web-based content using Python. Here, we will be using the PyPDF2 Python library to extract data from PDF files.

In the next sections, we will install and explore PyPDF2 from a data extraction perspective.
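
As a preview of what is ahead, a minimal PyPDF2 3.x sketch (the file name is a placeholder) opens a local PDF with PdfReader and extracts the text of every page:

from PyPDF2 import PdfReader

# Placeholder file name - replace it with the PDF you want to extract from
reader = PdfReader("sample.pdf")
print("Number of pages:", len(reader.pages))

for page in reader.pages:
    # extract_text() returns the text content of the page as a string
    print(page.extract_text())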

The PyPDF2 library

PyPDF2 (https://pypdf2.readthedocs.io/en/3.0.0/index.html) is a free, open source (https://github...

Summary

Regex and its features are popular for use with not only unstructured but also structured data. Regex provides us with many options, and the code will differ from one use case to another. An advantage of regex is that it can be applied in a wide range of cases; there might be a few extra steps to deal with, but regex lets us focus directly on the target. The process of PDF extraction is still evolving, and while other approaches can be taken, regex remains one of the most important components of data-related tasks.

The topics covered in this chapter have given you a practical perspective on using regex where required. Regex plays an irreplaceable role in data extraction, regardless of content structure and document type.

In the next chapter, we will be learning about data mining and data visualization.
