Using Selenium to Scrape the Web

So far, in Chapters 1 to 7, we have learned about several Python libraries, web- and API-based technologies, techniques for finding data and locating elements, extraction techniques, and plenty of data-related services.

“Selenium automates browsers” – a quote from https://www.selenium.dev/. Selenium is primarily a collection of tools, often described as a testing framework, used to automate the web (applications, website forms, and much more) for testing purposes. Beyond automated testing, many service- and task-based scenarios can be performed and handled using Selenium. The Selenium framework consists of various modules or components; in this chapter, we will be using Selenium WebDriver.

In general, we will install and learn about Selenium WebDriver, use WebDriver to automate websites, and use Selenium to scrape data from the web.

In this chapter, we will cover the following topics:

  • Introduction to Selenium
  • Using Selenium...

Technical requirements

You will require a web browser (Google Chrome or Mozilla Firefox), and we will be using JupyterLab for the Python code.

Please refer to the Setting things up and Creating a virtual environment sections of Chapter 2 to set up and use the environment created there. We will be using Google Chrome with Selenium WebDriver v4.10.0.

To install the selenium Python library, the following links will be very helpful:

We will require the following Python libraries for this chapter (a quick import check follows the list):

  • requests
  • selenium
  • re
  • csv
  • json
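
As a quick check, here is a minimal sketch, assuming the listed libraries are installed in the virtual environment from Chapter 2, that simply imports each of them so any missing dependency surfaces immediately (this snippet is illustrative and not part of the book's code files):

# Minimal import check for this chapter's dependencies (a sketch).
import csv    # standard library: writing scraped records to CSV
import json   # standard library: serializing scraped records to JSON
import re     # standard library: regular expressions for cleaning text

import requests                 # third-party: pip install requests
from selenium import webdriver  # third-party: pip install selenium

print("All chapter dependencies imported successfully.")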

The code files for this chapter are available online in this book’s GitHub repository: https://github.com/PacktPublishing/Hands-On-Web-Scraping-with-Python-Second-Edition/tree/main/Chapter08.

Introduction to Selenium

Testing web-based applications or systems is a compulsory step in the system development life cycle (SDLC) and is done prior to launching applications on the web. Selenium, an open source project, uses a web browser as an interface for automation and can be used for web-related or web-based activities.

Dynamic and secure web applications that use JavaScript (JS), cookies/sessions, and many more web components can be handled, processed, automated, and crawled using Selenium. “Selenium is an umbrella project for a range of tools and libraries that enable and support the automation of web browsers.” (https://www.selenium.dev/documentation/overview/). Though primarily used for browser-based automation, almost any task that can be performed in a browser can be tackled with Selenium. This makes Selenium one of the most popular and appreciated browser-based automation tools.

Selenium is...

Using Selenium WebDriver

Selenium is used for browser automation, and one of its major components, WebDriver, is the core tool for accessing browsers. WebDriver implements the browser-specific logic required during automation. It is also the core piece that binds the Selenium framework to the browser, and it is often referred to as the Selenium driver, or simply the driver. For more detailed information, visit https://www.selenium.dev/documentation/webdriver/getting_started/.
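
As a small taste of what this looks like in code (installation is covered in the following sections), here is a minimal sketch, assuming Google Chrome and selenium 4.10.0 are installed, that starts a Chrome session through WebDriver, loads a page, and closes the browser:

from selenium import webdriver

# Selenium 4.6+ bundles Selenium Manager, which locates a matching
# chromedriver automatically, so no separate driver download is required here.
driver = webdriver.Chrome()

driver.get("http://books.toscrape.com")  # the URL used later in this chapter
print(driver.title)                      # prints the title of the loaded page

driver.quit()  # always close the browser session when finished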

Before going deeper into automation or using the framework, let’s install the required libraries in the next section.

Setting things up

To explore browser automation using Python and Selenium WebDriver, we first need to install the selenium Python library and the browser-related drivers.

Important note

Selenium is a framework that contains various components, such as WebDriver, whereas selenium is a Python library (https://www.selenium.dev...
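
For illustration, here is a hedged sketch of installing the library and starting an (optionally headless) Chrome session; the version pin and the headless flag are assumptions for convenience, not requirements taken from the chapter:

# Install the library in the activated virtual environment (run in a shell):
#     pip install selenium==4.10.0
import selenium
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

print(selenium.__version__)  # verify the installed version, e.g. 4.10.0

options = Options()
options.add_argument("--headless=new")   # run Chrome without a visible window (optional)
driver = webdriver.Chrome(options=options)
print(driver.capabilities.get("browserVersion"))  # confirm the driver is talking to Chrome
driver.quit()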

Scraping using Selenium

Selenium is used for automation – primarily web testing – across various browsers and programming languages. Beyond automation itself, the features it provides are quite handy and can be utilized in tasks such as web scraping.

In this section, we will use and explore quite a few features from the selenium library for web scraping.

Example 1 – book information

In this example, we will collect some details of the books listed in the Childrens category at http://books.toscrape.com, the fictional bookstore available via the https://toscrape.com URL.

In particular, after loading mainUrl, we search for the anchor element <a> that contains the bookstore text (a partial text, or a portion of the text). Once the <a> element is traced, its href attribute can be collected for link using the get_attribute() method. The click() method clicks the element that contains the bookstore...
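
The following is a minimal sketch of the steps just described, assuming mainUrl refers to https://toscrape.com and that the target link’s visible text contains the word "bookstore" (the variable names here are illustrative, not necessarily those used in the book’s code files):

from selenium import webdriver
from selenium.webdriver.common.by import By

mainUrl = "https://toscrape.com"   # assumption: the fictional bookstore's landing page

driver = webdriver.Chrome()
driver.get(mainUrl)

# Trace the <a> element whose visible text contains "bookstore" (a partial text match)
element = driver.find_element(By.PARTIAL_LINK_TEXT, "bookstore")

link = element.get_attribute("href")   # collect the href attribute from <a>
print(link)                            # e.g. http://books.toscrape.com/

element.click()                        # follow the link that contains the bookstore text
print(driver.current_url)

driver.quit()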

Summary

The Selenium framework has many features and is widely used for application testing and browser automation. While exploring its features, we learned how to use WebDriver for scraping-related tasks. Python does have separate libraries for dealing with HTML or web content, browser properties, networking, parsing, and more; Selenium can handle all of these on its own, which is a major advantage it holds over various other Python libraries. The framework is also updated continuously, enriching the platform with testing, automation, and developer-friendly features.

In the next chapter, we will learn about regular expressions and using them to extract or scrape data.
