Hands-On Web Scraping with Python - Second Edition

Product type: Book
Published: Oct 2023
Publisher: Packt
ISBN-13: 9781837636211
Pages: 324
Edition: 2nd Edition
Author: Anish Chapagain

Table of Contents (20 chapters)

Preface
Part 1: Python and Web Scraping
  Chapter 1: Web Scraping Fundamentals
  Chapter 2: Python Programming for Data and Web
Part 2: Beginning Web Scraping
  Chapter 3: Searching and Processing Web Documents
  Chapter 4: Scraping Using PyQuery, a jQuery-Like Library for Python
  Chapter 5: Scraping the Web with Scrapy and Beautiful Soup
Part 3: Advanced Scraping Concepts
  Chapter 6: Working with the Secure Web
  Chapter 7: Data Extraction Using Web APIs
  Chapter 8: Using Selenium to Scrape the Web
  Chapter 9: Using Regular Expressions and PDFs
Part 4: Advanced Data-Related Concepts
  Chapter 10: Data Mining, Analysis, and Visualization
  Chapter 11: Machine Learning and Web Scraping
Part 5: Conclusion
  Chapter 12: After Scraping – Next Steps and Data Analysis
Index
Other Books You May Enjoy

Working with the Secure Web

So far, we have learned about the web, web content, reverse-engineering techniques, data-finding techniques, a few Python libraries, and a framework that we can employ to access and scrape the desired web content.

Many security-related concerns exist on today's web platforms, along with measures to address them. Numerous web applications, browser extensions, and web-based service providers work to protect us and our web-based systems against unauthenticated usage and unauthorized access.

The growing use of internet applications and e-commerce makes a secure web (or security-enabled web features) a high priority, to deal with actions that are harmful or even illegal. We often receive irrelevant emails, or spam, containing information that we did not ask for.

It’s quite challenging from a web scraping perspective to deal with such issues, but the concept of ethical hacking makes it more viable, even from an application...

Technical requirements

A web browser (Google Chrome or Mozilla Firefox) is required, and we will also use JupyterLab to run the Python code.

Please refer to the Setting things up and Creating a virtual environment sections in Chapter 2 to continue using the environment created there.

The Python libraries that are required for this chapter are as follows:

  • requests
  • pyquery

The code files for this chapter are available online in this book’s GitHub repository: https://github.com/PacktPublishing/Hands-On-Web-Scraping-with-Python-Second-Edition/tree/main/Chapter06.

Exploring secure web content

Today’s web and internet technologies are quite vulnerable in terms of web security (content, authorization, illegal access, and so on). We want the web to be safe, and the content that we browse, search, or view to be genuine, violating no legal or ethical standards and no human rights.

The web must be accessible and available to everyone who seeks information, effectively and by following ethical practices. We often encounter web content that is not quite what we were looking for, or hear of content being tampered with, systems being hacked into, or private and sensitive information being leaked illegally. Although many of these cases are beyond our control, we can reduce vulnerabilities and make the web a much safer place.

In many new technologies, security-related concerns have been identified and solutions have been implemented. There are even applications and organizations concerned...

HTML <form> processing using Python

Form processing has various titles and functions, such as search, filter, login, registration, submission, and verification. In this section, we will explore http://quotes.toscrape.com/search.aspx, as shown in Figure 6.2, and process or use the forms available on the page to extract or filter out the results, based on a choice of options (Author or Tag):

Figure 6.2: The search form (Author and Tag)

Important note

We need to investigate the available <form> tag, the options or form elements that exist, and the resulting pages as they appear on form submission. It’s also advisable to explore form-related actions using DevTools, as this will help you to understand the flow of systems, links, availability, and resources such as HTTP headers, cookies, and payload.

During page analysis, we found that only a single form, named filterform, exists, and options are available for the Author name only; Tag...
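The flow described above can be sketched in Python. This is a minimal sketch, not the book's exact code: it assumes the form posts its fields (an author value, a tag value, and a hidden server-state field here called __VIEWSTATE) to a filter endpoint, and the helper name build_filter_payload and the sample HTML are hypothetical. Verify the real field names and form action in DevTools before relying on them:

```python
import re

# Hypothetical helper: collect the hidden state value from the search page
# and build the POST payload for the author/tag filter form. The field names
# (author, tag, __VIEWSTATE) are assumptions -- confirm them in DevTools.
def build_filter_payload(page_html, author, tag=""):
    match = re.search(
        r'<input[^>]+name="__VIEWSTATE"[^>]+value="([^"]*)"', page_html)
    viewstate = match.group(1) if match else ""
    return {"author": author, "tag": tag, "__VIEWSTATE": viewstate}

# A cut-down stand-in for the page source described in the text
sample = '''<form name="filterform" action="/filter.aspx" method="post">
  <select name="author"><option>Albert Einstein</option></select>
  <input type="hidden" name="__VIEWSTATE" value="abc123" />
</form>'''

payload = build_filter_payload(sample, "Albert Einstein")
print(payload)

# Live usage (network access required):
# import requests
# page = requests.get("http://quotes.toscrape.com/search.aspx")
# data = build_filter_payload(page.text, "Albert Einstein")
# result = requests.post("http://quotes.toscrape.com/filter.aspx", data=data)
```

Note that a hidden state value of this kind is usually issued per page load, so it must be re-read from a fresh GET before each submission rather than hardcoded.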

User authentication and cookies

User authentication (managing and handling user credentials) is another form of web security. In this section, we will use the user authentication feature available on the http://quotes.toscrape.com/login site, along with cookies.

As seen in Figure 6.6, http://quotes.toscrape.com/login contains a page with the Login text, and a form asking for Username and Password:

Figure 6.6: User login page

Analyzing the page source, there’s only a single <form> element, with one hidden input element named csrf_token, two text input elements with type="text" (username and password), and finally, an input element of the submit type with the value Login, as seen in Figure 6.7:

Figure 6.7: Login page – <form> source code

Important note

http://toscrape.com is a demo site, enriched with content for scraping-related purposes. The site does not provide a user registration...
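The login flow described in this section (read the hidden csrf_token from the form, then submit it together with the credentials over one session so the cookie survives) can be sketched as follows. The sample HTML mirrors the form in Figure 6.7 but is abbreviated, and the extraction uses a simple regular expression rather than a full parser, so treat this as a sketch only:

```python
import re

# Pull the hidden csrf_token value out of the login <form> source.
def extract_csrf_token(login_html):
    match = re.search(
        r'<input[^>]+name="csrf_token"[^>]+value="([^"]*)"', login_html)
    return match.group(1) if match else ""

# Abbreviated stand-in for the login form seen in Figure 6.7
sample = ('<form action="/login" method="post">'
          '<input type="hidden" name="csrf_token" value="TOKEN123">'
          '<input type="text" name="username">'
          '<input type="text" name="password">'
          '<input type="submit" value="Login"></form>')

token = extract_csrf_token(sample)
payload = {"csrf_token": token, "username": "demo", "password": "demo"}
print(payload["csrf_token"])

# Live usage: a requests.Session carries the session cookie between the
# GET (which issues the token) and the POST (which must echo it back).
# import requests
# with requests.Session() as s:
#     page = s.get("http://quotes.toscrape.com/login")
#     payload["csrf_token"] = extract_csrf_token(page.text)
#     s.post("http://quotes.toscrape.com/login", data=payload)
```

Using a session object rather than bare requests.get/post calls is the important design choice here: without it, the cookie set by the login response is lost and subsequent requests are unauthenticated.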

Using proxies

A proxy (HTTP proxy or web proxy) is considered middleware for the web: a gateway used to communicate between a client and a server. Put simply, a proxy is an Internet Protocol (IP) address with some ports assigned to it. On the web, an HTTP request starts from the client with its own IP; when routed through the proxy, it is forwarded to the actual destination under the proxy's IP, and the response travels back the same way. So, a proxy works as a transport layer that resides between the client and the server or destination.

On the internet, there are plenty of services (paid and free) offered by various organizations providing proxies that work as a filter layer for their customers, dealing with security-related threats and malware and providing content filtering, spyware protection, and many other capabilities. There are plenty of benefits to using proxies; a few of them are listed here:

  • Privacy: The destination server does not...
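In requests, a proxy is configured as a mapping from URL scheme to proxy address. A minimal sketch follows; the address used is a placeholder from the documentation-only 203.0.113.0/24 range, not a working proxy, and the small helper proxy_for is added here just to illustrate how the mapping is consulted:

```python
# Scheme -> proxy address mapping, in the shape requests expects. The
# address is a documentation-range placeholder; substitute a proxy you
# are authorised to use.
proxies = {
    "http": "http://203.0.113.10:8080",
    "https": "http://203.0.113.10:8080",
}

def proxy_for(url, proxies):
    """Return the proxy entry that applies to this URL's scheme."""
    scheme = url.split("://", 1)[0]
    return proxies.get(scheme)

print(proxy_for("http://quotes.toscrape.com", proxies))

# Live usage (a reachable proxy is required):
# import requests
# resp = requests.get("http://quotes.toscrape.com",
#                     proxies=proxies, timeout=10)
# print(resp.status_code)
```

requests will also pick up proxies from the standard HTTP_PROXY and HTTPS_PROXY environment variables when no explicit mapping is passed.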

Summary

Web security is a compulsory component of today's internet. We need to participate in web scraping and extraction ethically, for the betterment of the information available everywhere on the web. In this chapter, we learned the basics of processing HTML forms, cookies, and sessions, and of using proxies, with the help of the Python programming language, from a web scraping perspective.

In the next chapter, we will use web-based APIs to collect relevant data.

Further reading
