Hands-On Web Scraping with Python - Second Edition

Parsing robots.txt and sitemap.xml

In this section, we will introduce information related to robots.txt and sitemap.xml and follow the instructions or resources available in those two files. We mentioned them in the Data-finding techniques used in web pages section of Chapter 1. In general, the robots.txt and sitemap.xml files let us dive deep into a website's pages, or the directories containing them, to find data and to track down missing or hidden links.

The robots.txt file

The robots.txt file implements the Robots Exclusion Protocol, a web standard that websites use to exchange information with automated scripts. robots.txt carries instructions about a site's links and resources for web robots (crawlers, spiders, web wanderers, or web bots), and uses directives such as User-agent, Allow, Disallow, Sitemap, and Crawl-delay to direct robots' behavior.
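As a minimal sketch of how these directives can be consumed programmatically, Python's standard library ships urllib.robotparser, which downloads and interprets a robots.txt file. The URL below is a placeholder, not one taken from this book:

```python
# A minimal sketch using Python's built-in urllib.robotparser to read a
# robots.txt file and query its directives. The URL is a placeholder.
from urllib import robotparser

robots_url = "https://www.example.com/robots.txt"  # hypothetical URL

parser = robotparser.RobotFileParser()
parser.set_url(robots_url)
parser.read()  # downloads and parses the robots.txt file

# Check whether a given user agent may fetch a path (Allow/Disallow rules)
print(parser.can_fetch("*", "https://www.example.com/some/page.html"))

# Crawl-delay directive for a user agent, if declared (None otherwise)
print(parser.crawl_delay("*"))

# Sitemap directives, if declared (available from Python 3.8; None otherwise)
print(parser.site_maps())
```

Respecting can_fetch() and crawl_delay() before requesting pages keeps a scraper within the behavior the site has asked robots to follow.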

We can find robots.txt by appending robots.txt to a site's main URL. For example, robots.txt for https://www.python...
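Since the file sits at a predictable location, it can also be fetched and inspected directly. The following sketch uses the requests library with a placeholder domain (not the truncated example above) and simply prints the directive lines it finds:

```python
# A simple sketch that downloads a robots.txt file and prints its directives.
# The domain is a placeholder; replace it with the site you are inspecting.
import requests

base_url = "https://www.example.com"  # hypothetical site
response = requests.get(f"{base_url}/robots.txt", timeout=10)

if response.ok:
    for line in response.text.splitlines():
        # Show directives such as User-agent, Allow, Disallow,
        # Sitemap, and Crawl-delay, skipping blank lines and comments
        if line.strip() and not line.startswith("#"):
            print(line)
else:
    print(f"robots.txt not found (HTTP {response.status_code})")
```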
