Parallel Programming with Python

You're reading from Parallel Programming with Python: Develop efficient parallel systems using the robust Python environment.

Product type: Paperback
Published: June 2014
Publisher: Packt Publishing
ISBN-13: 9781783288397
Length: 124 pages
Edition: 1st
Author: Jan Palach
Table of Contents (10 chapters)

Preface
1. Contextualizing Parallel, Concurrent, and Distributed Programming
2. Designing Parallel Algorithms
3. Identifying a Parallelizable Problem
4. Using the threading and concurrent.futures Modules
5. Using Multiprocessing and ProcessPoolExecutor
6. Utilizing Parallel Python
7. Distributing Tasks with Celery
8. Doing Things Asynchronously
Index

Using Celery to make a distributed Web crawler

We will now move on to adapting our Web crawler to Celery. We already have webcrawler_queue, which is responsible for encapsulating web crawler tasks. On the server side, we will create our crawl_task task inside the tasks.py module.
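For reference, a minimal tasks.py skeleton behind these snippets might look like the sketch below; the broker/backend URL and the routing of crawl_task to webcrawler_queue are assumptions (not taken verbatim from the book), and the task_routes setting assumes Celery 4 or later:

from celery import Celery

# Sketch of the Celery app used by tasks.py. The broker/backend URL and the
# queue routing below are assumptions; adjust them to your environment.
app = Celery('tasks',
             broker='redis://localhost:6379/0',
             backend='redis://localhost:6379/0')

app.conf.task_routes = {
    'tasks.crawl_task': {'queue': 'webcrawler_queue'},
}

A worker consuming only this queue can then be started with celery -A tasks worker -Q webcrawler_queue.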

First, we will add imports for the re and requests modules, which provide regular expression support and an HTTP client library, respectively. The code is as follows:

import re
import requests

Then, we will define our regular expression, which we studied in the previous chapters, as follows:

html_link_regex = re.compile(
    r'<a\s(?:.*?\s)*?href=[\'"](.*?)[\'"].*?>')
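As a quick sanity check, the expression pulls the href targets out of anchor tags; the sample HTML below is made up for illustration:

# Extract link targets from a snippet of HTML.
sample = '<html><body><a class="nav" href="http://example.com/a">A</a></body></html>'
print(html_link_regex.findall(sample))  # ['http://example.com/a']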

Now, we will turn the crawling function from our Web crawler into a crawl_task task by adding the @app.task decorator, and we will change the return message a bit, as follows:

@app.task
def crawl_task(url):
    request_data = requests.get(url)
    links = html_link_regex.findall(request_data.text)
    # The excerpt is truncated at this point; the message below is an assumed
    # completion that reports the URL and the links the task found.
    message = "The task %s found the following links: %s" % (url, links)
    return message