Making the Scraper as a Service Real

In this chapter, we will cover:

  • Creating and configuring an Elastic Cloud trial account
  • Accessing the Elastic Cloud cluster with curl
  • Connecting to the Elastic Cloud cluster with Python
  • Performing an Elasticsearch query with the Python API
  • Using Elasticsearch to query for jobs with specific skills
  • Modifying the API to search for jobs by skill
  • Storing configuration in the environment
  • Creating an AWS IAM user and a key pair for ECS
  • Configuring Docker to authenticate with ECR
  • Pushing containers into ECR
  • Creating an ECS cluster
  • Creating a task to run our containers
  • Starting and accessing the containers in AWS

Introduction

In this chapter, we will first add a feature to search job listings using Elasticsearch and extend the API with this capability. Then we will move the Elasticsearch functionality to Elastic Cloud, a first step in cloud-enabling our scraper. Next, we will move our Docker containers to Amazon Elastic Container Registry (ECR), and finally run our containers (and the scraper) in Amazon Elastic Container Service (ECS).

Creating and configuring an Elastic Cloud trial account

In this recipe, we will create and configure an Elastic Cloud trial account so that we can use Elasticsearch as a hosted service. Elastic Cloud is a cloud service offered by the creators of Elasticsearch, and it provides a completely managed Elasticsearch implementation.

While we have examined putting Elasticsearch in a Docker container, actually running such a container within AWS is very difficult, due to memory requirements and other system configurations that are complicated to get working within ECS. Therefore, for a cloud solution, we will use Elastic Cloud.

How to do it

We'll proceed with the recipe as follows:

  1. Open your browser...

Accessing the Elastic Cloud cluster with curl

Elasticsearch is fundamentally accessed via a REST API. Elastic Cloud is no different; it exposes an identical API. We just need to know how to construct the URL properly in order to connect. Let's look at that.

How to do it

We proceed with the recipe as follows:

  1. When you signed up for Elastic Cloud, you were given various endpoints and variables, such as username and password. The URL was similar to the following:
https://<account-id>.us-west-2.aws.found.io:9243
Depending on the cloud and region, the rest of the domain name, as well as the port, may differ.
  2. We'll use a slight variant of the following URL to communicate and authenticate with Elastic...

Connecting to the Elastic Cloud cluster with Python

Now let's look at how to connect to Elastic Cloud using the Elasticsearch Python library.

Getting ready

The code for this recipe is in the 11/01/elasticcloud_starwars.py script. This script will scrape Star Wars character data from the swapi.co API/website and put it into the Elastic Cloud.

How to do it

We proceed with the recipe as follows:

  1. Execute the file as a Python script:
$ python elasticcloud_starwars.py
  2. This will loop through up to 20 characters and drop them into the sw index with a document type of...
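As a rough sketch of what this pattern looks like (this is not the book's exact script; the endpoint, credentials, and document type here are placeholders you would replace with your own values):

from elasticsearch import Elasticsearch
import requests

# Placeholder Elastic Cloud endpoint and credentials -- use the values
# from your own deployment.
es = Elasticsearch(
    ['https://<account-id>.us-west-2.aws.found.io:9243'],
    http_auth=('elastic', '<password>'))

# Pull up to 20 Star Wars characters from swapi.co and index each one
# into the sw index.
for i in range(1, 21):
    resp = requests.get('https://swapi.co/api/people/{}/'.format(i))
    if resp.status_code == 200:
        es.index(index='sw', doc_type='people', id=i, body=resp.json())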

Performing an Elasticsearch query with the Python API

Now let's look at how we can search Elasticsearch using the Elasticsearch Python library. We will perform a simple search on the Star Wars index.

Getting ready

Make sure to modify the connection URL in the samples to your URL.

How to do it

The code for the search is in the 11/02/search_starwars_by_haircolor.py script, and can be run by simply executing the script. This is a fairly simple search to find the characters whose hair color is blond:

  1. The main portion of the code is:
es = Elasticsearch(
[
...
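The remainder of the connection setup mirrors the previous recipe. As a hedged sketch of what the complete search might look like (the endpoint and credentials are placeholders, and the hair_color field name comes from the swapi.co data):

from elasticsearch import Elasticsearch

# Placeholder Elastic Cloud endpoint and credentials.
es = Elasticsearch(
    ['https://<account-id>.us-west-2.aws.found.io:9243'],
    http_auth=('elastic', '<password>'))

# Match query against the hair_color field of the documents in the sw index.
result = es.search(index='sw', body={
    'query': {'match': {'hair_color': 'blond'}}
})

for hit in result['hits']['hits']:
    print(hit['_source']['name'])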

Using Elasticsearch to query for jobs with specific skills

In this recipe, we move back to using the crawler that we created to scrape and store job listings from StackOverflow in Elasticsearch. We then extend this capability to query Elasticsearch to find job listings that contain one or more specified skills.

Getting ready

The example we will use is coded to use the Elastic Cloud engine rather than a local Elasticsearch engine; you can change that if you want. For now, we will perform this process within a single Python script that is run locally, not inside a container or behind an API.

How to do it

...
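While the full recipe code is not reproduced here, a minimal sketch of such a query might look like the following (the joblistings index name and the skills field are assumptions for illustration, not necessarily the names used by the recipe):

from elasticsearch import Elasticsearch

# Placeholder Elastic Cloud endpoint and credentials.
es = Elasticsearch(
    ['https://<account-id>.us-west-2.aws.found.io:9243'],
    http_auth=('elastic', '<password>'))

def find_job_listings_with_skills(skills):
    # Require every requested skill to match in the listing's skills field.
    query = {
        'query': {
            'bool': {
                'must': [{'match': {'skills': skill}} for skill in skills]
            }
        }
    }
    return es.search(index='joblistings', body=query)

results = find_job_listings_with_skills(['python', 'elasticsearch'])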

Modifying the API to search for jobs by skill

In this recipe, we will modify our existing API to add a method to enable searching for jobs with a set of skills.

How to do it

We will be extending the API code, making two fundamental changes to its implementation. The first is that we will add an additional Flask-RESTful API implementation for the search capability, and the second is that we will make the addresses of both Elasticsearch and our own microservice configurable through environment variables.

The API implementation is in 11/04_scraper_api.py. By default, the implementation attempts to connect to Elasticsearch on the local system. If you are using Elastic Cloud, make sure to change the URL (and make...
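Purely as an illustrative sketch (not the book's exact code), the added search resource might look something like this; the endpoint path, port, index name, and field names are hypothetical:

import os
from flask import Flask, request
from flask_restful import Resource, Api
from elasticsearch import Elasticsearch

app = Flask(__name__)
api = Api(app)

class JobSearch(Resource):
    def post(self):
        # The Elasticsearch address comes from the environment (see the next recipe).
        es = Elasticsearch([os.environ.get('ES_HOST', 'localhost')])
        skills = request.get_json().get('skills', [])
        query = {'query': {'bool': {
            'must': [{'match': {'skills': s}} for s in skills]}}}
        return es.search(index='joblistings', body=query)

api.add_resource(JobSearch, '/joblistings/search')

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)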

Storing configuration in the environment

This recipe points out a change made to the API code in the previous recipe to support one of the factors of a 12-Factor application. A 12-Factor app is one that is designed to be run as software-as-a-service. We have been moving our scraper in this direction for a while now, breaking it into components that can be run independently as scripts or in containers and, as we will see soon, in the cloud. You can learn all about 12-Factor apps at https://12factor.net/.

Factor 3 states that we should pass configuration to our application through environment variables. While we definitely don't want to hardcode things such as the URLs of external services, it also isn't best practice to use configuration files. When deploying to various environments, such as containers or the cloud, a config file will often...
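In practice this is as simple as reading the values with os.environ and falling back to sensible local defaults; the variable names below are illustrative, not prescribed by the recipe:

import os

# Addresses of external services come from the environment, with local
# defaults for development.
es_host = os.environ.get('ES_HOST', 'localhost')
skills_service_url = os.environ.get('SKILLS_SERVICE_URL', 'http://localhost:8080')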

Creating an AWS IAM user and a key pair for ECS

In this recipe, we will create an Identity and Access Management (IAM) user account that allows us to access the AWS Elastic Container Service (ECS). We need this because we are going to package our scraper and API up in Docker containers (we've done this already), and now we are going to move these containers into AWS ECS and run them there, making our scraper a true cloud service.

Getting ready

This assumes that you have already created an AWS account, which we used earlier in the book when we looked at SQS and S3. You don't need a different account, but we need to create a non-root user that has permissions to use ECS.

...
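As an illustrative sketch (not necessarily how the recipe itself does it), creating such a user and key pair with boto3 might look like the following; the user name is a placeholder and the managed policy ARN is an example, so check which ECS policy applies to your account:

import boto3

iam = boto3.client('iam')

# Create a non-root user for working with ECS (user name is a placeholder).
iam.create_user(UserName='scraper-ecs-user')

# Attach an ECS-related managed policy (example ARN -- verify the policy
# appropriate for your account).
iam.attach_user_policy(
    UserName='scraper-ecs-user',
    PolicyArn='arn:aws:iam::aws:policy/AmazonEC2ContainerServiceFullAccess')

# Create the access key pair used by the AWS CLI and Docker/ECR tooling.
key = iam.create_access_key(UserName='scraper-ecs-user')['AccessKey']
print(key['AccessKeyId'], key['SecretAccessKey'])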

Configuring Docker to authenticate with ECR

In this recipe, we will configure Docker so that it can push our containers to the Elastic Container Registry (ECR).

Getting ready

A key element of the Docker ecosystem is the container repository. We have previously used Docker Hub to pull containers, but we can also push our containers to Docker Hub, or to any Docker-compatible container registry such as ECR. This is not without its troubles: the docker CLI does not natively know how to authenticate with ECR, so we have to jump through a few hoops to get it to work.

Make sure that the AWS command line tools are installed. These are required to get Docker authenticated to work with ECR. Good instructions are found at https:...
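As a hedged illustration of what the hoop-jumping involves, the following boto3 sketch asks ECR for a temporary authorization token and turns it into a docker login command (assuming boto3 is installed and AWS credentials are configured):

import base64
import boto3

ecr = boto3.client('ecr')

# ECR issues a short-lived, base64-encoded "user:password" token.
auth = ecr.get_authorization_token()['authorizationData'][0]
user, password = base64.b64decode(auth['authorizationToken']).decode().split(':')

# Print the docker login command for the registry endpoint.
print('docker login -u {} -p {} {}'.format(user, password, auth['proxyEndpoint']))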

Pushing containers into ECR

In this recipe we will rebuild our API and microservice containers and push them to ECR. We will also push a RabbitMQ container to ECR.

Getting ready

Bear with this, as it can get tricky. In addition to our own container images, we also need to push a RabbitMQ container to ECR, because ECS doesn't talk to Docker Hub and can't pull that image directly. That would be immensely convenient, but it would probably also be a security issue.

Pushing these containers to ECR from a home internet connection can take a long time. I created a Linux instance in EC2 in the same region as my ECR, pulled down the code from GitHub, built the containers on that EC2 system, and then pushed them to ECR. The push...

Creating an ECS cluster

Elastic Container Service (ECS) is an AWS service that runs your Docker containers in the cloud. There is a lot of power (and detail) in using ECS. We will look at a simple deployment that runs our containers on a single EC2 virtual machine. Our goal is to get our scraper to the cloud. Extensive detail on using ECS to scale out the scraper is for another time (and book).

How to do it

We start by creating an ECS cluster using the AWS CLI. Then we will create one EC2 virtual machine in the cluster to run our containers.

I've included a shell file, in the 11/06 folder, named create-cluster-complete.sh, which runs through all of these commands in one go.

There are a number of steps to getting...
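As a hedged preview of the very first step, creating the (empty) cluster itself, the boto3 equivalent of the CLI call might look like the following; the cluster name is a placeholder, and the EC2 instance that joins the cluster still needs to be created separately with an ECS-optimized image and instance role:

import boto3

ecs = boto3.client('ecs')

# Create an empty ECS cluster (name is a placeholder).
ecs.create_cluster(clusterName='scraper-cluster')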

Creating a task to run our containers

In this recipe, we will create an ECS task. A task tells the ECS cluster manager which containers to run; it is a description of the containers in ECR and the parameters required for each. The task description will feel a lot like what we have done with Docker Compose.

Getting ready

The task definition can be built with the GUI or by submitting a task definition JSON file. We will use the latter technique and examine the structure of the file, td.json, which describes how to run our containers together. This file is in the 11/07 recipe folder.

How to do it

...
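The recipe submits td.json to ECS; as a rough sketch of the same idea expressed with boto3 (the family name, image URI, memory limit, and port mapping below are hypothetical values, not the book's exact task definition):

import boto3

ecs = boto3.client('ecs')

# Register a task definition describing one of our containers.
ecs.register_task_definition(
    family='scraper',
    containerDefinitions=[{
        'name': 'rabbitmq',
        # Hypothetical ECR image URI -- use your own registry address.
        'image': '<account-id>.dkr.ecr.us-west-2.amazonaws.com/rabbitmq:latest',
        'memory': 256,
        'portMappings': [{'containerPort': 5672, 'hostPort': 5672}],
    }])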

Starting and accessing the containers in AWS

In this recipe, we will start our scraper as a service by telling ECS to run our task definition. We will then check that it is running by issuing a curl to get the contents of a job listing.

Getting ready

We need to do one quick thing before running the task. Tasks in ECS go through revisions. Each time you register a task definition with the same name ("family"), ECS defines a new revision number. You can run any of the revisions.

To run the most recent one, we need to list the task definitions for that family and find the most recent revision number. The following lists all of the task definitions in the cluster. At this point we only have one:

$ aws ecs list-task...
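As a hedged sketch of the same flow with boto3 (the family and cluster names are placeholders): list the definitions for the family with the newest revision first, then run that revision on the cluster:

import boto3

ecs = boto3.client('ecs')

# Newest revision first; 'scraper' is a placeholder family name.
arns = ecs.list_task_definitions(familyPrefix='scraper', sort='DESC')['taskDefinitionArns']
latest = arns[0]

# Run the most recent revision on the cluster (cluster name is a placeholder).
ecs.run_task(cluster='scraper-cluster', taskDefinition=latest, count=1)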