2014-12-31



In this article we’re going to build a scraper for an actual freelance gig: the client wants a Python program that scrapes Stack Overflow for new questions (question title and URL) and stores the scraped data in MongoDB. It’s worth noting that Stack Overflow has an API, which can be used to access the exact same data. However, the client wanted a scraper, so a scraper is what he got.

As always, be sure to review the site’s terms of use/service and respect the robots.txt file before starting any scraping job. Make sure to adhere to ethical scraping practices by not flooding the site with numerous requests over a short span of time. Treat any site you scrape as if it were your own.

Installation

We need the Scrapy library (v0.24.4) along with PyMongo (v2.7.2) for storing the data in MongoDB. You need to install MongoDB as well (not covered).

Scrapy

If you’re running OS X or a flavor of Linux, install Scrapy with pip (with your virtualenv activated):
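Something like this should do it, pinned to the version used in this post:

```
$ pip install Scrapy==0.24.4
```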

If you are on a Windows machine, you will need to manually install a number of dependencies. Please refer to the official documentation for detailed instructions, as well as this YouTube video that I created.

Once Scrapy is set up, verify your installation by running this command in the Python shell:
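For example, fire up the Python interpreter and try the import; if it comes back silently, Scrapy is installed:

```
>>> import scrapy
>>>
```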

If you don’t get an error then you are good to go!

PyMongo

Next, install PyMongo with pip:
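Again, pinned to the version used for this post:

```
$ pip install pymongo==2.7.2
```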

Now we can start building the crawler.

Scrapy Project

Let’s start a new Scrapy project:
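We’ll call the project “stack”, since that’s the directory name referenced throughout the rest of the post:

```
$ scrapy startproject stack
```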

This creates a number of files and folders with basic boilerplate so you can get started quickly:
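The generated layout should look roughly like this:

```
stack/
    scrapy.cfg
    stack/
        __init__.py
        items.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
```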

Specify Data

The items.py file is used to define storage “containers” for the data that we plan to scrape.

The StackItem() class inherits from Item (docs), which provides the dictionary-like behavior and field handling that Scrapy has already built for us:
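The generated items.py boilerplate looks roughly like this (the exact comments vary a bit between Scrapy versions):

```python
from scrapy.item import Item, Field


class StackItem(Item):
    # define the fields for your item here like:
    # name = Field()
    pass
```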

Let’s add some items that we actually want to collect. For each question the client needs the title and URL. So, update items.py like so:
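A minimal version of the updated file, with one Field per piece of data the client asked for:

```python
from scrapy.item import Item, Field


class StackItem(Item):
    title = Field()
    url = Field()
```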

Create the Spider

Create a file called stack_spider.py in the “spiders” directory. This is where the magic happens – i.e., where we’ll tell Scrapy how to find the exact data we’re looking for. As you can imagine, this is specific to each individual web page.

Start by defining a class that inherits from Scrapy’s Spider and then adding attributes as needed:
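Here’s a rough sketch of that first version; the spider name “stack” and the exact start URL (the 50 newest questions) are just one reasonable setup based on what we scrape later in the post:

```python
from scrapy import Spider


class StackSpider(Spider):
    name = "stack"
    allowed_domains = ["stackoverflow.com"]
    start_urls = [
        "http://stackoverflow.com/questions?pagesize=50&sort=newest",
    ]
```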

The first few variables are self-explanatory (docs):

name defines the name of the Spider.

allowed_domains contains the base URLs of the domains the spider is allowed to crawl.

start_urls is a list of URLs for the spider to start crawling from. All subsequent URLs are derived from the data that the spider downloads from the URLs in start_urls.

XPath Selectors

Next, Scrapy uses XPath selectors to extract data from a website. In other words, we can select certain parts of the HTML data based on a given XPath. As stated in Scrapy’s documentation, “XPath is a language for selecting nodes in XML documents, which can also be used with HTML.”

You can easily find a specific XPath using Chrome’s Developer Tools. Simply inspect a specific HTML element, copy the XPath, and then tweak it (as needed):



Developer Tools also gives you the ability to test XPath selectors in the JavaScript Console by using $x – i.e., $x("//img"):



Again, we basically tell Scrapy where to start looking for information based on a defined XPath. Let’s navigate to the Stack Overflow site in Chrome and find the XPath selectors.

Right click on the first question and select “Inspect Element”:

Now grab the XPath for the <div class="summary">, //*[@id="question-summary-27624141"]/div[2], and then test it out in the JavaScript Console:

As you can tell, it just selects that one question. So we need to alter the XPath to grab all questions. Any ideas? It’s simple: //div[@class="summary"]. What does this mean? Essentially, this XPath states: Grab all <div> elements that have a class of summary. Test this XPath out in the JavaScript Console.

Notice how we are not using the actual XPath output from Chrome Developer Tools. In most cases, the output is just a helpful aside, which generally points you in the right direction for finding the working XPath.

Now let’s update the stack_spider.py script:
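Roughly, the spider now gets a parse() method that selects all the question summaries via that XPath (using Scrapy’s Selector); we’ll extract the actual fields in the next step:

```python
from scrapy import Spider
from scrapy.selector import Selector


class StackSpider(Spider):
    name = "stack"
    allowed_domains = ["stackoverflow.com"]
    start_urls = [
        "http://stackoverflow.com/questions?pagesize=50&sort=newest",
    ]

    def parse(self, response):
        # grab every question summary block on the page
        questions = Selector(response).xpath('//div[@class="summary"]')
```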

Extract the Data

We still need to parse and scrape the data we want, which falls within <div class="summary">. Again, update stack_spider.py like so:
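A sketch of the full spider, looping over each summary <div> and pulling out the title and URL with relative XPaths based on the question-hyperlink anchors we tested above:

```python
from scrapy import Spider
from scrapy.selector import Selector

from stack.items import StackItem


class StackSpider(Spider):
    name = "stack"
    allowed_domains = ["stackoverflow.com"]
    start_urls = [
        "http://stackoverflow.com/questions?pagesize=50&sort=newest",
    ]

    def parse(self, response):
        questions = Selector(response).xpath('//div[@class="summary"]')

        for question in questions:
            item = StackItem()
            # relative XPaths, scoped to each summary <div>
            item['title'] = question.xpath(
                './/a[@class="question-hyperlink"]/text()').extract()[0]
            item['url'] = question.xpath(
                './/a[@class="question-hyperlink"]/@href').extract()[0]
            yield item
```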


We are iterating through the questions and assigning the title and url values from the scraped data. Be sure to test out the XPath selectors in the JavaScript Console within Chrome Developer Tools – e.g., $x('//a[@class="question-hyperlink"]/text()') and $x('//a[@class="question-hyperlink"]/@href').

Test

Ready for the first test? Simply run the following command within the “stack” directory:
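Assuming the spider is named “stack”, as in the sketch above:

```
$ scrapy crawl stack
```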

Along with the Scrapy logging output, you should see 50 question titles and URLs. You can also export the output to a JSON file with this little command:
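Using Scrapy’s built-in feed exports (the -t flag was the 0.24-era way to pick the format):

```
$ scrapy crawl stack -o items.json -t json
```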

We’ve now implemented our Spider for the data we’re after. Next we need to store the scraped data in MongoDB.

Store the Data in MongoDB

Each time an item is returned, we want to validate the data and then add it to a Mongo collection.

The initial step is to create the database that we plan to use to save all of our crawled data. Open settings.py, specify the pipeline, and add the database settings:
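Something along these lines; the MONGODB_* setting names, the “stackoverflow” database, and the “questions” collection are just our own conventions, and MongoDBPipeline is the class we’ll write next:

```python
ITEM_PIPELINES = {
    'stack.pipelines.MongoDBPipeline': 300,
}

# our own custom settings, read by the pipeline below
MONGODB_SERVER = "localhost"
MONGODB_PORT = 27017
MONGODB_DB = "stackoverflow"
MONGODB_COLLECTION = "questions"
```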

Pipeline Management

We’ve set up our spider to crawl and parse the HTML, and we’ve configured our database settings. Now we have to connect the two together through a pipeline in pipelines.py.

Connect to Database

First, let’s define a method to actually connect to the database:
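A sketch of the constructor, reading the custom settings from above (scrapy.conf.settings was the usual, if since-deprecated, way to get at them back then) and grabbing a handle to the collection with PyMongo:

```python
import pymongo

from scrapy.conf import settings


class MongoDBPipeline(object):

    def __init__(self):
        # connect to MongoDB using the values we added to settings.py
        connection = pymongo.MongoClient(
            settings['MONGODB_SERVER'],
            settings['MONGODB_PORT']
        )
        db = connection[settings['MONGODB_DB']]
        self.collection = db[settings['MONGODB_COLLECTION']]
```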

Here, we create a class, MongoDBPipeline(), with a constructor that initializes the class by reading the Mongo settings and then connecting to the database.

Process the Data

Next, we need to define a method to process the parsed data:
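Here’s the complete pipelines.py sketch, repeating the constructor from above and adding process_item(), which drops incomplete items and inserts the rest as plain dicts (the log message is optional):

```python
import pymongo

from scrapy import log
from scrapy.conf import settings
from scrapy.exceptions import DropItem


class MongoDBPipeline(object):

    def __init__(self):
        # connect to MongoDB using the values from settings.py
        connection = pymongo.MongoClient(
            settings['MONGODB_SERVER'],
            settings['MONGODB_PORT']
        )
        db = connection[settings['MONGODB_DB']]
        self.collection = db[settings['MONGODB_COLLECTION']]

    def process_item(self, item, spider):
        # drop any item that is missing a title or a url
        for field in ('title', 'url'):
            if not item.get(field):
                raise DropItem("Missing {0}!".format(field))
        # store the item as a plain dict
        self.collection.insert(dict(item))
        log.msg("Question added to MongoDB database!",
                level=log.DEBUG, spider=spider)
        return item
```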

Here we validate the scraped item, unpack it into a plain dict, and save it to the database. Now we can test again!

Test

Again, run the following command within the “stack” directory:
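Same crawl command as before:

```
$ scrapy crawl stack
```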

Hooray! We have successfully stored our crawled data in the database.

Conclusion

This is a pretty simple example of using Scrapy to crawl and scrape a web page. The actual freelance project required the script to follow the pagination links and scrape each page using the CrawlSpider (docs), which is super easy to implement. Try implementing this on your own, and leave a comment below with a link to the GitHub repository for a quick code review. Need help? Start with this script, which is nearly complete. Cheers!

You can download the entire source code from the GitHub repository. Comment below with questions. Thanks for reading!

Happy New Year!

:)

Looking for more web scraping? Be sure to check out the Real Python courses. Looking to hire a professional web scraper? Check out GoScrape.
