Web Scraping with Python — Part Two — Library overview of requests, urllib2, BeautifulSoup, lxml, Scrapy, and more!

Welcome to part 2 of the Big-Ish Data general web scraping writeups! I wrote the first one a little while ago, got some good feedback, and figured I should take some time to go through some of the many Python libraries you can use for scraping, talk about each one a little, and then give suggestions on how to use them.

If you want to run the code I used rather than just copy and paste from the sections below, I pushed it to GitHub in my bigishdata repo. In that folder you’ll find a requirements.txt file with all the libraries you need to pip install, and I highly suggest using a virtualenv to install them. Gotta keep it all contained, and it’s easier to deploy if that’s the type of project you’re working on. On that front, let me know if you run this and have any issues!

Overall, the goal of the scraping project in this post is to grab all the information – text, headings, code segments, and image urls – from the first post on this subject. We want to get the headings (both h1 and h3), paragraphs, and code sections and print them into local files, one for each tag. This task is very simple overall, which means it doesn’t require the super advanced parts of the libraries. Some scraping tasks require authentication, remote JSON data loading, or scheduling of the scraping jobs; I might write an article about projects that need that type of knowledge, but it doesn’t apply here. The goal here is to show the basics of all the libraries as an introduction.

In this article, there’ll be three sections. First, I’ll talk about libraries that execute HTTP requests to obtain the HTML. Second, I’ll talk about libraries that are great for parsing that HTML so you can scrape the data out of it. Third, I’ll write about libraries that perform both actions at once. And if you have suggestions of more libraries to show, let me know on Twitter and I’ll throw them in here.

Finally, a couple notes:

Note 1: There are many different ways of web scraping. People like using different methods, different libraries, different code structures, etc. I understand that, and I recognize that there are other useful methods out there – this is just what I’ve found to be successful over time.

Note 2: I’m not here to tell you that it’s legal to scrape every website. There are laws about what data is copyrighted, what data is owned by the company, and whether or not public data is actually legal to scrape. You might have to check things like robots.txt, the site’s Terms of Service, and maybe a frequently asked questions page.

Note 3: If you’re looking for data, or any other data engineering task, get in contact and we’ll see what I can do!

Ok! That all being said, it’s time to get going!

Requesting the Page

This first section shows a few libraries that can hit web servers and ask nicely for the HTML.

For all the examples here, I request the page and then save the HTML in a local file. The other note on this section: if you’re going to use one of these libraries, getting the HTML is only part one of the scraping! I talked about that a lot in the first post of this series – you need to make sure you split up getting the HTML from scraping the data out of that HTML.
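One quick note on those examples: they all call a write_html helper that dumps the HTML into a local file named for the library. The real implementation lives in the bigishdata repo, but a minimal sketch of it, assuming it just writes one file per library, could look like this:

# helpers.py -- a sketch only; the actual version is in the bigishdata repo
def write_html(library_name, html):
    # one local file per library, e.g. requests.html
    with open('%s.html' % library_name, 'w') as f:
        f.write(html)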

The first library in this section is the famous, popular, and very simple to use library, requests.

requests

Let’s see it in action.

import requests
import helpers # helper functions from the bigishdata repo

url = "https://bigishdata.com/2017/05/11/general-tips-for-web-scraping-with-python/"
params = {"limit": 48, 'p': 2} # query string (?) values; pass them along with params=params if the page needs them
headers = {'user-agent': 'Jack Schultz, bigishdata.com, contact@bigishdata.com'}
page = requests.get(url, headers=headers)
helpers.write_html('requests', page.text.encode('UTF-8'))

Like I said, incredibly simple.

Requests also supports more advanced features like SSL verification, credentials, HTTPS, cookies, and more. Like I said, I’m not going to go deep into those features (but maybe later) – time for simple examples for an actual project.
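To give a feel for those features, though, here’s a quick sketch – these keyword arguments are all real requests parameters, but the credential and cookie values are made up for illustration:

page = requests.get(
    url,
    headers=headers,
    auth=('username', 'password'), # HTTP basic auth; made-up credentials
    cookies={'session_id': 'abc123'}, # cookies sent with the request; made-up value
    verify=True, # verify the server's SSL certificate (the default)
    timeout=10 # seconds to wait before giving up instead of hanging forever
)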

Overall, even before talking about the other libraries below, requests is the way to go.

urllib / urllib2

Ok, time to ignore that last sentence in the requests section and move on to another simple library, urllib2. If you’re using Python 2.X, it’s very simple to request a single page. And by simple, I mean a couple lines.
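Here’s a sketch of that couple of lines, assuming the same url and helpers module from the requests example above:

import urllib2 # Python 2 only; Python 3 split this into urllib.request
import helpers # helper functions from the bigishdata repo

url = "https://bigishdata.com/2017/05/11/general-tips-for-web-scraping-with-python/"
request = urllib2.Request(url, headers={'User-Agent': 'Jack Schultz, bigishdata.com, contact@bigishdata.com'})
response = urllib2.urlopen(request)
helpers.write_html('urllib2', response.read())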
