
Beautiful Soup introduction: Web scraping tools in practice

Dominik Vach
CTO, Co-founder
Tutorial
June 2, 2023

Imagine you're an adventurer, a data explorer in the vast, infinite expanse of the digital universe known as the internet. But instead of magical creatures and ancient artifacts, this universe is filled with valuable data that can revolutionize how you approach your coding projects. The only problem? There's just too much of it to sift through. Like an explorer lost in an alien jungle, it's easy to get overwhelmed. But fear not, fellow adventurer, there's a tool to cut through this tangled web - say hello to the magic sword of data web scraping: Beautiful Soup!

In this spellbinding journey, we're going to show you the magic of Beautiful Soup. Let's prepare you for this epic quest. We'll be diving deep into the mystical scrolls that reveal the basics, the strengths, and even the weaknesses of Beautiful Soup. Buckle up, because we're not just going to discuss the theory; we're also going to show you how to wield it. We'll be hacking and slashing through the digital jungle of https://www.forloop.ai/blog.

The forloop.ai/blog website that we will scrape.

By the end of this adventure, you'll be a seasoned Beautiful Soup warrior, ready to uncover the hidden treasures of the web!

So, grab your coding gear, adventurer, and let's dive in!

This is the first article in the series Web scraping tools in practice.

Introduction

Let’s start with the basics of what exactly Beautiful Soup is.

What exactly is Beautiful Soup?

Beautiful Soup is a Python library that parses HTML and XML documents and provides methods to search, navigate, and modify the parse tree. It was designed to be easy and human-friendly, making it an excellent choice for beginners new to web scraping. With Beautiful Soup, you can quickly and easily extract data from websites to create reports, visualizations, or other types of analysis.
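To give you a first taste, here is a minimal sketch of what that looks like in code (the HTML snippet below is made up purely for illustration):

from bs4 import BeautifulSoup

# A tiny, made-up HTML document to demonstrate parsing and navigation
html = "<html><body><h1>Hello, Soup!</h1><a href='/blog'>Blog</a></body></html>"
soup = BeautifulSoup(html, 'html.parser')

print(soup.h1.text)    # Hello, Soup!
print(soup.a['href'])  # /blog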

What are the main benefits of Beautiful Soup?

Beautiful Soup has many benefits, including its ease of use, its speed, and the way it fits naturally into the Python ecosystem. It can handle a wide range of document types and can extract data from even the most complex websites.

Additionally, the tool is open-source and well-documented, so many users and developers can provide support and resources. Its robust and intuitive API makes it easy to extract information from a webpage and store it in a structured format for further analysis.

Many web scraping tools struggle with pages that contain broken or inconsistent HTML, but Beautiful Soup can parse even the most challenging pages and extract the information you need. Keep in mind, however, that Beautiful Soup only parses documents you have already downloaded: it does not fetch pages itself and has no built-in way around CAPTCHAs or other anti-bot protections, so those have to be handled by the tool you use to retrieve the HTML.
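Here is a small sketch of that tolerance for messy markup (the broken snippet is invented for the example):

from bs4 import BeautifulSoup

# Deliberately broken HTML: the <b> tag is never closed
messy = "<html><body><p>Some <b>bold text with a missing closing tag</p></body></html>"
soup = BeautifulSoup(messy, 'html.parser')

# Beautiful Soup still builds a usable parse tree from it
print(soup.p.text)  # Some bold text with a missing closing tag
print(soup.b.text)  # bold text with a missing closing tag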

The Beautiful Soup documentation, source: crummy.com/software/BeautifulSoup/bs4/doc/

What are the limitations of Beautiful Soup?

However, Beautiful Soup also has some limitations. It cannot execute JavaScript, so it only performs static scraping and is unsuitable for extracting content that is rendered dynamically in the browser or needed in real time. Additionally, it may not be the best choice for large-scale data scraping, as it is not designed for high-performance crawling.

Beautiful Soup can be used for a variety of purposes, including:

  • Data collection for market research
  • Extracting information for data analysis and visualization
  • Scraping job postings for competitive analysis
  • Building custom web crawlers for specific purposes

Installing Beautiful Soup

Now that you understand what Beautiful Soup is, let's move on to the installation process. Installing Beautiful Soup is relatively straightforward and can be done in just a few steps.

To install Beautiful Soup, you must have Python installed on your computer. You can download and install it from the official Python website if you don't have Python installed.

Once you have Python installed, open up the command prompt or terminal on your computer. Then, type the following command:

pip install beautifulsoup4

This will install the latest version of Beautiful Soup on your computer. Optionally, you can also install the lxml and html5lib libraries, which give Beautiful Soup faster and more lenient parsers to work with. You can do this by running the following commands:

pip install lxml
pip install html5lib
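These are optional parser backends: Beautiful Soup itself builds the parse tree and lets you choose which parser does the low-level work. If they are installed, you can select one by name when creating the soup object, for example:

from bs4 import BeautifulSoup

html = '<p>Hello, parsers</p>'

soup_builtin = BeautifulSoup(html, 'html.parser')  # built into Python, no extra install
soup_lxml = BeautifulSoup(html, 'lxml')            # usually the fastest, requires lxml
soup_html5 = BeautifulSoup(html, 'html5lib')       # most lenient, parses like a browser, but slower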

That's it! You now have Beautiful Soup installed and ready to use.

Go to our Colab to see the complete code for this tutorial.

Web Scraping with Beautiful Soup

In this section, we'll go through a step-by-step tutorial on how to use Beautiful Soup to web scrape data from the https://www.forloop.ai/blog website.

Step 1: Import libraries

First, let's start by importing the necessary libraries. We'll use the requests library to send a GET request to the website and the bs4 (Beautiful Soup) library to parse the HTML content. We also import pandas, which comes in handy at the end for organizing the scraped results.

import requests
from bs4 import BeautifulSoup
import pandas as pd

Step 2: Set up the scraper

Next, let's send a GET request to the website to retrieve the HTML content. We'll use the requests.get() function to do this.

# Make a request to the website
url = 'https://www.forloop.ai/blog'
response = requests.get(url)
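Before parsing, it is worth a quick sanity check that the request actually succeeded. This check is not part of the original script, but it saves confusing errors later:

# Optional: stop early if the page could not be fetched
response.raise_for_status()
print(response.status_code)  # 200 means the page was downloaded successfully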

Now that we have the HTML content, let's parse it using Beautiful Soup. We'll use the BeautifulSoup() function for this.

# Parse the HTML content of the website
soup = BeautifulSoup(response.content, 'html.parser')

The BeautifulSoup() function takes two arguments: the first argument is the HTML content that we want to parse, and the second argument is the parser that we want to use. In this case, we're using the html.parser parser.

Now that we have the HTML content parsed, let's start web scraping. Let's say we want to extract the title and the link of all the blog posts on the https://www.forloop.ai/blog website.

To do this, we need to find the HTML element that contains the information we want to extract. We can use the soup.find_all() function to do this.

# Find all the article elements on the page
posts = soup.find_all('div', {'class': 'article-item'})

The soup.find_all() function takes two arguments: the first argument is the HTML tag that we want to find, and the second argument is a dictionary of attributes and values that we want to match. In this case, we're finding all div elements with a class attribute of article-item.
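If you prefer CSS selectors, the same elements can also be matched with soup.select(); this is just an alternative to find_all(), not something the rest of the tutorial depends on:

# Equivalent query using a CSS selector
posts_css = soup.select('div.article-item')
print(f'Found {len(posts_css)} blog posts')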

Step 3: Prepare for a run

Once we have the HTML elements that contain the information we want, we can extract it using the .text attribute or the ['attribute'] syntax. The extraction loop therefore looks as follows:

# Create an empty list to store the data
data = []

# Iterate through each article element
for post in posts:
    title = post.find('h4').text
    date = post.find(class_='blog-post-date').text
    link = post.find('a')['href']

    # Append the data as a dictionary to the list
    data.append({'title': title, 'date': date, 'link': link})

Step 4: Run and test

Now, you can print all the scraped data.

# Present all data
for n in range(0, len(data)):
print(f"Title: {data[n]['title']}")
print(f"Release date: {data[n]['date']}")
print(f"Link: {url}{data[n]['link']}\n---")

Done!

The final results should look as follows.


In case of any problems, you can check out the Colab notebook with the complete code.

Conclusion

One of the great things about Beautiful Soup is its ability to handle many data formats, from simple HTML pages to complex, multi-level XML structures. So, if you're looking for a tool that can help you extract and analyze data from the web, Beautiful Soup is definitely worth considering. Whether you're a data scientist or a developer, you'll find that Beautiful Soup is a powerful and versatile tool that can help you extract valuable insights from the vast amount of data available online.

If you're interested in learning more about web scraping and data extraction in practice, join our Forloop Slack Community, where we cover AI and data-related topics. You can also try the Forloop Platform for web scraping and let us know what you think.