When you scrape the web, HTML parsing is vital no matter which tools you’re using. Web scraping with Java is no exception to this rule. In Python, we use tools like Requests and BeautifulSoup. With Java, we can send our HTTP requests and parse our HTML using jsoup. We’ll use Books to Scrape for this tutorial.
Getting Started
In this tutorial, we’re going to use Maven for dependency management. If you don’t already have it, you can install it by following the official Apache Maven installation guide.
Once you’ve got Maven installed, you need to create a new Java project. The command below creates a new project called `jsoup-scraper`.
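A minimal version of that command looks like this; the `groupId` of `com.example.scraper` is just a placeholder, so feel free to swap in your own:

```bash
mvn archetype:generate \
  -DgroupId=com.example.scraper \
  -DartifactId=jsoup-scraper \
  -DarchetypeArtifactId=maven-archetype-quickstart \
  -DarchetypeVersion=1.4 \
  -DinteractiveMode=false
```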
Next, you’ll need to add the relevant dependencies. Replace the code in `pom.xml` with the code below. This is similar to dependency management in Rust with Cargo.
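A minimal `pom.xml` along those lines might look like this (the `groupId`/`artifactId` match the project generated above, and you can pin whichever recent jsoup version you prefer):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example.scraper</groupId>
  <artifactId>jsoup-scraper</artifactId>
  <version>1.0-SNAPSHOT</version>

  <properties>
    <maven.compiler.source>17</maven.compiler.source>
    <maven.compiler.target>17</maven.compiler.target>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>

  <dependencies>
    <!-- jsoup handles both the HTTP request and the HTML parsing -->
    <dependency>
      <groupId>org.jsoup</groupId>
      <artifactId>jsoup</artifactId>
      <version>1.17.2</version>
    </dependency>
  </dependencies>
</project>
```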
Go ahead and paste the following code into `App.java`. It’s not much, but this is the basic scraper we’ll build from.
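A bare-bones starting point looks something like this (the package name follows the placeholder `groupId` assumed above):

```java
package com.example.scraper;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

import java.io.IOException;

public class App {
    public static void main(String[] args) throws IOException {
        // Fetch the page and parse it into a Document we can query
        Document doc = Jsoup.connect("https://books.toscrape.com").get();
        // Print the page title
        System.out.println(doc.title());
    }
}
```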
- `Jsoup.connect("https://books.toscrape.com").get()`: fetches the page and returns a `Document` object that we can manipulate.
- `doc.title()`: returns the title of the HTML document, in this case `All products | Books to Scrape - Sandbox`.
Using DOM Methods With Jsoup
jsoup contains a variety of methods for finding elements in the DOM (Document Object Model). We can use any of the following to find page elements easily.
- `getElementById()`: find an element using its `id`.
- `getElementsByClass()`: find all elements with a given CSS class.
- `getElementsByTag()`: find all elements with a given HTML tag.
- `getElementsByAttribute()`: find all elements containing a certain attribute.
getElementById
On our target site, the sidebar contains a `div` with an `id` of `promotions_left`. You can see this in the image below.
The code below prints that element’s HTML, just as you see it in the inspector.
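Assuming the `doc` object from the basic scraper above (and an `org.jsoup.nodes.Element` import), the lookup looks like this:

```java
// Find the sidebar div by its id and print its outer HTML
Element promotions = doc.getElementById("promotions_left");
if (promotions != null) {
    System.out.println(promotions.outerHtml());
}
```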
getElementsByTag
`getElementsByTag()` allows us to find all elements on the page with a certain tag. Let’s look at the books on this page.
Each book is contained in its own `article` tag.
The code below won’t print anything, but it will return a list of books. These books will provide the basis for the rest of our data.
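A sketch of that step, reusing `doc` from above (with `org.jsoup.select.Elements` imported):

```java
// Each book on the page is wrapped in an <article> tag
Elements books = doc.getElementsByTag("article");
```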
getElementsByClass
Let’s look at the price of a book. As you can see highlighted below, its class is `price_color`.
In this snippet, we find all elements of the `price_color` class. We then print the text of the first one using `.first().text()`.
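Something like this, again building on `doc`:

```java
// Grab every element with the "price_color" class and print the first price
Elements prices = doc.getElementsByClass("price_color");
System.out.println(prices.first().text());
```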
getElementsByAttribute
As you might already know, `a` (anchor) elements use an `href` attribute to point at their link target. In the code below, we use `getElementsByAttribute("href")` to find all elements with an `href`, and `.first().attr("href")` to return the first one’s `href`.
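A minimal sketch of that lookup:

```java
// Find all elements that carry an href attribute and print the first link
Elements links = doc.getElementsByAttribute("href");
System.out.println(links.first().attr("href"));
```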
Advanced Techniques
CSS Selectors
When we want to use multiple criteria to find elements, we can pass CSS selectors into the `select()` method. It returns an `Elements` collection of every element matching the selector. Below, we use `li[class='next']` to find all `li` items with the `next` class.
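For example, using the same `doc`:

```java
// Select every <li> whose class attribute is exactly "next"
Elements nextPage = doc.select("li[class='next']");
```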
Handling Pagination
To handle our pagination, we call `getElementsByAttribute("href").attr("href")` on `nextPage.first()`, the first element returned by the selector, to extract its `href`. Interestingly enough, after page 2 the word `catalogue` gets removed from the links, so if it isn’t present in the `href`, we add it back in. We then combine this link with our base URL to build the link to the next page.
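A sketch of that logic (the `baseUrl` variable is an assumption; it just holds `https://books.toscrape.com/`):

```java
// Find the "next" button and build the URL of the following page
Elements nextPage = doc.select("li[class='next']");
if (!nextPage.isEmpty()) {
    String href = nextPage.first().getElementsByAttribute("href").attr("href");
    // After page 2 the links lose the "catalogue/" prefix, so add it back if missing
    if (!href.contains("catalogue")) {
        href = "catalogue/" + href;
    }
    String nextUrl = baseUrl + href;
    System.out.println("Next page: " + nextUrl);
}
```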
Putting Everything Together
Here is our final code. If you wish to scrape more than one page, simply change the `1` in `while (pageCount <= 1)` to your desired target. If you want to scrape 4 pages, use `while (pageCount <= 4)`.
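Putting the pieces above together, `App.java` ends up looking something like this (the `println` format and the exact fields pulled from each book are illustrative choices, not the only way to do it):

```java
package com.example.scraper;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

import java.io.IOException;

public class App {
    public static void main(String[] args) throws IOException {
        String baseUrl = "https://books.toscrape.com/";
        String url = baseUrl;
        int pageCount = 1;

        // Change the 1 below to scrape more pages, e.g. while (pageCount <= 4)
        while (pageCount <= 1) {
            System.out.println("---------- Page " + pageCount + " ----------");
            Document doc = Jsoup.connect(url).get();

            // Each book is wrapped in an <article> tag
            Elements books = doc.getElementsByTag("article");
            for (Element book : books) {
                String title = book.getElementsByTag("h3").first()
                        .getElementsByTag("a").attr("title");
                String price = book.getElementsByClass("price_color").first().text();
                String link = book.getElementsByTag("a").first().attr("href");
                System.out.println("Title: " + title + " | Price: " + price + " | Link: " + link);
            }

            // Find the "next" button and build the URL of the following page
            Elements nextPage = doc.select("li[class='next']");
            if (nextPage.isEmpty()) {
                break;
            }
            String href = nextPage.first().getElementsByAttribute("href").attr("href");
            // After page 2 the links lose the "catalogue/" prefix, so add it back if missing
            if (!href.contains("catalogue")) {
                href = "catalogue/" + href;
            }
            url = baseUrl + href;
            pageCount++;
        }
    }
}
```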
Before you run the code, remember to compile it.
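With Maven, one way to do that from the project root is:

```bash
mvn clean package
```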
Then run it with the following command.
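One option is the Maven exec plugin (the main class here matches the package name assumed earlier):

```bash
mvn exec:java -Dexec.mainClass="com.example.scraper.App"
```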
When it runs, the title, price, and link for each book on the first page are printed to your terminal.
Conclusion
Now that you’ve learned how to extract HTML data using jsoup, you can start building more advanced web scrapers. Whether you’re scraping product listings, news articles, or research data, handling dynamic content and avoiding blocks are key challenges.
To scale your scraping efforts efficiently, consider using Bright Data’s tools:
- Residential Proxies – Avoid IP bans and access geo-restricted content.
- Scraping Browser – Render JavaScript-heavy sites effortlessly.
- Ready-to-Use Datasets – Skip scraping altogether and get structured data instantly.
By combining jsoup with the right infrastructure, you can extract data at scale while minimizing detection risks. Ready to take your web scraping to the next level? Sign up now and start your free trial.
No credit card required