cURL: What It Is, And How You Can Use It For Web Scraping

cURL is a versatile command used by programmers for data collection and data transfers. But how can you leverage cURL for web scraping? This article will help you get started.

In this blog post, you will learn what cURL is, how to install and use it, and how you can leverage it with proxies and custom User-Agents for web scraping.

What Is cURL?

cURL is a command-line tool that you can use to transfer data via network protocols. The name cURL stands for ‘Client URL’, and is also written as ‘curl’. This popular command uses URL syntax to transfer data to and from servers. Curl is powered by ‘libcurl’, a free and easy-to-use client-side URL transfer library.

Why is using curl advantageous?

The versatility of this command means you can use curl for a variety of use cases, including:

  • User authentication
  • HTTP posts
  • SSL connections
  • Proxy support
  • FTP uploads

The simplest use case for curl is downloading or uploading an entire web page using one of the supported protocols.
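
For example, a minimal download and a minimal FTP upload (the host names and file names below are placeholders) might look like this:

curl https://example.com/ > index.html
curl -T localfile.txt ftp://ftp.example.com/uploads/

The first command writes the page's HTML to a local file, and the second uses the -T option to upload a local file over FTP.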

Curl protocols

While curl supports a long list of protocols, it will use HTTP by default if you don’t provide a specific one. Here is the list of supported protocols:

DICT, FILE, FTP, FTPS, GOPHER, HTTP, HTTPS, IMAP, IMAPS, LDAP, POP3, RTMP, RTSP, SCP, SFTP, SMB, SMBS, TELNET, TFTP

Installing curl

The curl command comes pre-installed on most Linux distributions.

How do you check if you already have curl installed?

1. Open your Linux console

2. Type ‘curl’, and press ‘enter’.

3. If you already have curl installed, you will see the following message:

curl: try 'curl --help' for more information

4. If you don’t have curl installed already, you will see the following message: ‘command not found’. You can then install it through your distribution’s package manager (examples below).
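
For example, on Debian- or Ubuntu-based systems curl is typically installed with apt, and on Fedora- or RHEL-based systems with dnf:

sudo apt install curl
sudo dnf install curl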

How to use cURL

Curl’s syntax is pretty simple:

curl [options] [url]

For example, if you want to download the webpage webpage.com, just run:

curl www.webpage.com

The command will then give you the source code of the page in your terminal window. Keep in mind that if you don’t specify a protocol, curl will default to HTTP. Below you can find an example of how to define specific protocols:

curl ftp://webpage.com

If you leave the protocol and the :// out entirely, curl will guess which protocol you want to use based on the host name.
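
For example, curl assumes FTP for host names that start with ‘ftp.’ and falls back to HTTP for everything else:

curl ftp.example.com
curl example.com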

We have only touched on basic use of the command so far; you can find the full list of options on the curl documentation site. An option tells curl what action to take on the URL you listed, while the URL tells cURL where to perform that action, and you can list one or several URLs in a single command.
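
As a quick illustration, two commonly used options are -I, which fetches only the response headers, and -v, which prints verbose details about the request and response (example.com is simply a placeholder):

curl -I https://example.com/
curl -v https://example.com/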

To download multiple URLs, prefix each URL with a -O (a capital letter O) followed by a space. You can do this on a single line or write a separate line for each URL. You can also download a range of pages from the same site by listing them inside curly braces. For example:

 curl -O "http://example.com/page{1,4,6}.html"
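
If you prefer to spell each URL out instead of using braces, the same idea looks like this (the page names are placeholders):

curl -O http://example.com/page1.html -O http://example.com/page2.html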

Saving the download

You can save the content of the URL to a file with curl using two different methods:

1. -o method: Allows you to add a filename where the URL will be saved. This option has the following structure:

curl -o filename.html http://example.com/file.html

2. -O method: Here you don’t need to add a filename, since this option allows you to save the file under the URL name. To use this option, you just need to prefix the URL with a -O.
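
For example, the following command saves the download as file.html, taking the name from the end of the URL:

curl -O http://example.com/file.html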

Resuming the download

It may happen that your download stops in the middle. In that case, rerun the command with the -C - option added right after curl; the trailing dash tells curl to work out on its own where to resume from:

curl -C - -O http://website.com/file.html

Why is curl so popular?

Curl is really the ‘Swiss Army knife’ of commands, built for complex operations. However, there are alternatives, such as ‘wget’ or ‘Kurly’, that are better suited to simpler tasks.

Curl is a favorite among developers because it is available for almost every platform. Sometimes it is even installed by default. This means, whatever programs/jobs you are running, curl commands should work.

Chances are that if your OS is less than a decade old, you already have curl installed, and you can always read its documentation in a browser on the curl documentation site. If you are running a recent version of Windows, you probably have curl installed as well. If you don’t, check out this post on Stack Overflow to learn how to install it.

Using cURL with proxies

Some people may prefer using cURL in conjunction with a proxy. The benefits here include:

  1. Increasing your ability to successfully manage data requests from different geolocations.
  2. Significantly increasing the number of concurrent data jobs you can run.

To accomplish this, you can make use of the ‘-x’ (or ‘--proxy’) option already built into cURL. Here is an example of a command line you can use to route a cURL request through a proxy:

$ curl -x 198.51.100.2:6666 http://linux.com/

In the above snippet, ‘6666’ is a placeholder for the proxy’s port number, while ‘198.51.100.2’ stands in for its IP address.


Good to know: cURL is compatible with most of the common proxy types currently in use, including HTTP, HTTPS, and SOCKS.
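
If your proxy requires credentials, or if it is a SOCKS proxy, the syntax stays almost the same; the host, port, and credentials below are placeholders:

curl --proxy http://user:password@proxy.example.com:8080 http://linux.com/
curl -x socks5://127.0.0.1:1080 http://linux.com/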

How to change the User-Agent

The User-Agent is a string that lets target sites identify the device and software requesting information. A target site may require requesters to meet certain criteria before returning the desired data, such as using a particular device type, operating system, or browser. In this scenario, entities collecting data will want to emulate their target site’s ideal ‘candidate’.

For argument’s sake, let’s say that the site you are targeting ‘prefers’ requesting parties to use Chrome as a browser. In order to obtain the desired data set using cURL, one will need to emulate this ‘browser trait’ as follows: 

curl -A "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.5060.71 Safari/537.36" https://getfedora.org/
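
One way to confirm which User-Agent is actually being sent is to request a service that echoes it back, such as httpbin.org (used here purely as an illustration):

curl -A "my-custom-agent/1.0" https://httpbin.org/user-agent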

Web Scraping with cURL

Pro tip: Be sure to abide by a website’s rules, and in general do not try to access password-protected content, which is illegal in most cases or at the very least frowned upon.

You can use curl to automate the repetitive parts of web scraping, helping you avoid tedious tasks. To do that, you can call curl from a scripting language such as PHP. Here’s an example we found on GitHub:


<?php

/**
 * @param string $url - the URL you wish to fetch.
 * @return string - the raw HTML response.
 */
function web_scrape($url) {
    $ch = curl_init($url);
    // Return the response as a string instead of printing it.
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
    $response = curl_exec($ch);
    curl_close($ch);

    return $response;
}

/**
 * @param string $url - the URL you wish to fetch.
 * @return string - the raw HTTP response headers.
 */
function fetch_headers($url) {
    $ch = curl_init($url);
    // Include the response headers in the output and skip the body.
    curl_setopt($ch, CURLOPT_HEADER, 1);
    curl_setopt($ch, CURLOPT_NOBODY, 1);
    // Return the result as a string instead of printing it.
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
    $response = curl_exec($ch);
    curl_close($ch);

    return $response;
}

// Example usage:
// echo fetch_headers('https://www.example.com/');
// echo web_scrape('https://www.example.com/');

?>

When you use curl to scrape a webpage with PHP, there are three functions you should call:

  • curl_init($url): initializes the session
  • curl_exec(): executes the request
  • curl_close(): closes the session

Other options you should set include:

  • CURLOPT_URL: sets the URL you want to scrape
  • CURLOPT_RETURNTRANSFER: tells curl to return the scraped page as a variable rather than printing it, so you can work with exactly what you wanted to extract from the page
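
Building on these options, you could also apply the proxy and User-Agent techniques from the earlier sections in PHP by setting CURLOPT_PROXY and CURLOPT_USERAGENT on the same handle. The snippet below is only a sketch; the proxy address and User-Agent string are placeholders:

<?php
// A minimal sketch: fetch a page through a proxy while sending a custom User-Agent.
$ch = curl_init('https://www.example.com/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);                          // return the body as a string
curl_setopt($ch, CURLOPT_PROXY, 'http://proxy.example.com:8080');        // placeholder proxy address
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (X11; Linux x86_64)');  // placeholder User-Agent
$response = curl_exec($ch);
curl_close($ch);
echo $response;
?>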

The bottom line

While cURL is a powerful web scraping tool, it requires companies to spend valuable developer time on both data collection and data cleaning. This is where Bright Data comes in. Bright Data, the leading web data platform, offers a wide range of products to assist with web scraping needs, including the Web Scraper IDE, built with developers in mind; an automated browser with built-in website unblocking automation; a dataset marketplace for those who would rather receive accurate data than scrape it themselves; and over 72 million reliable proxies.