In this guide, we’ll walk through the following concepts of API usage with Python:
- What is HTTP?
- What is a REST API?
- How to make a GET request
- How to make a POST request
- How to use an SDK
What Is HTTP?
HTTP (Hypertext Transfer Protocol) is the standard for how most data travels the web. You’ve probably heard that databases make up the backend of most websites—and this is true, but there are nuances in how our client (browser or Python script) actually interacts with the database. HTTP is the communication layer between the client and the backend server.
When using HTTP for scraping and web APIs, these are the methods you’re most likely to use.
- GET: The most commonly used method by far. Whenever you visit a site, your browser performs a GET for the HTML, and then renders the page for you to view.
- POST: This is the second most common method. POST sends a body of data to the server, most often to add something to a database. When you complete forms and surveys or post on social media, you’re performing a POST request.
- PUT: PUT requests are used to update existing items in a database. When you edit a social media post, PUT gets used under the hood.
- DELETE: If you wish to delete a social media post (or anything else from a database), your browser sends a DELETE request to the server to remove it.
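To make the mapping concrete, here’s a quick sketch using Python’s Requests library. `requests.Request(...).prepare()` builds each request without sending anything, so the URL below is just a placeholder:

```python
import requests

# Build (but don't send) one request per HTTP method.
# example.com is a placeholder; no network traffic happens here.
for method in ("GET", "POST", "PUT", "DELETE"):
    prepared = requests.Request(method, "https://example.com/posts/1").prepare()
    print(prepared.method, prepared.url)
```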
HTTP and Its Lack of Return Standards
For all its simplicity, HTTP lacks a universal return standard. Some servers return HTML by default, while others spit back JSON or even legacy data structures like XML and plaintext.
First, let’s make a basic GET request. If you don’t already have Python Requests installed, you can install it via pip.
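The install is a single command:

```shell
pip install requests
```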
Once you’ve got Requests installed, you can run the following code to make a simple GET. Pay attention to the terminal output.
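A minimal version might look like this. It’s wrapped in a function so the request only fires when you call it, and it targets Quotes to Scrape, a sandbox site built for scraping practice:

```python
import requests

def fetch_homepage(url="http://quotes.toscrape.com"):
    """GET the raw HTML for a page and return it as text."""
    response = requests.get(url)
    response.raise_for_status()  # raise an error on a non-2xx status
    return response.text

# Example usage (requires network access):
# print(fetch_homepage())
```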
After running the code, you should notice that we’ve got an HTML page. This is great for viewing in the browser, but in the terminal, it’s pretty ugly. The output below has been trimmed, but you get the idea.
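Trimmed down, the output looks something like this (illustrative, not the exact markup):

```html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Quotes to Scrape</title>
</head>
<body>
    <div class="container">
        ...
    </div>
</body>
</html>
```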
HTML pages are meant to be read and rendered by browsers. They’re not designed for you to read or integrate into your code.
How REST (Representational State Transfer) Fixes This
REST APIs give us a design standard for data pipelines. JSON is by far the most popular return type with REST APIs. It’s flexible and easy to read. This clear, readable syntax also makes it easy to parse from your programming environment.
Take a look below to see what JSON actually looks like. Remember, we use a REST API to get this type of data structure.
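Here’s a small, illustrative example of the shape this kind of API returns: a page number plus a list of quote objects.

```json
{
  "page": 1,
  "quotes": [
    {
      "author": "Albert Einstein",
      "text": "The world as we have created it is a process of our thinking.",
      "tags": ["change", "thoughts"]
    }
  ]
}
```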
REST APIs use endpoints, parameters and HTTP methods to control the return data and its format.
Making Your First API Request
Now that you know what a REST API is supposed to do, let’s try actually using one. Quotes to Scrape also has a REST API. Instead of simply fetching the home page, now we’ll access their API. We’re communicating with the server through endpoints.
Our full endpoint, `/api/quotes`, can be broken into two pieces:

- `/api`: This tells the server that we want structured API data, not HTML pages.
- `/quotes`: We want the API to return data from the `quotes` endpoint.
Making The Request
Go ahead and run the code like you did before.
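The request follows the same pattern as before, but points at the API endpoint and parses the response with `.json()` instead of reading raw text:

```python
import requests

API_ENDPOINT = "http://quotes.toscrape.com/api/quotes"

def fetch_quotes(page=1):
    """GET a page of quotes as parsed JSON rather than raw HTML."""
    response = requests.get(API_ENDPOINT, params={"page": page})
    response.raise_for_status()
    return response.json()

# Example usage (requires network access):
# data = fetch_quotes()
# for quote in data["quotes"]:
#     print(quote["text"])
```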
Our data now comes back clean and structured. It’s easy to parse—and from there, we can do just about anything with it.
Making an Authenticated Request
Now that we’ve seen how to request public data, let’s look at authenticated APIs. In many cases, you’ll need an API key to get your data. Most API servers require an `Authorization` header containing your API key to authenticate your request.
Making a basic GET request is pretty easy. Now, we’ll try making a POST request. POST requests carry a payload of data in the request body. In the code below, we use the Web Unlocker API to parse the page and return markdown.
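Here’s a sketch of that POST. The API key and zone name are placeholders you’d replace with your own; the endpoint, headers, and payload fields follow Bright Data’s Web Unlocker request format:

```python
import requests

API_KEY = "your-bright-data-api-key"  # placeholder -- use your real key

def scrape_as_markdown(target_url, zone="web_unlocker1"):
    """POST a scrape job to Web Unlocker and return the page as markdown."""
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    payload = {
        "url": target_url,         # the page we want to access
        "zone": zone,              # the zone name you gave your instance
        "format": "raw",
        "data_format": "markdown",
    }
    response = requests.post(
        "https://api.brightdata.com/request", headers=headers, json=payload
    )
    response.raise_for_status()
    return response.text

# Example usage (requires a Bright Data account and network access):
# print(scrape_as_markdown("http://quotes.toscrape.com"))
```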
This time, our request is going to `https://api.brightdata.com/request`. Everything is controlled by our `headers` and `payload`.

Here are our headers:

- `"Authorization": f"Bearer {API_KEY}"`: This ties the request to your Bright Data account.
- `"Content-Type": "application/json"`: This tells the server that we’re sending data in JSON format.
Now, take a look at the payload:

- `"url"`: The URL we wish to access with Web Unlocker.
- `"zone"`: The zone name that you gave your instance of Web Unlocker.
- `"format"`: The response format we want (in this case, `raw`).
- `"data_format"`: We use `"markdown"`. This tells Bright Data that we want the page parsed into markdown format. It’s not quite as flexible as JSON, but it can be converted to JSON easily.
With the page converted to markdown, the terminal output is now compact and readable instead of a wall of raw HTML.
Authentication uses a unique identifier—usually an API key. In this case, we gained access to Web Unlocker but the principle is the same—no matter which API service you’re using.
Handling The Response
Each response includes a status code. Status codes are used to relay different messages back to the client. In a perfect world, you’ll always receive a `200` status.
Sadly, the world isn’t perfect. If you receive a non-200 code, this means that something is wrong.
- 400-499: These codes typically indicate an error on the client side. Double-check your API key and your request format.
- 500-599: This range indicates a server error. Your request was fine, but the server couldn’t complete it for one reason or another.
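Those checks can be sketched in a few lines. The retry count and backoff delays here are arbitrary choices, not requirements:

```python
import time

import requests

def get_with_retries(url, max_retries=3):
    """GET a URL: fail fast on 4xx errors, back off and retry on 5xx."""
    for attempt in range(max_retries):
        response = requests.get(url)
        if response.status_code == 200:
            # Success. (A real client might accept other 2xx codes too.)
            return response
        if 400 <= response.status_code < 500:
            # Client-side error: retrying won't help, so fail immediately.
            raise RuntimeError(
                f"Client error {response.status_code}: check your key and request"
            )
        # 500-599: server-side issue -- wait with exponential backoff, then retry.
        time.sleep(2 ** attempt)
    raise RuntimeError(f"Gave up after {max_retries} attempts")
```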
You can learn more about status codes here. If you want to learn how to handle these status codes from Python, take a look at this guide on retry logic.
Skipping The Boilerplate With an SDK
An SDK (software development kit) allows us to connect to a REST API without having to write boilerplate for error handling and retry logic. OpenAI offers a full REST API, plus an official Python SDK on top of it. You can take a look at it here.
To install their SDK and skip the HTTP requests, run the following command.
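As with Requests, the SDK installs via pip:

```shell
pip install openai
```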
Now, we import the OpenAI SDK. We fetch the plain old HTML page like we did initially. If you’re interested in parsing HTML manually, you can learn how to use Requests with BeautifulSoup. Once we’ve retrieved the HTML page, we use the SDK to pass the page into ChatGPT for parsing.
Take a look at the output this time. Zero parsing required: just data inside a JSON block.
SDKs give you the full power of a REST API without the need for manual HTTP management. If you’re interested in learning how to scrape with AI, take a look at our guides for Claude and DeepSeek.
Conclusion
Now that you know how to make basic API requests with Python, you can move on to bigger projects. You can use APIs to interact with various services to retrieve data and you can even utilize an SDK to automatically parse that data. In this tutorial, we used Web Unlocker, but Bright Data offers a variety of other products to help with your data needs.
- Residential Proxies: Route your HTTP traffic through real devices with residential IP addresses.
- Scraper API: Completely automate your scrape and download the results straight to your programming environment.
- Scraping Browser: Bypass CAPTCHAs and control a real headless browser from right inside your Python script.
Sign up for a free trial and get started today!
No credit card required