How to Rotate Proxies in Python

Master proxy rotation in Python to overcome IP bans and streamline your web scraping process.

In this guide on how to rotate proxies in Python, you will learn:

  • What proxies are and why use them
  • What proxy rotation is and why you might need it
  • How to rotate proxies in Python
  • Common limitations when you rotate proxies in Python

Let’s dive in!

What Is a Proxy?

A proxy is a server that acts as an intermediary between a user and a network resource over the Internet. So, you can think of a proxy as a middleman that forwards requests and responses between parties.

Why Use Proxies in Python?

When you send a request to a website using a proxy, the request first goes through the proxy server. Then, the proxy forwards your request to the website, receives the response, and sends it back to you. This process hides your IP address from the destination site, making it appear as if the request is coming from the proxy server instead of your device.

The typical reason to use a proxy is web request automation or web scraping. In this scenario, Python is one of the best languages for web scraping, thanks to its extensive libraries and large, active community.
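
To make this concrete, below is a minimal sketch of sending a single request through a proxy with the Requests library. The proxy address is a placeholder, not a real server:

import requests

# Placeholder proxy: replace with a real proxy server address
proxy_url = "http://PROXY_HOST:PORT"

# Route both HTTP and HTTPS traffic through the same proxy
proxies = {
    "http": proxy_url,
    "https": proxy_url,
}

# The target site sees the proxy's IP instead of yours
response = requests.get("https://httpbin.io/ip", proxies=proxies)
print(response.text)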

What Is Proxy Rotation and Why Do You Need It?

If you make too many requests from a single IP address, websites may block you through rate limiting or outright IP bans. This is where proxy rotation comes into play.

Systematically switching between different proxy servers while making web requests is one of the best ways to implement IP address rotation. This procedure helps you bypass common anti-scraping techniques and provides you with the following benefits:

  • Avoiding IP blocks: Distribute requests across multiple IPs, making it harder for websites to detect and block your scraping activity.
  • Bypassing rate limits: Websites often set request limits per IP address within a specific timeframe. Rotating proxies helps you continue scraping even after reaching these limits on one IP.
  • Accessing geo-restricted content: Some websites show different content based on geographic location. Proxy rotation with proxies from different countries allows you to access location-specific content.

How to Rotate Proxies in Python: 3 Approaches

Now that you know what proxies are and why to rotate them, get ready for some step-by-step tutorials. The following sections will show you how to rotate proxies in Python using different approaches and libraries.

The target site for all scripts will be the /ip endpoint from the HTTPBin project. This special endpoint returns the caller’s IP address, making it perfect for testing whether the IP seen by the server is rotating.
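
For instance, calling the endpoint without any proxy returns your own public IP, which gives you a baseline to compare the rotated IPs against:

import requests

# Without a proxy, httpbin.io/ip reports your machine's public IP
print(requests.get("https://httpbin.io/ip").json())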

Time to rotate some proxies in Python!

Requirements

To replicate the tutorials for rotating proxies with Python, you must have Python 3.7 or higher installed on your machine.

Prerequisites

Suppose you call the main folder of your project proxy_rotation/. At the end of this step, the folder will have the following structure:

proxy_rotation/
    ├── requests_file.py
    ├── async.py
    ├── scrapy_rotation/
    └── venv/

Where:

  • requests_file.py and async.py are the Python files that store the Requests and AIOHTTP proxy rotation logic, respectively.
  • scrapy_rotation/ is a folder that contains a Scrapy project. You will create and instantiate it later.
  • venv/ contains the virtual environment.

You can create the venv/ virtual environment directory like so:

python -m venv venv

To activate it, on Windows, run:

venv\Scripts\activate

Equivalently, on macOS and Linux, execute:

source venv/bin/activate

As a final prerequisite, you need to retrieve a list of proxies. For this article, you can use our free proxies list.
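
If you save those proxies to a local file, say a hypothetical proxies.txt with one proxy URL per line, you can load them into a Python list with a small helper like this:

# Hypothetical helper: load proxies from a local "proxies.txt" file,
# with one "http://HOST:PORT" entry per line
def load_proxies(path="proxies.txt"):
    with open(path) as f:
        # Skip blank lines and strip surrounding whitespace
        return [line.strip() for line in f if line.strip()]

proxies = load_proxies()
print(f"Loaded {len(proxies)} proxies")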

How to Rotate Proxies in Python With Requests

In this tutorial section, you will learn how to rotate proxies in Python with Requests.

Step #1: Install Dependencies

With the virtual environment activated, install Requests with:

pip install requests

Step #2: Define the Rotation Logic

To rotate proxies in Python with Requests, write the following code into the requests_file.py file:

import random
import requests

# Define a list of proxies and return a random one
def get_random_proxy():
    proxies = [
        "http://PROXY_1:PORT_X",
        "http://PROXY_2:PORT_Y",
        "http://PROXY_3:PORT_X",
        # Add more proxies here...
    ]

    # Randomly pick a proxy
    return random.choice(proxies)

for i in range(3):
    proxy_url = get_random_proxy()
    proxies = {
        "http": proxy_url,
        "https": proxy_url,
    }
    response = requests.get("https://httpbin.io/ip", proxies=proxies)
    print(response.text)

Where:

  • The get_random_proxy() function stores the list of proxies you have retrieved and returns a random one with the method random.choice().
  • The for loop runs three iterations. On each one, it picks a random proxy, builds the proxies dictionary, and performs the actual request with the method requests.get(). For more information, read our guide on using a proxy with Python Requests.
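
Keep in mind that free proxies fail often. In a real-world script, you may therefore want to catch connection errors and retry with a different proxy. Below is a minimal sketch of that idea, reusing the get_random_proxy() function from the script above:

import requests

# Assumes get_random_proxy() from the script above is in scope
def fetch_with_rotation(url, max_attempts=5):
    # Try up to max_attempts random proxies before giving up
    for _ in range(max_attempts):
        proxy_url = get_random_proxy()
        proxies = {
            "http": proxy_url,
            "https": proxy_url,
        }
        try:
            response = requests.get(url, proxies=proxies, timeout=10)
            response.raise_for_status()
            return response
        except requests.RequestException as e:
            print(f"Proxy {proxy_url} failed: {e}")
    raise RuntimeError("All proxy attempts failed")

print(fetch_with_rotation("https://httpbin.io/ip").text)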

Step #3: Launch the Script

To launch the script, run:

python requests_file.py

Below is the expected response:

{
  "origin": "PROXY_3:PORT_K"
}
{
  "origin": "PROXY_1:PORT_N"
}
{
  "origin": "PROXY_2:PORT_P"
}

Wonderful! The exit IPs of your script have been rotated as desired.

How to Rotate Proxies in Python With AIOHTTP

The main limitation of the randomized approach based on the Requests library is that it uses one proxy at a time. That means you need to wait for each request to complete before the next proxy can be used.

To avoid that limitation, you can use AIOHTTP. This library enables you to make asynchronous requests, thus using multiple proxies simultaneously in a non-blocking way. In other words, it allows you to rotate the proxies from your list by making asynchronous, parallel requests to the target server. See AIOHTTP in action in our guide on asynchronous web scraping.

The following section shows how to rotate proxies in Python with AIOHTTP.

Step #1: Install Dependencies

With the virtual environment activated, install AIOHTTP with:

pip install aiohttp

Step #2: Define the Rotation Logic

To rotate proxies in Python with AIOHTTP, write the following code into the async.py file:

import asyncio
import aiohttp

# Define a list of proxies
proxies_list = [
    "http://PROXY_1:PORT_X",
    "http://PROXY_2:PORT_Y",
    "http://PROXY_3:PORT_X",
    # Add more proxies here...
]

async def fetch_ip(session, proxy_address, attempt):
    print(f"Attempt {attempt} using proxy: {proxy_address}")
    async with session.get("https://httpbin.io/ip", proxy=proxy_address) as response:
        json_response = await response.json()
        print(f"Response from httpbin.io/ip (Attempt {attempt}):")
        print(f"IP Address: {json_response.get('origin', 'Unknown')}")
        print("-" * 40)
        return json_response

async def main():
    async with aiohttp.ClientSession() as session:
        tasks = []
        num_attempts = 3
        for i in range(num_attempts):
            # Rotate proxies using the modulus operator.
            proxy_address = proxies_list[i % len(proxies_list)]
            tasks.append(fetch_ip(session, proxy_address, i + 1))
        # Run all requests concurrently
        await asyncio.gather(*tasks)

# Launch the script
asyncio.run(main())

This code does the following:

  • The fetch_ip() function manages the requests by taking the session, a proxy, and the attempt number. In particular, it sends a GET request to the target website and prints the response.
  • The main() function:
    • Creates a session with the method aiohttp.ClientSession() for managing HTTP connections.
    • Sets up a list of tasks to send multiple requests concurrently and the number of attempts.
    • Rotates the proxies using the modulus operator (i % len(proxies_list)) to ensure proxies are reused in a round-robin fashion if the number of requests exceeds the number of proxies. Discover more in our tutorial on how to set a proxy in AIOHTTP.
    • Uses the method asyncio.gather() to run all the tasks concurrently (a failure-tolerant variant is sketched below).
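
With free proxies, some requests in a batch will inevitably fail. A common refinement, shown in this minimal sketch, is to pass return_exceptions=True to asyncio.gather() so that one dead proxy does not abort the whole batch:

import asyncio
import aiohttp

# Assumes the proxies_list defined in the script above is in scope
async def safe_fetch_ip(session, proxy_address):
    timeout = aiohttp.ClientTimeout(total=10)
    async with session.get("https://httpbin.io/ip", proxy=proxy_address, timeout=timeout) as response:
        return await response.json()

async def main():
    async with aiohttp.ClientSession() as session:
        tasks = [safe_fetch_ip(session, proxy) for proxy in proxies_list]
        # Failed requests come back as exception objects instead of
        # crashing the whole batch
        results = await asyncio.gather(*tasks, return_exceptions=True)
        for proxy, result in zip(proxies_list, results):
            if isinstance(result, Exception):
                print(f"{proxy} failed: {result}")
            else:
                print(f"{proxy} -> {result.get('origin', 'Unknown')}")

asyncio.run(main())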

Step #3: Launch the Script

To launch the script, run:

python async.py

This is the expected response:

Attempt 1 using proxy: http://PROXY_1:PORT_X
Attempt 2 using proxy: http://PROXY_2:PORT_Y
Attempt 3 using proxy: http://PROXY_3:PORT_Z

Response from httpbin.io/ip (Attempt 3):
IP Address: xxx.xxx.xxx.xxx
----------------------------------------
Response from httpbin.io/ip (Attempt 1):
IP Address: yyy.yyy.yyy.yyy
----------------------------------------
Response from httpbin.io/ip (Attempt 2):
IP Address: zzz.zzz.zzz.zzz
----------------------------------------

Amazing! The IPs are being rotated as expected.

How to Rotate Proxies With Python Scrapy

In a previous article, we discussed the possibility of rotating proxies in Python with Scrapy by using scrapy-rotating-proxies.

In this guided section, you will learn how to do that!

Step #1: Install Dependencies

With the virtual environment activated, install the necessary libraries:

pip install scrapy scrapy-rotating-proxies

Step #2: Create a New Scrapy Project

Inside the main folder of your project (proxy_rotation/), instantiate a new Scrapy project with this command:

scrapy startproject scrapy_rotation

This will create a new subfolder called scrapy_rotation/ that has the following structure:

scrapy_rotation/
  ├── scrapy_rotation/
  │   ├── __init__.py
  │   ├── items.py # Defines the data structure for scraped items
  │   ├── middlewares.py # Custom middlewares
  │   ├── pipelines.py # Handles post-processing of scraped data
  │   ├── settings.py # Project settings
  │   └── spiders/ # Folder for all spiders
  └── scrapy.cfg # Scrapy configuration file

From the main folder (proxy_rotation/), move to the scrapy_rotation/ one:

cd scrapy_rotation

You can now create a new spider that points to the target website by running:

scrapy genspider rotation http://httpbin.io/ip

This command creates a rotation.py file inside the spiders/ folder.

Step #3: Define the Rotation Logic

The proxy rotation logic can be managed by modifying the settings.py file with the following settings:

# Enable the rotating proxies middleware
DOWNLOADER_MIDDLEWARES = {
    "rotating_proxies.middlewares.RotatingProxyMiddleware": 610,
    "rotating_proxies.middlewares.BanDetectionMiddleware": 620,
}

# List of proxies to rotate
ROTATING_PROXY_LIST = [
    "http://PROXY_1:PORT_X",
    "http://PROXY_2:PORT_Y",
    "http://PROXY_3:PORT_Z",
    # Add more proxies as needed
]

# Configure retry settings
RETRY_TIMES = 5  # Number of retries for failed requests
RETRY_HTTP_CODES = [500, 502, 503, 504, 408]  # HTTP codes to retry

What manages proxy rotation here is the rotating_proxies.middlewares.RotatingProxyMiddleware entry (with priority 610) in DOWNLOADER_MIDDLEWARES. In particular, this middleware selects a proxy from ROTATING_PROXY_LIST and assigns it to each request.

Also, the rotating_proxies.middlewares.BanDetectionMiddleware entry (with priority 620) allows the scraper to detect whether an IP has been banned or blocked by the target website. If a request fails for that reason, the middleware retries it with a new proxy. So, it works closely with RotatingProxyMiddleware to ensure that banned proxies are automatically avoided.
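
The scrapy-rotating-proxies package also exposes a few optional settings you can add to settings.py to tune this behavior. For instance, with illustrative values:

# Optional scrapy-rotating-proxies settings (illustrative values)

# Load proxies from a file (one per line) instead of ROTATING_PROXY_LIST
# ROTATING_PROXY_LIST_PATH = "/path/to/proxies.txt"

# How many times to retry a page with different proxies before giving up
ROTATING_PROXY_PAGE_RETRY_TIMES = 5

# Backoff window (in seconds) applied to proxies marked as dead
ROTATING_PROXY_BACKOFF_BASE = 300
ROTATING_PROXY_BACKOFF_CAP = 3600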

Now, in the rotation.py file inside the spiders/ folder you can write the following:

import scrapy

class IpSpider(scrapy.Spider):
    name = "ip_spider"
    start_urls = ["http://httpbin.io/ip"]

    def parse(self, response):
        # Extract and print the IP address from the response
        ip = response.json().get("origin")
        self.log(f"IP Address: {ip}")

This class defines the spider and logs the IP address returned by each request.
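
Note that, with a single URL in start_urls, the spider makes only one request, so only one proxy gets used. To actually observe rotation, you can make the spider issue several requests to the same endpoint, as in this hedged variant (dont_filter=True disables Scrapy's duplicate-request filter):

import scrapy

class IpSpider(scrapy.Spider):
    name = "ip_spider"

    def start_requests(self):
        # Issue several identical requests so that different proxies are used;
        # dont_filter=True prevents Scrapy from deduplicating the same URL
        for _ in range(5):
            yield scrapy.Request("http://httpbin.io/ip", dont_filter=True)

    def parse(self, response):
        # Extract and log the IP address from the response
        ip = response.json().get("origin")
        self.log(f"IP Address: {ip}")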

Step #4: Launch the Script

To launch the spider, you have to use the name attribute defined in the IpSpider class, which is ip_spider:

scrapy crawl ip_spider

Scrapy's CLI output is quite verbose. So, if everything went well, among the other information you will find log lines like these:

2025-02-18 14:55:17 [rotating_proxies.expire] DEBUG: Proxy <http://PROXY_1:PORT_X> is GOOD
2025-02-18 14:55:17 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://httpbin.io/robots.txt> (referer: None)
2025-02-18 14:55:24 [rotating_proxies.middlewares] INFO: Proxies(good: 1, dead: 0, unchecked: 2, reanimated: 0, mean backoff time: 0s)

Limitations of the Above Approaches to Proxy Rotation in Python

The proxy rotation methods mentioned above are useful, but they come with some limitations:

  • They require you to manually retrieve and manage a list of proxies.
  • They involve boilerplate code.
  • They may still result in IP bans if you don’t use high-quality proxy servers.

If you are looking for a more efficient and effective way to handle proxy rotation in Python, Bright Data offers some of the best rotating proxies on the market. With just a single proxy URL, you can integrate them into your HTTP client or scraping library. That eliminates the need for boilerplate code and manual rotation management.
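
For example, here is a minimal sketch of what that integration might look like with Requests. The host, port, and credentials below are placeholders, not real values:

import requests

# Hypothetical rotating-proxy endpoint: host, port, and credentials
# are placeholders, not real values
proxy_url = "http://USERNAME:PASSWORD@ROTATING_PROXY_HOST:PORT"

proxies = {
    "http": proxy_url,
    "https": proxy_url,
}

# Each request can exit from a different IP, with no manual rotation logic
for _ in range(3):
    response = requests.get("https://httpbin.io/ip", proxies=proxies)
    print(response.text)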

Other key benefits of this approach are:

  • Automatic IP rotation with configurable sticky IPs
  • Access to 72+ million residential IPs
  • Geolocation control over the proxy server locations
  • Support for HTTP, HTTPS, and SOCKS protocols

Simplify your proxy management—discover our auto-rotating proxies!

Conclusion

In this article, you learned how to rotate proxies in Python using three different libraries: Requests, AIOHTTP, and Scrapy. As demonstrated in the guided sections above, the process is not complex and requires only a few lines of code.

However, that approach comes with a few drawbacks:

  • The code is boilerplate-heavy, making your script less maintainable.
  • You need to manage and provide access to a large list of proxy servers.

Fortunately, you can skip all that hassle with Bright Data’s auto-rotating proxies—a more efficient solution to rotate proxies in Python.

Bright Data controls the best proxy servers in the world, serving Fortune 500 companies and more than 20,000 customers. Its offering includes a wide range of proxy types.

Create a free Bright Data account today to test our proxies and scraping solutions!

No credit card required