Target is one of the most difficult eCommerce sites to scrape today. Between dynamic CSS selectors, lazy-loaded content and a powerful blocking system, it can feel impossible. By the end of this guide, you’ll be able to scrape Target like a pro. We’ll cover two different ways of extracting product listings.
How To Scrape Target With Python
We’ll go through the process of scraping Target listings manually using Python. Target’s content is dynamically loaded, so results are often spotty at best without a headless browser. First, we’ll attempt the scrape with Requests and BeautifulSoup. Then, we’ll extract the same content with Selenium.
Inspecting the Site
Before we start coding, we need to inspect Target’s results page. If you open the page in your browser’s developer tools, you should notice that every product card carries a data-test value of @web/site-top-of-funnel/ProductCardWrapper. We’ll use this value in a CSS attribute selector when extracting our data.
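As a quick illustration, here’s how that data-test value plugs into the attribute selector we’ll use throughout this guide. The variable names are just examples, and the value itself reflects Target’s markup at the time of writing, so it may change.

#data-test value observed on Target's product cards (subject to change)
card_selector = "@web/site-top-of-funnel/ProductCardWrapper"
#CSS attribute selector that matches each product card wrapper
product_card_css = f"div[data-test='{card_selector}']"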
Python Requests and BeautifulSoup Won’t Work
If you don’t have Requests and BeautifulSoup, you can install them via pip.
pip install requests beautifulsoup4
The code below outlines a basic scraper. We set our headers with our Bright Data API key and an application/json content type. Our data dictionary holds the actual configuration, such as the zone name, the Target URL and the response format. After finding the product cards, we iterate through them and extract each product’s title, link and price.
All of our extracted products get stored in an array, and we write that array to a JSON file once the scrape is complete. Notice the continue statements when elements aren’t found: if a product is on the page without these elements, it hasn’t finished loading, and without a browser we can’t render the page and wait for the content to appear.
import requests
from bs4 import BeautifulSoup
import json

#headers to send to the web unlocker api
headers = {
    "Authorization": "Bearer your-bright-data-api-key",
    "Content-Type": "application/json"
}

#our configuration
data = {
    "zone": "web_unlocker1",
    "url": "https://www.target.com/s?searchTerm=laptop",
    "format": "raw",
}

#send the request to the api
response = requests.post(
    "https://api.brightdata.com/request",
    json=data,
    headers=headers
)

#array for scraped products
scraped_products = []
card_selector = "@web/site-top-of-funnel/ProductCardWrapper"

#parse the response with beautifulsoup
soup = BeautifulSoup(response.text, "html.parser")
cards = soup.select(f"div[data-test='{card_selector}']")

#log the number of cards found for debugging purposes
print("products found", len(cards))

#iterate through the cards
for card in cards:
    #find the product data
    #if a product hasn't loaded yet, drop it from the list
    link_element = card.select_one("a[data-test='product-title']")
    if not link_element:
        continue
    title = (link_element.get("aria-label") or link_element.get_text()).replace('"', "").strip()
    link = link_element.get("href")
    price = card.select_one("span[data-test='current-price'] span")
    if not price:
        continue
    product_info = {
        "title": title,
        "link": f"https://www.target.com{link}",
        "price": price.text
    }
    #add the extracted product to our scraped data
    scraped_products.append(product_info)

#write our extracted data to a JSON file
with open("output.json", "w", encoding="utf-8") as file:
    json.dump(scraped_products, file, indent=4)
Skipping unrendered objects severely limits our extracted data. As you can see in the results below, we were only able to extract four complete results.
With Requests and BeautifulSoup, we can get through to the page but we’re unable to load all the results.
Scraping With Python Selenium
We need a browser to render the page. This is where Selenium comes in. Run the command below to install Selenium.
pip install selenium
In the code below, we connect to a remote instance of Selenium using Scraping Browser. The extraction logic is largely the same as in the example above; most of the extra code is error handling and explicit waits for the page content to load.
from selenium.webdriver import Remote, ChromeOptions
from selenium.webdriver.chromium.remote_connection import ChromiumRemoteConnection
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import NoSuchElementException, TimeoutException
import json
import time
import sys

AUTH = 'brd-customer-<your-username>-zone-<your-zone-name>:<your-password>'
SBR_WEBDRIVER = f'https://{AUTH}@brd.superproxy.io:9515'

def safe_print(*args):
    #force safe ascii-only output on windows terminals
    text = " ".join(str(arg) for arg in args)
    try:
        sys.stdout.write(text + '\n')
    except UnicodeEncodeError:
        sys.stdout.write(text.encode('ascii', errors='replace').decode() + '\n')

#our actual runtime
def main():
    #array for scraped products
    scraped_products = []
    card_selector = "@web/site-top-of-funnel/ProductCardWrapper"
    safe_print('Connecting to Bright Data SBR Browser API...')

    #remote connection config to scraping browser
    sbr_connection = ChromiumRemoteConnection(SBR_WEBDRIVER, 'goog', 'chrome')

    #launch scraping browser
    with Remote(sbr_connection, options=ChromeOptions()) as driver:
        safe_print('Connected! Navigating...')
        driver.get("https://www.target.com/s?searchTerm=laptop")

        #set a 30 second timeout for items to load
        wait = WebDriverWait(driver, 30)
        safe_print('Waiting for initial product cards...')
        try:
            wait.until(
                EC.presence_of_element_located((By.CSS_SELECTOR, f"div[data-test='{card_selector}']"))
            )
        except TimeoutException:
            safe_print("No product cards loaded at all — possible block or site structure change.")
            return

        #get the document height for some scrolling math
        safe_print('Starting pixel-step scroll loop...')
        last_height = driver.execute_script("return document.body.scrollHeight")
        scroll_attempt = 0
        max_scroll_attempts = 10

        #gently scroll down the page
        while scroll_attempt < max_scroll_attempts:
            driver.execute_script("window.scrollBy(0, window.innerHeight);")
            time.sleep(1.5)
            new_height = driver.execute_script("return document.body.scrollHeight")
            if new_height == last_height:
                safe_print("Reached page bottom.")
                break
            last_height = new_height
            scroll_attempt += 1

        safe_print("Scrolling done — doing final settle nudges to keep session alive...")
        try:
            for _ in range(5):
                driver.execute_script("window.scrollBy(0, -50); window.scrollBy(0, 50);")
                time.sleep(1)
        except Exception as e:
            safe_print(f"Connection closed during final settle: {type(e).__name__} — {e}")
            return

        #now that everything's loaded, find the product cards
        safe_print("Scraping product cards...")
        try:
            product_cards = driver.find_elements(By.CSS_SELECTOR, f"div[data-test='{card_selector}']")
            safe_print(f"Found {len(product_cards)} cards.")
        except Exception as e:
            safe_print(f"Failed to find product cards: {type(e).__name__} — {e}")
            return

        #drop empty cards and extract data from the rest
        for card in product_cards:
            inner_html = card.get_attribute("innerHTML").strip()
            if not inner_html or len(inner_html) < 50:
                continue
            safe_print("\n--- CARD HTML (truncated) ---\n", inner_html[:200])
            try:
                link_element = card.find_element(By.CSS_SELECTOR, "a[data-test='product-title']")
                title = link_element.get_attribute("aria-label") or link_element.text.strip()
                link = link_element.get_attribute("href")
            except NoSuchElementException:
                safe_print("Link element not found in card, skipping.")
                continue
            try:
                price_element = card.find_element(By.CSS_SELECTOR, "span[data-test='current-price'] span")
                price = price_element.text.strip()
            except NoSuchElementException:
                price = "N/A"
            product_info = {
                "title": title,
                "link": f"https://www.target.com{link}" if link and link.startswith("/") else link,
                "price": price
            }
            scraped_products.append(product_info)

    #write the extracted products to a json file
    if scraped_products:
        with open("scraped-products.json", "w", encoding="utf-8") as file:
            json.dump(scraped_products, file, indent=2)
        safe_print(f"Done! Saved {len(scraped_products)} products to scraped-products.json")
    else:
        safe_print("No products scraped — nothing to save.")

if __name__ == '__main__':
    main()
As you can see, we get more complete results using Selenium. Instead of four listings, we’re able to extract eight, which is much better than our first attempt.
Our results here are better, but we can improve them even more, with less work and zero code.
How To Scrape Target With Claude
Next, we’ll perform the same task using Claude with Bright Data’s MCP server. Get started by opening Claude Desktop, and make sure you’ve got active Web Unlocker and Scraping Browser zones. Scraping Browser isn’t strictly required by the MCP server, but Target requires a real browser.
Configuring The MCP Connection
From Claude Desktop, click “File” and choose “Settings.” Click on “Developer” and then choose “Edit Config.” Copy and paste the code below into your config file. Make sure to replace the API key and zone names with your own.
{
    "mcpServers": {
        "Bright Data": {
            "command": "npx",
            "args": ["@brightdata/mcp"],
            "env": {
                "API_TOKEN": "<your-brightdata-api-token>",
                "WEB_UNLOCKER_ZONE": "<optional—override default zone name 'mcp_unlocker'>",
                "BROWSER_AUTH": "<optional—enable full browser control via Scraping Browser>"
            }
        }
    }
}
After saving the config and restarting Claude, you can open up your developer settings and you should see Bright Data as an option. If you click on Bright Data to inspect your configuration, it should look similar to what you see in the image below.
Once connected, check with Claude to make sure it’s got access to the MCP Server. The prompt below should be just fine.
Are you connected to the Bright Data MCP?
If everything is hooked up, Claude should respond similarly to the image below. Claude acknowledges the connection and then explains what it can do.
Running the Actual Scrape
From this point, the job is easy. Give Claude your Target listings URL and let it go to work. The prompt below should do the trick.
Please extract laptops from https://www.target.com/s?searchTerm=laptop
During this process, don’t be surprised if you get popups asking if Claude can use certain tools. This is a nice security feature. Claude won’t use these tools unless you explicitly give it permission.
Claude will likely ask permission to use tools like scrape_as_markdown, extract and possibly a few others. Grant access when prompted; without these tools, Claude can’t scrape the results.
Storing the Results
Next, ask Claude to store the results in a JSON file. Within seconds, Claude will write all the extracted results to a highly detailed, well-structured JSON file.
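A short prompt along these lines works well. The exact wording below is just an example, not a required format.
Please save these laptop results to a JSON file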
If you choose to view the file, it should look similar to the screenshot below. Claude extracts far more detail about each product than we did initially.
Target is tough, but not impossible. Manually, you need a smart approach with a real browser, automated scrolling, waits and a proxy connection. You can also create an AI agent that knows exactly how to handle Target’s dynamic content. Bright Data’s Scraping Browser and MCP Server make it possible — whether you’re a developer or you’d rather let an AI handle the heavy lifting.
Bright Data also offers a dedicated Target Scraper API that delivers the results to your preferred storage.
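If you’re curious what that flow looks like, the snippet below is a minimal sketch assuming Bright Data’s general dataset trigger endpoint. The dataset ID is a placeholder you’d copy from your dashboard, and the exact parameters for the Target scraper may differ, so check the docs before relying on it.

import requests

#minimal sketch of triggering a scraper run via Bright Data's dataset trigger endpoint
#the dataset id below is a placeholder - copy the real one from your dashboard
API_TOKEN = "your-bright-data-api-token"
DATASET_ID = "<your-target-scraper-dataset-id>"

response = requests.post(
    "https://api.brightdata.com/datasets/v3/trigger",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    params={"dataset_id": DATASET_ID, "format": "json"},
    json=[{"url": "https://www.target.com/s?searchTerm=laptop"}],
)
#the response includes a snapshot id you can poll to collect the finished results
print(response.json())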
Jacob Nulty is a Detroit-based software developer and technical writer exploring AI and human philosophy, with experience in Python, Rust, and blockchain.