Bedrock Agent in Python via AWS CDK with Bright Data’s SERP API

This guide shows how to use AWS CDK in Python to build an AWS Bedrock agent that uses Bright Data’s SERP API for real-time web search data.

In this article, you will see:

  • What the AWS Cloud Development Kit (CDK) is and how it can be used to define and deploy cloud infrastructure.
  • Why you should give AWS Bedrock AI agents, built with AWS CDK, access to web search results using an AI-ready tool like Bright Data’s SERP API.
  • How to build an AWS Bedrock agent integrated with the SERP API using AWS CDK in Python.

Let’s dive in!

What Is AWS Cloud Development Kit (CDK)?

The AWS Cloud Development Kit, also known as AWS CDK, is an open-source framework to build cloud infrastructure as code using modern programming languages. It equips you with what you need to provision AWS resources and deploy applications via AWS CloudFormation using programming languages like TypeScript, Python, Java, C#, and Go.

Thanks to AWS CDK, you can also build AI agents for Amazon Bedrock programmatically—exactly what you will do in this tutorial!

Why Give AWS Bedrock AI Agents Access to Web Search Results?

Large language models are trained on datasets whose knowledge stops at a certain point in time. As a result, when asked about recent events, they tend to produce outdated, inaccurate, or hallucinated responses. That is especially problematic for AI agents that require up-to-date information.

This can be solved by giving your AI agent the ability to fetch fresh, reliable data in a RAG (Retrieval-Augmented Generation) setup. For instance, the AI agent could perform web searches to gather verifiable information, expanding its knowledge and improving accuracy.

Building a custom AWS Lambda function to scrape search engines is possible, but quite challenging. You would have to handle JavaScript rendering, CAPTCHAs, changing site structures, and IP blocks.

A better approach is to use a feature-rich SERP API, like Bright Data’s SERP API. This handles proxies, unblocking, scalability, data formatting, and much more for you. By integrating it with AWS Bedrock using a Lambda function, your AI agent built via AWS CDK will be able to access live search results for more trustworthy responses.
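To get a feel for the service before wiring it into AWS, here is a minimal sketch of a direct SERP API call using only Python’s standard library. The build_serp_request helper name and the placeholder credentials are illustrative assumptions; the endpoint and payload fields follow the same /request format used later in this tutorial’s Lambda function.

```python
import json
import urllib.parse
import urllib.request


def build_serp_request(api_key: str, zone: str, query: str) -> urllib.request.Request:
    """Build a Bright Data SERP API request for a Google search query."""
    search_url = f"https://www.google.com/search?q={urllib.parse.quote(query)}"
    payload = json.dumps({
        "zone": zone,
        "url": search_url,
        "format": "raw",
        "data_format": "markdown",  # Get the SERP as an AI-ready Markdown document
    }).encode("utf-8")
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    return urllib.request.Request("https://api.brightdata.com/request", data=payload, headers=headers)


# With real credentials, send the request like this:
# with urllib.request.urlopen(build_serp_request("<API_KEY>", "serp_api", "latest AI news")) as resp:
#     print(resp.read().decode("utf-8"))
```

The same request-building logic will reappear inside the Lambda function in Step #8.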

How to Develop an AI Agent with SERP API Integration Using AWS CDK in Python

In this step-by-step section, you will learn how to use AWS CDK with Python to build an AWS Bedrock AI agent. This will be capable of fetching data from search engines via the Bright Data SERP API.

The integration will be implemented through a Lambda function (which calls the SERP API) that the agent can invoke as a tool. Specifically, to create an Amazon Bedrock agent, the main components are:

  • Action Group: Defines the functions the agent can see and call.
  • Lambda Function: Implements the logic to query the Bright Data SERP API.
  • AI Agent: Orchestrates interactions between the foundation models, functions, and user requests.

This setup will be implemented entirely with AWS CDK in Python. To achieve the same results using the visual AWS Bedrock console, see our Amazon Bedrock + Bright Data guide.

Follow the steps below to build an AWS Bedrock AI agent using AWS CDK, enhanced with real-time web search capabilities via the SERP API!

Prerequisites

To follow along with this tutorial, you need:

  • An AWS account with permissions to create IAM roles, Lambda functions, Secrets Manager secrets, and Bedrock resources.
  • Node.js and npm installed (used to install the AWS CDK CLI).
  • Python installed locally (the project uses a Python virtual environment).
  • A Bright Data account (you will configure it in Step #5).

Step #1: Install and Authorize the AWS CLI

Before proceeding with AWS CDK, you need to install the AWS CLI and configure it so your terminal can authenticate with your AWS account.

Note: If you already have the AWS CLI installed and configured for authentication, skip this step and move on to the next one.

Install the AWS CLI by following the official installation guide for your operating system. Once installed, verify the installation by running:

aws --version

You should see output similar to:

aws-cli/2.31.32 Python/3.13.9 Windows/11 exe/AMD64

Then, run the configure command to set up your credentials:

aws configure

You will be prompted to enter:

  1. AWS Access Key ID
  2. AWS Secret Access Key
  3. Default region name (e.g., us-east-1)
  4. Default output format (optional, e.g., json)

Fill in the first three fields, as they are required for CDK development and deployment. If you are wondering where to get that information:

  1. Go to AWS and sign in.
  2. In the top-right corner, click your account name to open the account menu and select the “Security Credentials” option.
  3. Under the “Access Keys” section, create a new key. Save both the “Access Key ID” and “Secret Access Key” somewhere safe.

Done! Your machine can connect to your AWS account via the CLI. You are ready to proceed with AWS CDK development.

Step #2: Install AWS CDK

Install the AWS CDK globally on your system using the aws-cdk npm package:

npm install -g aws-cdk

Then, verify the installed version by running:

cdk --version

You should see output similar to:

2.1031.2 (build 779352d)

Note: Version 2.174.3 or later is required for AI agent development and deployment using AWS CDK with Python.

Great! You now have the AWS CDK CLI installed locally.

Step #3: Set Up Your AWS CDK Python Project

Start by creating a new project folder for your AWS CDK + Bright Data SERP API AI agent.
For example, you can name it aws-cdk-bright-data-web-search-agent:

mkdir aws-cdk-bright-data-web-search-agent

Navigate into the folder:

cd aws-cdk-bright-data-web-search-agent

Then, initialize a new Python-based AWS CDK application through the init command:

cdk init app --language python

This may take a moment, so be patient while the CDK CLI sets up your project structure.

Once initialized, your project folder should look like this:

aws-cdk-bright-data-web-search-agent
├── .git/
├── .venv/
├── aws_cdk_bright_data_web_search_agent/
│   ├── __init__.py
│   └── aws_cdk_bright_data_web_search_agent_stack.py
├── tests/
│   ├── __init__.py
│   └── unit/
│       ├── __init__.py
│       └── test_aws_cdk_bright_data_web_search_agent_stack.py
├── .gitignore
├── app.py
├── cdk.json
├── README.md
├── requirements.txt
├── requirements-dev.txt
└── source.bat

What you need to focus on are these two files:

  • app.py: Contains the top-level definition of the AWS CDK application.
  • aws_cdk_bright_data_web_search_agent/aws_cdk_bright_data_web_search_agent_stack.py: Defines the stack for the web search agent (this is where you will implement your AI agent logic).

For more details, refer to the official AWS guide on working with CDK in Python.

Now, load your project in your favorite Python IDE, such as PyCharm or Visual Studio Code with the Python extension.

Notice that the cdk init command automatically creates a Python virtual environment in the project. On Linux or macOS, activate it with:

source .venv/bin/activate

Or equivalently, on Windows, run:

.venv\Scripts\activate

Then, inside the activated virtual environment, install all required dependencies:

python -m pip install -r requirements.txt

Fantastic! You now have an AWS CDK Python environment set up for AI agent development.

Step #4: Run AWS CDK Bootstrapping

Bootstrapping is the process of preparing your AWS environment for use with the AWS Cloud Development Kit. Before deploying a CDK stack, your environment must be bootstrapped.

In simpler terms, this process sets up the following resources in your AWS account:

  • An Amazon S3 bucket: Stores your CDK project files, such as AWS Lambda function code and other assets.
  • An Amazon ECR repository: Stores Docker images.
  • AWS IAM roles: Grant the necessary permissions for the AWS CDK to perform deployments. (For more details, see the AWS documentation on IAM roles created during bootstrapping.)

To start the CDK bootstrap process, run the following command in your project’s folder:

cdk bootstrap

In the AWS CloudFormation service, this command creates a stack called “CDKToolkit” that contains all the resources required to deploy CDK applications.

Verify that by reaching the CloudFormation console and checking the “Stacks” page:
Note the “CDKToolkit” stack in CloudFormation

You will see a “CDKToolkit” stack. Follow its link and you should see something like:
The “CDKToolkit” stack generated by the bootstrapping process
For more information on how the bootstrapping process works, why it is required, and what happens behind the scenes, refer to the official AWS CDK documentation.

Step #5: Get Ready With Bright Data’s SERP API

Now that your AWS CDK environment is set up for development and deployment, complete the preliminary steps by preparing your Bright Data account and configuring the SERP API service. You can either follow the official Bright Data documentation or follow the steps below.

If you do not already have an account, create a Bright Data account. Alternatively, simply log in. In your Bright Data account, reach the “Proxies & Scraping” page. In the “My Zones” section, check for a “SERP API” row in the table:
Note the “serp_api” row in the table
If you do not see a row with a “SERP API” label, it means you have not set up a zone yet. Scroll down to the “SERP API” section and click “Create zone” to add one:
Configuring the SERP API zone
Create a SERP API zone and give it a name like serp_api (or any name you prefer). Remember the zone name you chose, as you will need it to reach the service via API.

On the SERP API product page, toggle the “Activate” switch to enable the zone:
Activating the SERP API zone
Lastly, follow the official guide to generate your Bright Data API key. Store it in a safe place, as you will need it shortly.

Terrific! You now have everything set up to use Bright Data’s SERP API in your AWS Bedrock AI agent developed with AWS CDK.

Step #6: Store Your CDK Application Secrets in AWS Secrets Manager

You just obtained sensitive information (i.e., your Bright Data API key and SERP API zone name). Instead of hardcoding these values in your Lambda function’s code, you should read them securely from AWS Secrets Manager.

Run the following Bash command to create a secret named BRIGHT_DATA containing your Bright Data API key and SERP API zone:

aws secretsmanager create-secret \
  --name "BRIGHT_DATA" \
  --description "API credentials for Bright Data SERP API integration" \
  --secret-string '{
    "BRIGHT_DATA_API_KEY": "<YOUR_BRIGHT_DATA_API_KEY>",
    "BRIGHT_DATA_SERP_API_ZONE": "<YOUR_BRIGHT_DATA_SERP_API_ZONE>"
  }'

Or, equivalently, in PowerShell:

aws secretsmanager create-secret `
  --name "BRIGHT_DATA" `
  --description "API credentials for Bright Data SERP API integration" `
  --secret-string '{"BRIGHT_DATA_API_KEY":"<YOUR_BRIGHT_DATA_API_KEY>","BRIGHT_DATA_SERP_API_ZONE":"<YOUR_BRIGHT_DATA_SERP_API_ZONE>"}'

Make sure to replace <YOUR_BRIGHT_DATA_API_KEY> and <YOUR_BRIGHT_DATA_SERP_API_ZONE> with the actual values you retrieved earlier.

That command will set up the BRIGHT_DATA secret, as you can confirm in the AWS Secrets Manager console under the “Secrets” page:
Note the “BRIGHT_DATA” secret in AWS Secrets Manager

If you click the “Retrieve secret value” button, you should see the BRIGHT_DATA_API_KEY and BRIGHT_DATA_SERP_API_ZONE secrets:
Note the "BRIGHT_DATA_API_KEY” and “BRIGHT_DATA_SERP_API_ZONE” secrets
Amazing! These secrets will be used to authenticate requests to the SERP API in a Lambda function you will define soon.
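As a quick sanity check, the sketch below shows how the secret can be read and parsed with boto3, mirroring what the Lambda will do later. The helper names (parse_bright_data_secret, fetch_bright_data_secret) are illustrative, and the sample JSON matches the secret you just created.

```python
import json


def parse_bright_data_secret(secret_string: str) -> tuple[str, str]:
    """Extract the API key and SERP API zone from the secret's JSON string."""
    secret = json.loads(secret_string)
    return secret["BRIGHT_DATA_API_KEY"], secret["BRIGHT_DATA_SERP_API_ZONE"]


def fetch_bright_data_secret() -> tuple[str, str]:
    """Read the BRIGHT_DATA secret from AWS Secrets Manager.

    Requires AWS credentials configured locally (e.g., via `aws configure`).
    """
    import boto3  # Imported here so the parsing helper stays dependency-free

    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId="BRIGHT_DATA")
    return parse_bright_data_secret(response["SecretString"])
```

Calling fetch_bright_data_secret() from a terminal session authenticated with your AWS account should return the two values you just stored.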

Step #7: Implement Your AWS CDK Stack

Now that you have set up everything needed to build your AI agent, the next step is to implement the AWS CDK stack in Python. First, it is important to understand what an AWS CDK stack is.

A stack is the smallest deployable unit in CDK. It represents a collection of AWS resources defined using CDK constructs. When you deploy a CDK app, all resources in the stack are deployed together as a single CloudFormation stack.

The default stack file is located at:

aws_cdk_bright_data_web_search_agent/aws_cdk_bright_data_web_search_agent_stack.py

Inspect it in Visual Studio Code:
The CDK stack script in Visual Studio Code
This contains a generic stack template where you will define your logic. Your task is to implement the full AWS CDK stack to build the AI agent with Bright Data SERP API integration, including Lambda functions, IAM roles, action groups, and the Bedrock AI agent.

Achieve all that with:

import aws_cdk.aws_iam as iam
from aws_cdk import (
  Aws,
  CfnOutput,
  Duration,
  Stack
)
from aws_cdk import aws_bedrock as bedrock
from aws_cdk import aws_lambda as _lambda
from constructs import Construct

# Define the required constants
AI_MODEL_ID = "amazon.nova-lite-v1:0" # The name of the LLM used to power the agent
ACTION_GROUP_NAME = "action_group_web_search"
LAMBDA_FUNCTION_NAME = "serp_api_lambda"
AGENT_NAME = "web_search_agent"

# Define the CDK Stack for deploying the Bright Data-powered Web Search Agent
class AwsCdkBrightDataWebSearchAgentStack(Stack):

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Grants Lambda permissions for logging and reading secrets
        lambda_policy = iam.Policy(
            self,
            "LambdaPolicy",
            statements=[
                # Permission to create CloudWatch log groups
                iam.PolicyStatement(
                    sid="CreateLogGroup",
                    effect=iam.Effect.ALLOW,
                    actions=["logs:CreateLogGroup"],
                    resources=[f"arn:aws:logs:{Aws.REGION}:{Aws.ACCOUNT_ID}:*"],
                ),
                # Permission to create log streams and put log events
                iam.PolicyStatement(
                    sid="CreateLogStreamAndPutLogEvents",
                    effect=iam.Effect.ALLOW,
                    actions=["logs:CreateLogStream", "logs:PutLogEvents"],
                    resources=[
                        f"arn:aws:logs:{Aws.REGION}:{Aws.ACCOUNT_ID}:log-group:/aws/lambda/{LAMBDA_FUNCTION_NAME}",
                        f"arn:aws:logs:{Aws.REGION}:{Aws.ACCOUNT_ID}:log-group:/aws/lambda/{LAMBDA_FUNCTION_NAME}:log-stream:*",
                    ],
                ),
                # Permission to read BRIGHT_DATA secrets from Secrets Manager
                iam.PolicyStatement(
                    sid="GetSecretsManagerSecret",
                    effect=iam.Effect.ALLOW,
                    actions=["secretsmanager:GetSecretValue"],
                    resources=[
                        f"arn:aws:secretsmanager:{Aws.REGION}:{Aws.ACCOUNT_ID}:secret:BRIGHT_DATA*",
                    ],
                ),
            ],
        )

        # Define the IAM role for the Lambda functions
        lambda_role = iam.Role(
            self,
            "LambdaRole",
            role_name=f"{LAMBDA_FUNCTION_NAME}_role",
            assumed_by=iam.ServicePrincipal("lambda.amazonaws.com"),
        )

        # Attach the policy to the Lambda role
        lambda_role.attach_inline_policy(lambda_policy)

        # Lambda function definition
        lambda_function = _lambda.Function(
            self,
            LAMBDA_FUNCTION_NAME,
            function_name=LAMBDA_FUNCTION_NAME,
            runtime=_lambda.Runtime.PYTHON_3_12,  # Python runtime
            architecture=_lambda.Architecture.ARM_64,
            code=_lambda.Code.from_asset("lambda"),  # Read the Lambda code from the "lambda/" folder
            handler=f"{LAMBDA_FUNCTION_NAME}.lambda_handler",
            timeout=Duration.seconds(120),
            role=lambda_role,  # Attach IAM role
            environment={"LOG_LEVEL": "DEBUG", "ACTION_GROUP": f"{ACTION_GROUP_NAME}"},
        )

        # Allow the Bedrock service to invoke Lambda functions
        bedrock_account_principal = iam.PrincipalWithConditions(
            iam.ServicePrincipal("bedrock.amazonaws.com"),
            conditions={
                "StringEquals": {"aws:SourceAccount": f"{Aws.ACCOUNT_ID}"},
            },
        )
        lambda_function.add_permission(
            id="LambdaResourcePolicyAgentsInvokeFunction",
            principal=bedrock_account_principal,
            action="lambda:invokeFunction",
        )

        # Define the IAM Policy for the Bedrock agent
        agent_policy = iam.Policy(
            self,
            "AgentPolicy",
            statements=[
                iam.PolicyStatement(
                    sid="AmazonBedrockAgentBedrockFoundationModelPolicy",
                    effect=iam.Effect.ALLOW,
                    actions=["bedrock:InvokeModel"],  # Give it the permission to call foundation model
                    resources=[f"arn:aws:bedrock:{Aws.REGION}::foundation-model/{AI_MODEL_ID}"],
                ),
            ],
        )

        # Trust relationship for agent role to allow Bedrock to assume it
        agent_role_trust = iam.PrincipalWithConditions(
            iam.ServicePrincipal("bedrock.amazonaws.com"),
            conditions={
                "StringLike": {"aws:SourceAccount": f"{Aws.ACCOUNT_ID}"},
                "ArnLike": {"aws:SourceArn": f"arn:aws:bedrock:{Aws.REGION}:{Aws.ACCOUNT_ID}:agent/*"},
            },
        )

        # Define the IAM role for the Bedrock agent
        agent_role = iam.Role(
            self,
            "AmazonBedrockExecutionRoleForAgents",
            role_name=f"AmazonBedrockExecutionRoleForAgents_{AGENT_NAME}",
            assumed_by=agent_role_trust,
        )
        agent_role.attach_inline_policy(agent_policy)

        # Define the action group for the AI agent
        action_group = bedrock.CfnAgent.AgentActionGroupProperty(
            action_group_name=ACTION_GROUP_NAME,
            description="Call Bright Data's SERP API to perform web searches and retrieve up-to-date information from search engines.",
            action_group_executor=bedrock.CfnAgent.ActionGroupExecutorProperty(lambda_=lambda_function.function_arn),
            function_schema=bedrock.CfnAgent.FunctionSchemaProperty(
                functions=[
                    bedrock.CfnAgent.FunctionProperty(
                        name=LAMBDA_FUNCTION_NAME,
                        description="Integrates with Bright Data's SERP API to perform web searches.",
                        parameters={
                            "search_query": bedrock.CfnAgent.ParameterDetailProperty(
                                type="string",
                                description="The search query for the Google web search.",
                                required=True,
                            )
                        },
                    ),
                ]
            ),
        )

        # Create and specify the Bedrock AI Agent
        agent_description = """
            An AI agent that can connect to Bright Data's SERP API to retrieve fresh web search context from search engines.
        """

        agent_instruction = """
            You are an agent designed to handle use cases that require retrieving and processing up-to-date information.
            You can access current data, including news and search engine results, through web searches powered by Bright Data's SERP API.
        """

        agent = bedrock.CfnAgent(
            self,
            AGENT_NAME,
            description=agent_description,
            agent_name=AGENT_NAME,
            foundation_model=AI_MODEL_ID,
            action_groups=[action_group],
            auto_prepare=True,
            instruction=agent_instruction,
            agent_resource_role_arn=agent_role.role_arn,
        )

        # Export the key outputs for deployment
        CfnOutput(self, "agent_id", value=agent.attr_agent_id)
        CfnOutput(self, "agent_version", value=agent.attr_agent_version)

The above code is a modified version of the official AWS CDK stack implementation for a web search AI agent.

What happens in those 150+ lines of code mirrors Step #1, Step #2, and Step #3 of our article “How to Integrate Bright Data SERP API with AWS Bedrock AI Agents”.

To better understand what is going on in that file, let’s break down the code into five functional blocks:

  1. Lambda IAM policy and role: Sets up the permissions for the Lambda function. The Lambda requires access to CloudWatch logs to record execution details and to AWS Secrets Manager to securely read the Bright Data API key and zone. An IAM role is created for the Lambda, and the appropriate policy is attached, ensuring it operates securely with only the permissions it needs.
  2. Lambda function definition: Here, the Lambda function itself is defined and configured. It specifies the Python runtime and architecture, points to the folder containing the Lambda code (which will be implemented in the next step), and sets up environment variables such as log level and action group name. Permissions are granted so that AWS Bedrock can invoke the Lambda, allowing the AI agent to trigger web searches through Bright Data’s SERP API.
  3. Bedrock agent IAM role: Creates the execution role for the Bedrock AI agent. The agent needs permissions to invoke the selected foundation AI model from the supported ones (amazon.nova-lite-v1:0, in this case). A trust relationship is defined so that only Bedrock can assume the role within your account. A policy is attached granting access to the model.
  4. Action group definition: An action group defines the specific actions the AI agent can perform. It links the agent to the Lambda function that enables it to execute web searches via Bright Data’s SERP API. The action group also includes metadata for input parameters, so the agent understands how to interact with the function and what information is required for each search.
  5. Bedrock AI agent definition: Defines the AI agent itself. It links the agent to the action group and the execution role and provides a clear description and instructions for its use.

After deploying the CDK stack, an AI agent will appear in your AWS console. That agent can autonomously perform web searches and retrieve up-to-date information by leveraging the Bright Data SERP API integration in the Lambda function. Fantastic!

Step #8: Implement the Lambda for SERP API Integration

Take a look at this snippet from the previous code:

lambda_function = _lambda.Function(
    self,
    LAMBDA_FUNCTION_NAME,
    function_name=LAMBDA_FUNCTION_NAME,
    runtime=_lambda.Runtime.PYTHON_3_12,  # Python runtime
    architecture=_lambda.Architecture.ARM_64,
    code=_lambda.Code.from_asset("lambda"),  # Read the Lambda code from the "lambda/" folder
    handler=f"{LAMBDA_FUNCTION_NAME}.lambda_handler",
    timeout=Duration.seconds(120),
    role=lambda_role,  # Attach IAM role
    environment={"LOG_LEVEL": "DEBUG", "ACTION_GROUP": f"{ACTION_GROUP_NAME}"},
)

The code=_lambda.Code.from_asset("lambda") line specifies that the Lambda function code will be loaded from the lambda/ folder. Therefore, create a lambda/ folder in your project and add a file named serp_api_lambda.py inside it:
The serp_api_lambda.py file inside the lambda/ folder
The serp_api_lambda.py file needs to contain the implementation of the Lambda function used by the AI agent defined earlier. Implement this function to handle the integration with the Bright Data SERP API as follows:

import json
import logging
import os
import urllib.error
import urllib.parse
import urllib.request

import boto3

# ----------------------------
# Logging configuration
# ----------------------------
log_level = os.environ.get("LOG_LEVEL", "INFO").strip().upper()
logging.basicConfig(
    format="[%(asctime)s] p%(process)s {%(filename)s:%(lineno)d} %(levelname)s - %(message)s"
)
logger = logging.getLogger(__name__)
logger.setLevel(log_level)

# ----------------------------
# AWS Region from environment
# ----------------------------
AWS_REGION = os.environ.get("AWS_REGION")
if not AWS_REGION:
    logger.warning("AWS_REGION environment variable not set; boto3 will use default region")

# ----------------------------
# Retrieve the secret object from AWS Secrets Manager
# ----------------------------
def get_secret_object(key: str) -> dict:
    """
    Get a secret value from AWS Secrets Manager.
    """
    session = boto3.session.Session()
    client = session.client(
        service_name='secretsmanager',
        region_name=AWS_REGION
    )

    try:
        get_secret_value_response = client.get_secret_value(
            SecretId=key
        )
    except Exception as e:
        logger.error(f"Could not get secret '{key}' from Secrets Manager: {e}")
        raise e

    secret = json.loads(get_secret_value_response["SecretString"])

    return secret


# Retrieve Bright Data credentials
bright_data_secret = get_secret_object("BRIGHT_DATA")
BRIGHT_DATA_API_KEY = bright_data_secret["BRIGHT_DATA_API_KEY"]
BRIGHT_DATA_SERP_API_ZONE = bright_data_secret["BRIGHT_DATA_SERP_API_ZONE"]

# ----------------------------
# SERP API Web Search
# ----------------------------
def serp_api_web_search(search_query: str) -> str:
    """
    Calls Bright Data SERP API to retrieve Google search results.
    """
    logger.info(f"Executing Bright Data SERP API search for search_query='{search_query}'")

    # Encode the query for URL
    encoded_query = urllib.parse.quote(search_query)
    # Build the Google URL to scrape the SERP from
    search_engine_url = f"https://www.google.com/search?q={encoded_query}"

    # Bright Data API request (docs: https://docs.brightdata.com/scraping-automation/serp-api/send-your-first-request)
    url = "https://api.brightdata.com/request"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {BRIGHT_DATA_API_KEY}"
    }
    data = {
        "zone": BRIGHT_DATA_SERP_API_ZONE,
        "url": search_engine_url,
        "format": "raw",
        "data_format": "markdown" # To get the SERP as an AI-ready Markdown document
    }

    payload = json.dumps(data).encode("utf-8")
    request = urllib.request.Request(url, data=payload, headers=headers)

    try:
        response = urllib.request.urlopen(request)
        response_data: str = response.read().decode("utf-8")
        logger.debug(f"Response from SERP API: {response_data}")
        return response_data
    except urllib.error.HTTPError as e:
        logger.error(f"Failed to call Bright Data SERP API. HTTP Error {e.code}: {e.reason}")
    except urllib.error.URLError as e:
        logger.error(f"Failed to call Bright Data SERP API. URL Error: {e.reason}")

    return ""


# ----------------------------
# Lambda handler
# ----------------------------
def lambda_handler(event, _):
    """
    AWS Lambda handler.
    Expects event with actionGroup, function, and optional parameters (including search_query).
    """
    logger.debug(f"lambda_handler called with event: {event}")

    action_group = event.get("actionGroup")
    function = event.get("function")
    parameters = event.get("parameters", [])

    # Extract search_query from parameters
    search_query = next(
        (param["value"] for param in parameters if param.get("name") == "search_query"),
        None,
    )
    logger.debug(f"Input search query: {search_query}")

    serp_page = serp_api_web_search(search_query) if search_query else ""
    logger.debug(f"Search query results: {serp_page}")

    # Prepare the Lambda response
    function_response_body = {"TEXT": {"body": serp_page}}

    action_response = {
        "actionGroup": action_group,
        "function": function,
        "functionResponse": {"responseBody": function_response_body},
    }

    response = {"response": action_response, "messageVersion": event.get("messageVersion")}

    logger.debug(f"lambda_handler response: {response}")

    return response

This Lambda function handles three main tasks:

  1. Retrieve API credentials securely: Fetches the Bright Data API key and SERP API zone from AWS Secrets Manager, so sensitive information is never hardcoded.
  2. Perform web searches via the SERP API: Encodes the search query, constructs the Google search URL, and sends a request to the Bright Data SERP API. The API returns the search results formatted as Markdown, which is an ideal data format for AI consumption.
  3. Respond to AWS Bedrock: Returns the results in a structured response that the AI agent can use.

Here we go! Your AWS Lambda function for connecting to the Bright Data SERP API has now been successfully defined.

Step #9: Deploy Your AWS CDK Application

Now that your CDK stack and its related Lambda function have been implemented, the final step is to deploy your stack to AWS. To do so, in your project folder, run the deploy command:

cdk deploy

When prompted to give permission to create the IAM role, type y to approve.

After a few moments, if everything works as expected, you should see an output like this:
The output of the “deploy” command

Next, go to the Amazon Bedrock console. Inside the “Agents” page, you should notice a ”web_search_agent” entry:
Note the “web_search_agent” entry

Open the agent, and you will see the deployed agent details:
The details page for the “web_search_agent”
Inspect it by pressing the “Edit in Agent Builder” button, and you will see precisely the same AI agent implemented in “How to Integrate Bright Data SERP API with AWS Bedrock.”

Finally, note that you can test the agent directly using the chat interface in the right corner. This is what you will do in the next and final step!

Step #10: Test the AI Agent

To test the web search and live data retrieval capabilities of your AI agent, try a prompt like:

"Give me the top 3 latest news articles on the US government shutdown"

(Note: This is just an example. You can test any prompt that requires fresh web search results.)

That is an ideal prompt because it asks for up-to-date information that the foundation model alone does not know. The agent will call the Lambda function integrated with the SERP API, retrieve the results, and process the data to generate a coherent response.

Run this prompt in the “Test Agent” section of your agent, and you should see an output similar to this:
Running the prompt in the AWS Amazon Bedrock console
Behind the scenes, the agent invokes the SERP API Lambda function, retrieves the latest Google search results for the “US government shutdown” query, and extracts the top relevant articles (along with their URLs). This is something a standard LLM, such as the configured Nova Lite, cannot do on its own.

In detail, this is the response generated by the agent:

The agent’s response
The selected news articles (and their URLs) match what you would manually find on Google for the “US government shutdown” query (at least as of the date the agent was tested):
The SERP for the “US government shutdown” search query
Now, anyone who has tried scraping Google search results knows how challenging it can be due to bot detection, IP bans, JavaScript rendering, and other challenges. The Bright Data SERP API handles all these issues for you, returning scraped SERPs in AI-optimized Markdown (or HTML, JSON, etc.) format.

To confirm that your agent actually called the SERP API Lambda function, click the “Show trace” button in the response box. In the “Trace step 1” section, scroll down to the group invocation log to inspect the output from the Lambda call:
The output of the SERP API Lambda function
This verifies that the Lambda function executed successfully and that the agent interacted with the SERP API as intended. You can also check the AWS CloudWatch logs for your Lambda function to confirm execution.

Time to take your agent further! Test prompts related to fact-checking, brand monitoring, market trend analysis, or other scenarios to see how it performs across different agentic and RAG use cases.
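You can also script such tests instead of using the console. The sketch below invokes the deployed agent through boto3’s bedrock-agent-runtime client; the helper names are illustrative, the agent ID comes from your cdk deploy outputs, and "TSTALIASID" is the built-in alias that targets the agent’s working draft (the same version the console test chat uses).

```python
import uuid


def build_invoke_kwargs(agent_id: str, prompt: str, alias_id: str = "TSTALIASID") -> dict:
    """Assemble the arguments for a bedrock-agent-runtime invoke_agent call."""
    return {
        "agentId": agent_id,
        "agentAliasId": alias_id,
        "sessionId": str(uuid.uuid4()),  # Any unique ID; reuse it to keep conversation context
        "inputText": prompt,
    }


def invoke_web_search_agent(agent_id: str, prompt: str) -> str:
    """Invoke the deployed agent and collect its streamed response."""
    import boto3  # Imported here so building kwargs stays dependency-free

    client = boto3.client("bedrock-agent-runtime")
    response = client.invoke_agent(**build_invoke_kwargs(agent_id, prompt))

    # The completion comes back as an event stream of chunks
    return "".join(
        event["chunk"]["bytes"].decode("utf-8")
        for event in response["completion"]
        if "chunk" in event
    )
```

For example, invoke_web_search_agent("<YOUR_AGENT_ID>", "Give me the top 3 latest news articles on the US government shutdown") should return the same kind of answer you saw in the console.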

Et voilà! You have successfully built an AWS Bedrock agent integrated with Bright Data’s SERP API using Python and the AWS CDK library. This AI agent is capable of retrieving up-to-date, reliable, and contextual web search data on demand.

Conclusion

In this blog post, you learned how to integrate Bright Data’s SERP API into an AWS Bedrock AI agent using an AWS CDK Python project. This workflow is perfect for developers aiming to build more context-aware AI agents on AWS.

To create even more sophisticated AI agents, explore Bright Data’s infrastructure for AI. This offers a suite of tools for retrieving, validating, and transforming live web data.

Create a Bright Data account today for free and start experimenting with our AI-ready web data solutions!

Antonello Zanini

Technical Writer

5.5 years experience

Antonello Zanini is a technical writer, editor, and software engineer with 5M+ views. Expert in technical content strategy, web development, and project management.

Expertise
Web Development Web Scraping AI Integration

Give AWS Bedrock Agents the Ability to Search the Web via Bright Data SERP API

Learn to supercharge AWS Bedrock agents by integrating Bright Data’s SERP API, enabling real-time web search and more accurate, up-to-date responses.
16 min read
AWS Bedrock Agents with SERP API blog image

In this tutorial, you will learn:

  • What an AWS Bedrock AI agent is and what it can do.
  • Why giving agents access to web search results via an AI-ready tool like Bright Data’s SERP API improves accuracy and context.
  • How to build an AWS Bedrock agent integrated with the SERP API step by step in the AWS console.
  • How to achieve the same result with code using AWS CDK.

Let’s dive in!

What Is an AWS Bedrock AI Agent?

In AWS Bedrock, an AI agent is a service that uses LLMs to automate multi-step tasks by breaking them down into logical steps. More specifically, an Amazon Bedrock AI agent is a system that takes advantage of foundation models, APIs, and data to understand user requests, gather relevant information, and complete tasks automatically.

AWS Bedrock agents support memory for task continuity, built-in security via Bedrock Guardrails, and can collaborate with other agents for complex workflows. Thanks to the AWS infrastructure, they are managed, scalable, and serverless, simplifying the process of building and deploying AI-powered applications.

Why AI Agents in Amazon Bedrock Benefit from Dynamic Web Search Context

LLMs are trained on large datasets that represent the knowledge available up to a specific point in time. This means that when an LLM is released, it only knows what was included in its training data, which quickly becomes outdated.

That is why AI models lack real-time awareness of current events and emerging knowledge. As a result, they may produce outdated, incorrect, or even hallucinated responses, which is a serious problem for AI agents.

This limitation can be overcome by giving your AI agent the ability to fetch fresh, trustworthy information in a RAG (Retrieval-Augmented Generation) setup. The idea is to equip your agent with a reliable tool that can perform web searches and retrieve verifiable data to support its reasoning, expand its knowledge, and ultimately deliver more accurate results.

One option would be to build a custom AWS Lambda function that scrapes SERPs (Search Engine Results Pages) and prepares the data for LLM ingestion. However, this approach is technically complex. It requires handling JavaScript rendering, CAPTCHA solving, and constantly changing site structures. Moreover, it is not easily scalable, as search engines like Google can quickly block your IPs after a few automated requests.

A much better solution is to use a top-rated SERP and web search API, such as Bright Data’s SERP API. This service returns search engine data while automatically managing proxies, unblocking, data parsing, and all other challenges.

By integrating Bright Data’s SERP API into AWS Bedrock via a Lambda function, your AI agent gains access to web search information—without any of the operational burden. Let’s see how!

How to Integrate Bright Data SERP API into Your AWS Bedrock AI Agent for Search-Contextualized Use Cases

In this step-by-step section, you’ll learn how to give your AWS Bedrock AI agent the ability to fetch real-time data from search engines using the Bright Data SERP API.

This integration enables your agent to deliver more contextual and up-to-date answers, including relevant links for further reading. The setup will be done visually in the AWS Bedrock console, with only a small amount of code required.

Follow the steps below to build an AWS Bedrock AI agent supercharged through SERP API!

Prerequisites

To follow along with this tutorial, you need:

  • An AWS account with access to Amazon Bedrock.
  • A Bright Data account (covered in Step #4).
  • Basic familiarity with Python.

Do not worry if you have not created a Bright Data account yet. You will be guided through that process later in the guide.

Step #1: Create an AWS Bedrock Agent

To create an AWS Bedrock AI agent, log in to your AWS account and open the Amazon Bedrock console by searching for the service in the search bar:

Reaching the Amazon Bedrock console

Next, in the Bedrock console, select “Agents” from the left-hand menu, then click the “Create agent” button:

Pressing the “Create agent” button

Give your agent a name and description, for example:

  • Name: web_search_agent
  • Description: “An AI agent that can connect to Bright Data’s SERP API to retrieve fresh web search context from search engines.”

Important: If you plan to use Amazon LLMs, you must use underscores (_) instead of hyphens (-) in the agent name, functions, etc. Using hyphens can cause a “Dependency resource: received model timeout/error exception from Bedrock. If this happens, try the request again.” error.

Filling out the “Create agent” modal

Click “Create” to finalize your AI agent. You should now be redirected to the “Agent builder” page:

Reaching the “Agent builder” page

Great! You have successfully created an AI agent in AWS Bedrock.

Step #2: Configure the AI Agent

Now that you have created an agent, you need to complete its setup by configuring a few options.

For the “Agent resource role” info, leave the default “Create and use a new service role” option. This automatically creates an AWS Identity and Access Management (IAM) role that the agent will assume:

Configuring the “Agent resource role” option

Next, click “Select model” and choose the LLM model to power your agent. In this example, we will use the “Nova Lite” model from Amazon (but any other model will do):

Selecting the “Nova Lite” model

After selecting the model, click “Apply” to confirm.

In the “Instructions for the Agent” section, provide clear and specific instructions that tell the agent what it should do. For this web search agent, you can enter something like:

You are an agent designed to handle use cases that require retrieving and processing up-to-date information. You can access current data, including news and search engine results, through web searches powered by Bright Data’s SERP API.

After completing the instructions, the final agent details section should look like this:

Editing the agent details

Complete the configuration by clicking “Save” in the top bar to store all agent details.

Note: AWS Bedrock offers many other configuration options. Explore them to adapt your agents to your specific use cases.

Step #3: Add the SERP API Action Group

Action groups allow agents to interact with external systems or APIs to gather information or perform actions. You will need to define one to integrate with Bright Data’s SERP API.

In the “Agent builder” page, scroll down to the “Action groups” section and click “Add”:

Pressing the “Add” button in “Action groups”

This will open a form to define the action group. Fill it out as follows:

  • Action group name: action_group_web_search (again, use underscores _, not hyphens -)
  • Description: “Call Bright Data’s SERP API to perform web searches and retrieve up-to-date information from search engines.”
  • Action group type: Select the “Define with function details” option.
  • Action group invocation: Select the “Quick create a new Lambda function” option, which sets up a basic Lambda function that the agent can call. In this case, the function will handle the logic to call Bright Data’s SERP API.

Filling out the action group form

Note: With the “Quick create a new Lambda function” option, Amazon Bedrock generates a Lambda function template for your agent. You can later modify this function in the Lambda console (we will do that in a later step).

Now, configure the function in the group. Scroll down to the “Action group function 1: serp_api” section and fill it out as follows:

  • Name: serp_api (again, prefer underscores).
  • Description: “Integrates with Bright Data’s SERP API to perform web searches.”
  • Parameters: Add a parameter named search_query of type string and mark it as required. This parameter will be passed to the Lambda function and represents the input for the Bright Data SERP API to retrieve web search context.

Filling out the “Action group function” form

Finally, click “Create” to complete the action group setup:

Pressing the “Create” button to add the action group

Lastly, click “Save” to save your agent configuration. Well done!
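When the agent calls this action group, Bedrock sends your Lambda an event in the “function details” format, with parameters as a list of name/type/value objects. The sketch below (the event values are hypothetical) shows that shape and how search_query can be extracted, mirroring the logic in the Lambda you will write in Step #6:

```python
# Hypothetical event, shaped like the payload Bedrock sends to an action group Lambda
sample_event = {
    "messageVersion": "1.0",
    "actionGroup": "action_group_web_search",
    "function": "serp_api",
    "parameters": [
        {"name": "search_query", "type": "string", "value": "latest AI news"}
    ],
}

def extract_search_query(event):
    """Pull the search_query parameter out of the agent event, if present."""
    return next(
        (p["value"] for p in event.get("parameters", []) if p.get("name") == "search_query"),
        None,
    )

print(extract_search_query(sample_event))  # → latest AI news
```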

Step #4: Set Up Your Bright Data Account

Now, it is time to set up your Bright Data account and configure the SERP API service. Note that you can follow either the official Bright Data documentation or the steps below.

If you do not already have an account, sign up for Bright Data. Otherwise, log in to your existing account. Once logged in, reach the “My Zones” section in the “Proxies & Scraping” page and check for a “SERP API” row in the table:

Note the “serp_api” row in the table

If you do not see a row for SERP API, it means you have not configured a zone yet. Scroll down and click “Create zone” under the “SERP API” section:

Configuring the SERP API zone

Create a SERP API zone, naming it something like serp_api (or any name you prefer). Keep track of the zone name, as you will need it to connect via API.

On the product page, toggle the “Activate” switch to enable the zone:

Activating the SERP API zone

Lastly, follow the official guide to generate your Bright Data API key. Store it in a safe place, as you will need it shortly.

This is it! You now have everything you need to use Bright Data’s SERP API in your AWS Bedrock AI agent.
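Before wiring the SERP API into Lambda, you may want to sanity-check your zone locally. Below is a minimal sketch (the helper names are not part of any SDK, and it assumes your credentials are exported as the BRIGHT_DATA_API_KEY and BRIGHT_DATA_SERP_API_ZONE environment variables) that builds and sends the same request the Lambda in Step #6 will send:

```python
import json
import os
import urllib.parse
import urllib.request

API_ENDPOINT = "https://api.brightdata.com/request"

def build_serp_request(search_query: str, zone: str, api_key: str) -> urllib.request.Request:
    """Build the POST request for Bright Data's SERP API."""
    search_url = f"https://www.google.com/search?q={urllib.parse.quote(search_query)}"
    payload = json.dumps({
        "zone": zone,
        "url": search_url,
        "format": "raw",
        "data_format": "markdown",  # AI-ready Markdown output
    }).encode("utf-8")
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    return urllib.request.Request(API_ENDPOINT, data=payload, headers=headers)

def run_search(search_query: str) -> str:
    """Send the request using credentials from the environment."""
    request = build_serp_request(
        search_query,
        os.environ["BRIGHT_DATA_SERP_API_ZONE"],
        os.environ["BRIGHT_DATA_API_KEY"],
    )
    with urllib.request.urlopen(request) as response:
        return response.read().decode("utf-8")
```

If the zone is active, calling run_search("pizza") should return the Google SERP as a Markdown document.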

Step #5: Store Secrets in AWS

In the previous step, you obtained sensitive information, such as your Bright Data API key and your SERP API zone name. Instead of hardcoding these values in your Lambda function code, store them securely in AWS Secrets Manager.

Search for “Secrets Manager” in the AWS search bar and open the service:

Pressing the “Store a new secret” button

Click the “Store a new secret” button and select the “Other type of secret” option. In the “Key/value pairs” section, add the following key-value pairs:

  • BRIGHT_DATA_API_KEY: Enter your Bright Data API key obtained earlier.
  • BRIGHT_DATA_SERP_API_ZONE: Enter your Bright Data SERP API zone name (e.g., serp_api).

Then, click “Next” and provide a secret name. For example, call it BRIGHT_DATA:

Giving the secret the name

This secret will store a JSON object containing the fields BRIGHT_DATA_API_KEY and BRIGHT_DATA_SERP_API_ZONE.

Follow the remaining prompts to finish storing the secret. After completion, your secret will look like this:

The BRIGHT_DATA secret

Cool! You will access these secrets in your Python Lambda function to securely connect to the Bright Data SERP API, which you will set up in the next step.
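In code, Secrets Manager returns the secret as a JSON string in the SecretString field of the get_secret_value response. Parsing it is a one-liner, as this small sketch with placeholder values shows:

```python
import json

# Placeholder SecretString, shaped like the BRIGHT_DATA secret created above
secret_string = '{"BRIGHT_DATA_API_KEY": "<your-api-key>", "BRIGHT_DATA_SERP_API_ZONE": "serp_api"}'

secret = json.loads(secret_string)
api_key = secret["BRIGHT_DATA_API_KEY"]
zone = secret["BRIGHT_DATA_SERP_API_ZONE"]
print(zone)  # → serp_api
```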

Step #6: Create the Lambda Function to Call the SERP API

You have all the building blocks to define the Lambda function associated with the action group created in Step #3. That function will contain the Python code to call Bright Data’s SERP API and retrieve web search data.

To create the Lambda, search for “Lambda” in the AWS search bar and open the Amazon Lambda console. You will notice that a function has already been created automatically in Step #3 by AWS Bedrock:

The “action_group_serp_api_web_search_XXXX” function

Click on the function named action_group_serp_api_web_search_XXXX to open its overview page:

The Lambda function overview page

On this page, scroll down to the Code tab, where you will find a built-in Visual Studio Code editor for editing your Lambda logic:

The online IDE to edit the Lambda logic

Replace the contents of the dummy_lambda.py file with the following code:

import json
import logging
import os
import urllib.error
import urllib.parse
import urllib.request

import boto3

# ----------------------------
# Logging configuration
# ----------------------------
log_level = os.environ.get("LOG_LEVEL", "INFO").strip().upper()
logging.basicConfig(
    format="[%(asctime)s] p%(process)s {%(filename)s:%(lineno)d} %(levelname)s - %(message)s"
)
logger = logging.getLogger(__name__)
logger.setLevel(log_level)

# ----------------------------
# AWS Region from environment
# ----------------------------
AWS_REGION = os.environ.get("AWS_REGION")
if not AWS_REGION:
    logger.warning("AWS_REGION environment variable not set; boto3 will use default region")

# ----------------------------
# Retrieve the secret object from AWS Secrets Manager
# ----------------------------
def get_secret_object(key: str) -> dict:
    """
    Get a secret value from AWS Secrets Manager.
    """
    session = boto3.session.Session()
    client = session.client(
        service_name='secretsmanager',
        region_name=AWS_REGION
    )

    try:
        get_secret_value_response = client.get_secret_value(
            SecretId=key
        )
    except Exception as e:
        logger.error(f"Could not get secret '{key}' from Secrets Manager: {e}")
        raise e

    secret = json.loads(get_secret_value_response["SecretString"])

    return secret


# Retrieve Bright Data credentials
bright_data_secret = get_secret_object("BRIGHT_DATA")
BRIGHT_DATA_API_KEY = bright_data_secret["BRIGHT_DATA_API_KEY"]
BRIGHT_DATA_SERP_API_ZONE = bright_data_secret["BRIGHT_DATA_SERP_API_ZONE"]

# ----------------------------
# SERP API Web Search
# ----------------------------
def serp_api_web_search(search_query: str) -> str:
    """
    Calls Bright Data SERP API to retrieve Google search results.
    """
    logger.info(f"Executing Bright Data SERP API search for search_query='{search_query}'")

    # Encode the query for URL
    encoded_query = urllib.parse.quote(search_query)
    # Build the Google URL to scrape the SERP from
    search_engine_url = f"https://www.google.com/search?q={encoded_query}"

    # Bright Data API request (docs: https://docs.brightdata.com/scraping-automation/serp-api/send-your-first-request)
    url = "https://api.brightdata.com/request"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {BRIGHT_DATA_API_KEY}"
    }
    data = {
        "zone": BRIGHT_DATA_SERP_API_ZONE,
        "url": search_engine_url,
        "format": "raw",
        "data_format": "markdown" # To get the SERP as an AI-ready Markdown document
    }

    payload = json.dumps(data).encode("utf-8")
    request = urllib.request.Request(url, data=payload, headers=headers)

    try:
        response = urllib.request.urlopen(request)
        response_data: str = response.read().decode("utf-8")
        logger.debug(f"Response from SERP API: {response_data}")
        return response_data
    except urllib.error.HTTPError as e:
        logger.error(f"Failed to call Bright Data SERP API. HTTP Error {e.code}: {e.reason}")
    except urllib.error.URLError as e:
        logger.error(f"Failed to call Bright Data SERP API. URL Error: {e.reason}")

    return ""


# ----------------------------
# Lambda handler
# ----------------------------
def lambda_handler(event, _):
    """
    AWS Lambda handler.
    Expects event with actionGroup, function, and optional parameters including search_query.
    """
    logger.debug(f"lambda_handler called with event: {event}")

    action_group = event.get("actionGroup")
    function = event.get("function")
    parameters = event.get("parameters", [])

    # Extract search_query from parameters
    search_query = next(
        (param["value"] for param in parameters if param.get("name") == "search_query"),
        None,
    )
    logger.debug(f"Input search query: {search_query}")

    serp_page = serp_api_web_search(search_query) if search_query else ""
    logger.debug(f"Search query results: {serp_page}")

    # Prepare the Lambda response
    function_response_body = {"TEXT": {"body": serp_page}}

    action_response = {
        "actionGroup": action_group,
        "function": function,
        "functionResponse": {"responseBody": function_response_body},
    }

    response = {"response": action_response, "messageVersion": event.get("messageVersion")}

    logger.debug(f"lambda_handler response: {response}")

    return response

This snippet retrieves Bright Data API credentials from AWS Secrets Manager, calls the SERP API with a Google search query, and returns the search results as Markdown text. For more information on how to call the SERP API, refer to the docs.
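Before deploying, you can verify the handler’s response contract locally. The sketch below rebuilds the same envelope the handler returns (no AWS calls, and the SERP content is a stub), which is the structure Bedrock expects back from a “function details” Lambda:

```python
def build_action_response(event: dict, serp_page: str) -> dict:
    """Rebuild the response envelope the lambda_handler above returns."""
    return {
        "response": {
            "actionGroup": event.get("actionGroup"),
            "function": event.get("function"),
            "functionResponse": {"responseBody": {"TEXT": {"body": serp_page}}},
        },
        "messageVersion": event.get("messageVersion"),
    }

sample_event = {
    "actionGroup": "action_group_web_search",
    "function": "serp_api",
    "messageVersion": "1.0",
}
response = build_action_response(sample_event, "## Stubbed Markdown SERP")
print(response["response"]["function"])  # → serp_api
```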

Paste the above code into the online IDE:

Updating the Lambda's code

Then, press “Deploy” to update the Lambda function:

Clicking the “Deploy” button to update the Lambda code

If everything goes as expected, you will receive a message like “Successfully updated the function action_group_serp_api_web_search_XXXX.”

Note: The Lambda function is automatically configured with a resource-based policy that allows Amazon Bedrock to invoke it. Since this was handled by the “Quick create a new Lambda function” option earlier, there is no need to manually modify the IAM role.

Perfect! Your AWS Lambda function for connecting to the Bright Data SERP API is now complete.

Step #7: Configure Lambda Permissions

While your Lambda function does not require custom IAM permissions to be invoked, it does need access to the API keys stored in AWS Secrets Manager.

To configure that, on the Lambda function details page, go to the “Configuration” tab, then select the “Permissions” option:

Reaching the AWS Lambda “Permissions” page

Under the “Execution role” section, click the “Role name” link to open the IAM console:

Clicking the role name

On the role page, locate and select the attached permission policy:

Selecting the permission policy

Open the “JSON” view and then click “Edit” to prepare to update the permission policy:

Accessing the JSON view of the permission policy

Make sure the Statement array includes the following block:

{
  "Action": "secretsmanager:GetSecretValue",
  "Resource": [
    "arn:aws:secretsmanager:<YOUR_AWS_REGION>:<YOUR_AWS_ACCOUNT_ID>:secret:BRIGHT_DATA*"
  ],
  "Effect": "Allow",
  "Sid": "GetSecretsManagerSecret"
}

Replace <YOUR_AWS_REGION> and <YOUR_AWS_ACCOUNT_ID> with the correct values for your setup. Note the trailing wildcard in BRIGHT_DATA*: it is required because Secrets Manager appends a random six-character suffix to each secret’s ARN. This block grants the Lambda function permission to read the BRIGHT_DATA secret from Secrets Manager.

Then, click the “Next” button, and finally “Save changes.” As a result, you should see that your Lambda function also has Secrets Manager access permissions:

Note the “Secrets Manager” option

Fantastic! Your Lambda function is now properly configured and can be invoked by your AWS Bedrock AI agent with access to the Bright Data credentials stored in Secrets Manager.

Step #8: Finalize Your AI Agent

Go back to the “Agent builder” section and click “Save” one last time to apply all the changes you made earlier:

Saving the changes in your agent

Next, click “Prepare” to ready your agent for testing with the latest configuration.

Pressing the “Prepare” button

You should get a confirmation message like “Agent: web_search_agent was successfully prepared.”

Mission complete! Your Bright Data SERP API-powered AI agent is now fully implemented and ready to go.

Step #9: Test Your AI Agent

Your AWS Bedrock AI Agent has access to the serp_api group function, implemented by the lambda_handler Lambda function you defined in Step #6. In simpler terms, your agent can perform real-time web searches on Google (and potentially other search engines) via the Bright Data SERP API, retrieving and processing fresh online content dynamically.

To test this capability, let’s assume you want to fetch the latest news about Hurricane Melissa. Try prompting your agent with:

"Give me the top 3 latest news articles on Hurricane Melissa"

(Remember: This is just an example, and you can test any other prompt involving real-time web search results.)

Run this prompt in the “Test Agent” section of your agent, and you should see an output similar to this:

The agent execution on the given prompt

Behind the scenes, the agent invokes the SERP API Lambda, retrieves the latest Google Search results for “Hurricane Melissa,” and extracts the top relevant news items with their URLs. This is something a standard LLM, such as Nova Lite, cannot do on its own!

This is the specific text response generated by the agent (with URLs omitted):

Here are the top 3 latest news articles on Hurricane Melissa:
1. Jamaica braces for Hurricane Melissa, strongest storm of 2025 - BBC, 6 hours ago
2. Hurricane Melissa live updates, tracker—Jamaica faces "catastrophic" impact - Newsweek, 7 hours ago
3. Hurricane Melissa bears down on Jamaica and threatens to be the island's strongest recorded storm - AP News, 10 hours ago
Please visit the original sources for the full articles.

These results are not hallucinations! On the contrary, they match what you would find on Google when manually searching for “Hurricane Melissa” (as of the date the agent was tested):

The SERP for “melissa hurricane news”

Now, if you have ever tried scraping Google search results, you know how difficult it can be due to bot detection, IP bans, JavaScript rendering, and many other challenges.

The Bright Data SERP API solves all of that for you, returning scraped SERPs in AI-optimized Markdown format (which is especially valuable for LLM ingestion).

To confirm that your agent actually called the SERP API Lambda, click “Show trace”:

Following the “Show trace” link

In the “Trace step 1” section, scroll down to the group invocation log section to see the output from the function call:

The output of the SERP API Lambda function

This confirms that the Lambda function executed successfully and that the agent interacted with the SERP API as intended. You can also verify this by checking the AWS CloudWatch logs for your Lambda function.

Now, push your agent further! Try prompts related to fact-checking, brand monitoring, market trend analysis, or other scenarios to see how the agent performs across different use cases.

Et voilà! You just built an AWS Bedrock AI Agent integrated with Bright Data’s SERP API, capable of retrieving up-to-date, trusted, and contextual web search data on demand.

[Extra] Build an Amazon Bedrock Agent with Web Search Using AWS CDK

In the previous section, you learned how to define and implement an AI agent that integrates with the SERP API directly through the Amazon Bedrock console.

If you prefer a code-first approach, you can achieve the same result using the AWS Cloud Development Kit (AWS CDK). This method follows the same overall steps but manages everything locally within an AWS CDK project.

For detailed guidance, refer to the official AWS guide. You should also take a look at the GitHub repository supporting that tutorial. This codebase can be easily adapted to work with Bright Data’s SERP API.
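To give a flavor of what that looks like, here is a rough CDK (aws-cdk-lib v2, Python) sketch of the two resources this tutorial configured by hand: the SERP API Lambda and its permission to read the BRIGHT_DATA secret. The construct IDs, the lambda/ asset path, and the runtime version are assumptions, not values from the AWS guide:

```python
# Rough sketch, not a complete CDK project: the Bedrock agent itself would be
# defined on top of this (e.g., via the aws_bedrock CfnAgent L1 construct).
from aws_cdk import Duration, Stack
from aws_cdk import aws_lambda as _lambda
from aws_cdk import aws_secretsmanager as secretsmanager
from constructs import Construct

class WebSearchAgentStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Lambda running the SERP API code from Step #6 (placed in lambda/)
        serp_fn = _lambda.Function(
            self, "SerpApiFunction",
            runtime=_lambda.Runtime.PYTHON_3_12,
            handler="dummy_lambda.lambda_handler",
            code=_lambda.Code.from_asset("lambda"),
            timeout=Duration.seconds(60),
        )

        # Grant read access to the BRIGHT_DATA secret (replaces the manual
        # IAM policy edit from Step #7)
        secret = secretsmanager.Secret.from_secret_name_v2(
            self, "BrightDataSecret", "BRIGHT_DATA"
        )
        secret.grant_read(serp_fn)
```

The grant_read call generates the same secretsmanager:GetSecretValue policy statement you would otherwise write by hand.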

Conclusion

In this blog post, you saw how to integrate Bright Data’s SERP API into an AWS Bedrock AI agent. This workflow is ideal for anyone looking to build more powerful, context-aware AI agents in AWS.

To create even more advanced AI workflows, explore Bright Data’s infrastructure for AI. You will find a suite of tools for retrieving, validating, and transforming live web data.

Sign up for a free Bright Data account today and start experimenting with our AI-ready web data solutions!
