Build Once, Deploy Anywhere

Define agents in Python. Deploy to any cloud with a single command.

3 Targets Β· Multi-Environment Β· CI/CD Ready Β· v0.4

The Problem with Cloud Agent Platforms

Every cloud has its own way of defining agents. AWS Bedrock wants you clicking through a web console. Google's Agent Development Kit (ADK) requires specific Python patterns. Each platform locks you in.

Bedsheet's solution: Define your agents once in standard Python using Bedsheet's API. Then run bedsheet generate to compile them into cloud-native artifacts for your chosen target.

🐳
Local
Docker + FastAPI
☁️
GCP
Terraform + ADK
πŸ”Ά
AWS
CDK + Bedrock

1. Installation

Install Bedsheet with CLI

# Using uv (recommended)
uv add bedsheet

# Or with pip
pip install bedsheet

Verify CLI Installation

bedsheet --help
Usage: bedsheet [OPTIONS] COMMAND [ARGS]...

  Bedsheet CLI - Build Once, Deploy Anywhere

Commands:
  init      Initialize a new Bedsheet project
  generate  Generate deployment artifacts
  validate  Validate bedsheet.yaml configuration
  deploy    Deploy to target platform
  version   Show version information

Target-Specific Prerequisites

| Target | Prerequisites |
|--------|---------------|
| Local  | Docker, Docker Compose |
| GCP    | Terraform, gcloud CLI, GCP project |
| AWS    | AWS CDK, AWS CLI, AWS account |
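Before generating for a target, it's worth confirming its tools are on your PATH. A quick sketch (tool names taken from the table above; check only the ones your target needs):

```shell
# Report which target prerequisites are installed
for tool in docker terraform gcloud cdk aws; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing"
  fi
done
```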

2. Quick Start

Let's deploy an agent in under 5 minutes:

Step 1: Initialize a Project

bedsheet init my-agent
cd my-agent
Created project: my-agent
β”œβ”€β”€ bedsheet.yaml      # Configuration
β”œβ”€β”€ pyproject.toml     # Dependencies
└── agents/            # Your agent code
    └── assistant.py   # Example agent

Step 2: Install Dependencies & Generate

# Install dependencies (using uv - recommended)
uv sync

# Or with pip
pip install -e .

# Generate deployment artifacts
uv run bedsheet generate --target local
Generated 6 files in deploy/local/
β”œβ”€β”€ Dockerfile
β”œβ”€β”€ docker-compose.yaml
β”œβ”€β”€ app.py
β”œβ”€β”€ Makefile
β”œβ”€β”€ pyproject.toml
└── .env.example

Step 3: Build & Run Locally

cd deploy/local
cp .env.example .env   # Add your ANTHROPIC_API_KEY
make build
make run
Starting my-agent...
Agent running at http://localhost:8000
Press Ctrl+C to stop
That's it!

Your agent is now running locally in Docker. To deploy to the cloud, just change the target: bedsheet generate --target gcp or --target aws.

3. Configuration

All deployment settings live in bedsheet.yaml:

# bedsheet.yaml - Your deployment configuration
name: investment-advisor

# Define your agents
agents:
  - name: advisor
    module: agents.advisor        # Python module path
    class_name: InvestmentAdvisor # Class name
    description: Provides investment recommendations

# Active deployment target
target: local

# Target-specific configuration
targets:
  local:
    port: 8080
    hot_reload: true

  gcp:
    project: my-gcp-project
    region: us-central1
    model: gemini-2.5-flash
    # Or use Claude via Vertex AI:
    # model: claude-sonnet-4-5@20250929

  aws:
    region: us-east-1
    lambda_memory: 512
    bedrock_model: anthropic.claude-sonnet-4-5-v2:0
    style: serverless  # or "bedrock_native", "containers"

# Multi-environment settings
environments:
  - dev
  - staging
  - prod

Configuration Reference

Agent Configuration

| Field | Required | Description |
|-------|----------|-------------|
| name | Yes | Agent identifier (used in URLs and resource names) |
| module | Yes | Python module path (e.g., myapp.agents.advisor) |
| class_name | Yes | Agent class name to instantiate |
| description | No | Human-readable description |

GCP Target Configuration

| Field | Default | Description |
|-------|---------|-------------|
| project | - | GCP project ID (required) |
| region | us-central1 | GCP region for Cloud Run |
| model | claude-sonnet-4-5@20250929 | Model via Vertex AI |

AWS Target Configuration

| Field | Default | Description |
|-------|---------|-------------|
| region | us-east-1 | AWS region |
| lambda_memory | 256 | Lambda memory in MB (128-10240) |
| bedrock_model | anthropic.claude-sonnet-4-5-v2:0 | Bedrock model ID |
| style | serverless | Deployment style |

Validating Configuration

bedsheet validate
βœ“ Configuration valid
  Name: investment-advisor
  Agents: 1
  Active target: local

4. Local Target (Docker)

The local target generates a Docker-based development environment with FastAPI.

Generate Local Artifacts

bedsheet generate --target local

Generated Files

deploy/local/
β”œβ”€β”€ docker-compose.yaml  # Container orchestration
β”œβ”€β”€ Dockerfile           # Multi-stage build (Node + Python)
β”œβ”€β”€ app.py               # FastAPI wrapper with SSE streaming
β”œβ”€β”€ pyproject.toml       # Dependencies
β”œβ”€β”€ Makefile             # Convenience commands
β”œβ”€β”€ .env.example         # Environment template
└── debug-ui/            # React Debug UI source
    β”œβ”€β”€ package.json
    └── src/             # React components

Running Locally

# Copy environment template
cp .env.example .env
# Edit .env with your API keys

# Start the agent
make run

# Or with Docker Compose directly
docker-compose up

Hot Reload

With hot_reload: true in your config, changes to your agent code are automatically detected:

# In one terminal
make run

# In another terminal, edit your agent
vim agents/advisor.py

# Changes are automatically picked up!
Local Development Workflow

Use the local target for development and testing. When ready for production, generate for GCP or AWS without changing any agent code.

Debug UI

The local target includes a built-in Debug UI for testing and debugging your agents. Access it by opening http://localhost:8000 in your browser.

Debug UI Preview

Disabling the Debug UI

For production-like testing without the UI, set the environment variable:

# In .env or docker-compose.yaml
BEDSHEET_DEBUG_UI=false

When disabled, the root endpoint (/) returns a JSON health check instead of the UI.

API Endpoints

| Endpoint | Method | Description |
|----------|--------|-------------|
| / | GET | Debug UI (or health check if disabled) |
| /health | GET | Health check with agent info |
| /invoke | POST | Invoke agent (batch response) |
| /invoke/stream | POST | Invoke agent (SSE streaming) |

Example: Streaming API Call

# Stream events via curl
curl -X POST http://localhost:8000/invoke/stream \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello!", "session_id": "optional-session-id"}'

# Output (Server-Sent Events):
# data: {"type": "session", "session_id": "abc-123"}
# data: {"type": "text_token", "token": "Hello"}
# data: {"type": "text_token", "token": "!"}
# data: {"type": "completion", "response": "Hello! How can I help?"}
# data: {"type": "done"}
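Because the stream is plain Server-Sent Events, a client only needs to split out the `data:` lines and decode each JSON payload. A minimal parser sketch (event shapes follow the example above; anything beyond those fields is an assumption):

```python
import json

def parse_sse_lines(lines):
    """Parse 'data: {...}' Server-Sent Event lines into event dicts."""
    events = []
    for line in lines:
        line = line.strip()
        if line.startswith("data: "):
            events.append(json.loads(line[len("data: "):]))
    return events

# Sample lines matching the curl output above
raw = [
    'data: {"type": "session", "session_id": "abc-123"}',
    'data: {"type": "text_token", "token": "Hello"}',
    'data: {"type": "done"}',
]
events = parse_sse_lines(raw)
tokens = [e["token"] for e in events if e["type"] == "text_token"]
print(tokens)  # ['Hello']
```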

Makefile Commands

| Command | Description |
|---------|-------------|
| make run | Start the agent locally |
| make build | Build Docker image |
| make test | Run tests |
| make logs | View container logs |
| make clean | Remove containers and images |

5. GCP Target (Terraform + ADK)

The GCP target generates Google Agent Development Kit (ADK) compatible code with Terraform infrastructure.

flowchart LR
    subgraph "Your Code"
        Agent[Bedsheet Agent]
    end
    subgraph "Generated"
        ADK[ADK agent.py]
        TF[Terraform]
        GHA[GitHub Actions]
    end
    subgraph "GCP"
        CR[Cloud Run]
        SM[Secret Manager]
        IAM[IAM]
    end
    Agent --> ADK
    Agent --> TF
    TF --> CR
    TF --> SM
    TF --> IAM
    GHA --> TF
    style Agent fill:#dbeafe,stroke:#0969da
    style ADK fill:#dcfce7,stroke:#1a7f37
    style TF fill:#f3e8ff,stroke:#8250df
    style CR fill:#fef3c7,stroke:#d97706

Generate GCP Artifacts

bedsheet generate --target gcp

Generated Files

deploy/gcp/
β”œβ”€β”€ agent/
β”‚   β”œβ”€β”€ agent.py          # ADK-compatible agent
β”‚   └── __init__.py
β”œβ”€β”€ terraform/
β”‚   β”œβ”€β”€ main.tf           # Cloud Run, IAM, Secrets
β”‚   β”œβ”€β”€ variables.tf      # Input variables
β”‚   β”œβ”€β”€ outputs.tf        # Service URL, etc.
β”‚   └── terraform.tfvars.example
β”œβ”€β”€ Dockerfile            # Container image
β”œβ”€β”€ cloudbuild.yaml       # Cloud Build config
β”œβ”€β”€ requirements.txt
β”œβ”€β”€ Makefile
β”œβ”€β”€ .env.example
└── .github/workflows/
    β”œβ”€β”€ ci.yaml           # Tests on PR
    └── deploy.yaml       # Deploy on push

Deploying to GCP

# 1. Set up Terraform variables
cd deploy/gcp/terraform
cp terraform.tfvars.example terraform.tfvars
# Edit terraform.tfvars with your project details

# 2. Initialize Terraform
terraform init

# 3. Plan the deployment
terraform plan

# 4. Apply (creates resources)
terraform apply
Apply complete! Resources: 5 added, 0 changed, 0 destroyed.

Outputs:
service_url = "https://my-agent-abc123-uc.a.run.app"

Generated ADK Agent Code

The generator creates ADK-compatible agent.py:

# agent/agent.py (generated)
from google.adk.agents import LlmAgent
from google.adk.tools import FunctionTool

# Tools extracted from your Bedsheet ActionGroups
def get_stock_price(symbol: str) -> dict:
    """Get current stock price."""
    # Your original tool implementation
    ...

get_stock_price_tool = FunctionTool(func=get_stock_price)

# Agent definition
root_agent = LlmAgent(
    name="InvestmentAdvisor",
    model="claude-sonnet-4-5@20250929",  # From bedsheet.yaml
    instruction="""You are an investment advisor...""",  # Your instruction
    tools=[get_stock_price_tool],
)
ADK Orchestration

For Supervisors with collaborators, the generator uses SequentialAgent or ParallelAgent from ADK to match your Bedsheet architecture.

Terraform Resources

| Resource | Purpose |
|----------|---------|
| google_cloud_run_v2_service | Runs your agent container |
| google_service_account | Service identity for the agent |
| google_secret_manager_secret | Stores API keys securely |
| google_cloud_run_v2_service_iam_member | IAM bindings |
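As a rough sketch of how those resources fit together in main.tf (resource names and attributes here are illustrative, not the exact generated output):

```hcl
# Illustrative shape of the generated Terraform, not the exact output
resource "google_service_account" "agent" {
  account_id   = "my-agent-sa"
  display_name = "Agent service identity"
}

resource "google_cloud_run_v2_service" "agent" {
  name     = "my-agent"
  location = var.region

  template {
    service_account = google_service_account.agent.email
    containers {
      image = var.image
    }
  }
}
```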

6. AWS Target (CDK + Bedrock)

The AWS target generates AWS CDK infrastructure with Bedrock agent integration.

flowchart LR
    subgraph "Your Code"
        Agent[Bedsheet Agent]
    end
    subgraph "Generated"
        CDK[AWS CDK]
        Lambda[Lambda Handler]
        Schema[OpenAPI Schema]
        GHA[GitHub Actions]
    end
    subgraph "AWS"
        BR[Bedrock Agent]
        LF[Lambda Functions]
        IAM[IAM Roles]
    end
    Agent --> CDK
    Agent --> Lambda
    Agent --> Schema
    CDK --> BR
    CDK --> LF
    CDK --> IAM
    GHA --> CDK
    style Agent fill:#dbeafe,stroke:#0969da
    style CDK fill:#dcfce7,stroke:#1a7f37
    style Lambda fill:#f3e8ff,stroke:#8250df
    style BR fill:#fef3c7,stroke:#d97706

Generate AWS Artifacts

bedsheet generate --target aws

Generated Files

deploy/aws/
β”œβ”€β”€ app.py                # CDK app entry point
β”œβ”€β”€ cdk.json              # CDK configuration
β”œβ”€β”€ stacks/
β”‚   β”œβ”€β”€ __init__.py
β”‚   └── agent_stack.py    # Bedrock + Lambda stack
β”œβ”€β”€ lambda/
β”‚   β”œβ”€β”€ handler.py        # Lambda handler (Powertools)
β”‚   β”œβ”€β”€ __init__.py
β”‚   └── requirements.txt
β”œβ”€β”€ schemas/
β”‚   └── openapi.yaml      # Action group schema
β”œβ”€β”€ requirements.txt      # CDK dependencies
β”œβ”€β”€ Makefile
β”œβ”€β”€ .env.example
└── .github/workflows/
    β”œβ”€β”€ ci.yaml
    └── deploy.yaml

Deploying to AWS

# 1. Install CDK dependencies
cd deploy/aws
pip install -r requirements.txt

# 2. Bootstrap CDK (first time only)
cdk bootstrap

# 3. Deploy
cdk deploy
βœ“ InvestmentAdvisorStack

Outputs:
InvestmentAdvisorStack.AgentId = ABC123DEF
InvestmentAdvisorStack.AgentAliasId = PROD456

Generated Lambda Handler

The generator creates Lambda handlers using AWS Powertools patterns:

# lambda/handler.py (generated)
from aws_lambda_powertools import Logger, Tracer
from aws_lambda_powertools.utilities.typing import LambdaContext

logger = Logger()
tracer = Tracer()

@logger.inject_lambda_context
@tracer.capture_lambda_handler
def handler(event: dict, context: LambdaContext) -> dict:
    """Handle Bedrock agent action group invocations."""
    action = event.get("actionGroup")
    api_path = event.get("apiPath")
    parameters = event.get("parameters", [])

    # Route to your tool implementations
    if api_path == "/get_stock_price":
        symbol = next(p["value"] for p in parameters if p["name"] == "symbol")
        result = get_stock_price(symbol)
        return format_response(result)

    return {"error": f"Unknown action: {api_path}"}
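The handler above calls a `format_response` helper that isn't shown. Bedrock action-group Lambdas must return results in a specific envelope, so a sketch of what such a helper typically looks like (the envelope shape follows Bedrock's documented action-group response format; the helper itself and its defaults are our assumption):

```python
import json

def format_response(result, action_group="StockTools",
                    api_path="/get_stock_price", http_method="POST"):
    """Wrap a tool result in the envelope Bedrock agents expect
    from an action-group Lambda (assumed helper, illustrative defaults)."""
    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": action_group,
            "apiPath": api_path,
            "httpMethod": http_method,
            "httpStatusCode": 200,
            "responseBody": {
                "application/json": {"body": json.dumps(result)}
            },
        },
    }

resp = format_response({"symbol": "AAPL", "price": 178.52})
print(resp["response"]["httpStatusCode"])  # 200
```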

Generated OpenAPI Schema

Your @action decorators are compiled into an OpenAPI schema for Bedrock:

# schemas/openapi.yaml (generated)
openapi: 3.0.0
info:
  title: InvestmentAdvisor Actions
  version: 1.0.0
paths:
  /get_stock_price:
    post:
      summary: Get current stock price
      description: Fetches real-time stock price for a symbol
      operationId: get_stock_price
      parameters:
        - name: symbol
          in: query
          required: true
          schema:
            type: string
            description: Stock ticker symbol (e.g., AAPL, GOOGL)
      responses:
        '200':
          description: Stock price data
          content:
            application/json:
              schema:
                type: object

CDK Stack Resources

| Resource | Purpose |
|----------|---------|
| bedrock.CfnAgent | Bedrock agent definition |
| bedrock.CfnAgentAlias | Versioned agent alias |
| lambda.Function | Action group handler |
| iam.Role | Agent and Lambda execution roles |
AWS Deployment Styles

The style option controls how your agent is deployed:

  • serverless (default) - Lambda + API Gateway
  • bedrock_native - Pure Bedrock agent
  • containers - ECS Fargate

7. Multi-Environment Deployment

Deploy the same agent to dev, staging, and production with environment-specific configurations.

flowchart LR
    subgraph "Environments"
        Dev[dev]
        Staging[staging]
        Prod[prod]
    end
    subgraph "GCP (Terraform Workspaces)"
        TFDev[workspace: dev]
        TFStaging[workspace: staging]
        TFProd[workspace: prod]
    end
    subgraph "AWS (CDK Contexts)"
        CDKDev[context: dev]
        CDKStaging[context: staging]
        CDKProd[context: prod]
    end
    Dev --> TFDev
    Dev --> CDKDev
    Staging --> TFStaging
    Staging --> CDKStaging
    Prod --> TFProd
    Prod --> CDKProd
    style Dev fill:#dcfce7,stroke:#1a7f37
    style Staging fill:#fef3c7,stroke:#d97706
    style Prod fill:#fee2e2,stroke:#dc2626

Environment Configuration

# bedsheet.yaml
name: investment-advisor

environments:
  - dev
  - staging
  - prod

targets:
  gcp:
    project: my-project
    region: us-central1
    # Environment-specific overrides handled by Terraform workspaces

  aws:
    region: us-east-1
    # Environment-specific overrides handled by CDK contexts

GCP: Terraform Workspaces

# Create workspaces
terraform workspace new dev
terraform workspace new staging
terraform workspace new prod

# Switch to dev and deploy
terraform workspace select dev
terraform apply -var-file=environments/dev.tfvars

# Switch to prod and deploy
terraform workspace select prod
terraform apply -var-file=environments/prod.tfvars

Environment-Specific Variables

# terraform/environments/dev.tfvars
min_instances = 0
max_instances = 2
memory        = "512Mi"
log_level     = "DEBUG"

# terraform/environments/prod.tfvars
min_instances = 2
max_instances = 10
memory        = "2Gi"
log_level     = "INFO"

AWS: CDK Contexts

# Deploy to dev
cdk deploy --context environment=dev

# Deploy to staging
cdk deploy --context environment=staging

# Deploy to prod
cdk deploy --context environment=prod

Context Configuration

# cdk.json
{
  "context": {
    "environments": {
      "dev": {
        "lambda_memory": 256,
        "log_level": "DEBUG"
      },
      "staging": {
        "lambda_memory": 512,
        "log_level": "INFO"
      },
      "prod": {
        "lambda_memory": 1024,
        "log_level": "WARN"
      }
    }
  }
}
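At synth time, the generated CDK app resolves the block matching `--context environment=<name>`. A minimal sketch of that lookup in plain Python, mirroring the cdk.json above (the actual generated app.py may differ):

```python
import json

# Context as it would appear in cdk.json (subset of the example above)
cdk_json = json.loads("""
{
  "context": {
    "environments": {
      "dev":  {"lambda_memory": 256,  "log_level": "DEBUG"},
      "prod": {"lambda_memory": 1024, "log_level": "WARN"}
    }
  }
}
""")

def resolve_environment(config, env_name):
    """Pick the settings block for --context environment=<env_name>."""
    envs = config["context"]["environments"]
    if env_name not in envs:
        raise KeyError(f"Unknown environment: {env_name}")
    return envs[env_name]

settings = resolve_environment(cdk_json, "dev")
print(settings["lambda_memory"])  # 256
```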
Why Separate Environments?

Development environments let you test changes safely. Staging mirrors production for final validation. Production serves real users. This separation prevents "it worked on my machine" problems.

8. GitHub Actions CI/CD

Bedsheet generates production-ready GitHub Actions workflows for automated testing and deployment.

Generated Workflows

CI Workflow

  • Runs on every pull request
  • Executes tests
  • Validates configuration
  • Lints code

Deploy Workflow

  • Runs on push to main
  • Deploys to dev automatically
  • Staging on approval
  • Prod with manual gate

GCP Deploy Workflow

# .github/workflows/deploy.yaml (generated)
name: Deploy to GCP

on:
  push:
    branches: [main]

jobs:
  deploy-dev:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write  # For Workload Identity Federation
    steps:
      - uses: actions/checkout@v4

      - uses: google-github-actions/auth@v2
        with:
          workload_identity_provider: ${{ secrets.WIF_PROVIDER }}
          service_account: ${{ secrets.WIF_SERVICE_ACCOUNT }}

      - uses: hashicorp/setup-terraform@v3

      - name: Deploy to dev
        working-directory: terraform
        run: |
          terraform init
          terraform workspace select dev
          terraform apply -auto-approve

  deploy-staging:
    needs: deploy-dev
    environment: staging  # Requires approval
    # ... similar steps with staging workspace

  deploy-prod:
    needs: deploy-staging
    environment: production  # Requires approval
    # ... similar steps with prod workspace

AWS Deploy Workflow

# .github/workflows/deploy.yaml (generated)
name: Deploy to AWS

on:
  push:
    branches: [main]

jobs:
  deploy-dev:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write  # For OIDC
    steps:
      - uses: actions/checkout@v4

      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
          aws-region: us-east-1

      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'

      - name: Install CDK
        run: npm install -g aws-cdk

      - name: Deploy to dev
        run: |
          pip install -r requirements.txt
          cdk deploy --context environment=dev --require-approval never

  deploy-staging:
    needs: deploy-dev
    environment: staging
    # ... similar steps with staging context

  deploy-prod:
    needs: deploy-staging
    environment: production
    # ... similar steps with prod context

Setting Up Authentication

Security: Use Workload Identity, Not Keys

Both workflows use keyless authentication (Workload Identity Federation for GCP, OIDC for AWS). Never store long-lived credentials in GitHub secrets.

GCP: Workload Identity Federation Setup

# Create the Workload Identity Pool
gcloud iam workload-identity-pools create "github-pool" \
  --location="global" \
  --display-name="GitHub Actions Pool"

# Create the provider
gcloud iam workload-identity-pools providers create-oidc "github-provider" \
  --location="global" \
  --workload-identity-pool="github-pool" \
  --issuer-uri="https://token.actions.githubusercontent.com" \
  --attribute-mapping="google.subject=assertion.sub,attribute.actor=assertion.actor,attribute.repository=assertion.repository"

# Allow GitHub to impersonate service account
gcloud iam service-accounts add-iam-policy-binding "your-sa@project.iam.gserviceaccount.com" \
  --role="roles/iam.workloadIdentityUser" \
  --member="principalSet://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/github-pool/attribute.repository/your-org/your-repo"

AWS: OIDC Provider Setup

# Create OIDC provider (one-time setup)
aws iam create-open-id-connect-provider \
  --url https://token.actions.githubusercontent.com \
  --client-id-list sts.amazonaws.com \
  --thumbprint-list 6938fd4d98bab03faadb97b34396831e3780aea1

# Create IAM role for GitHub Actions
# (See AWS documentation for the trust policy)
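The trust policy for that role generally federates on the GitHub OIDC provider and scopes the token's `sub` claim to your repository. A sketch of the common shape (the account ID, org, and repo are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
        },
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:your-org/your-repo:*"
        }
      }
    }
  ]
}
```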

Required GitHub Secrets

| Secret | Target | Description |
|--------|--------|-------------|
| WIF_PROVIDER | GCP | Workload Identity Pool provider |
| WIF_SERVICE_ACCOUNT | GCP | Service account email |
| AWS_ROLE_ARN | AWS | IAM role ARN for GitHub Actions |

9. Agent Introspection

Bedsheet extracts metadata from your agents at build time to generate accurate deployment artifacts.

What Gets Extracted

from bedsheet.deploy.introspect import extract_agent_metadata

# Your Bedsheet agent
agent = Agent(
    name="InvestmentAdvisor",
    instruction="You provide investment advice...",
    model_client=AnthropicClient(),
)

tools = ActionGroup(name="StockTools")

@tools.action(name="get_price", description="Get stock price")
async def get_price(symbol: str) -> dict:
    """Fetches current stock price."""
    ...

agent.add_action_group(tools)

# Extract metadata for deployment
metadata = extract_agent_metadata(agent)
print(metadata)
AgentMetadata(
    name='InvestmentAdvisor',
    instruction='You provide investment advice...',
    is_supervisor=False,
    tools=[
        ToolMetadata(
            name='get_price',
            description='Get stock price',
            parameters=[
                ParameterMetadata(name='symbol', type='str', required=True)
            ],
            return_type='dict'
        )
    ],
    collaborators=[]
)

Introspection Flow

flowchart LR
    Agent[Your Agent Code] --> Extract[extract_agent_metadata]
    Extract --> Meta[AgentMetadata]
    Meta --> Gen[Target Generator]
    Gen --> Files[Deployment Files]
    style Agent fill:#dbeafe,stroke:#0969da
    style Meta fill:#f3e8ff,stroke:#8250df
    style Files fill:#dcfce7,stroke:#1a7f37

Extracted Information

| Field | Source | Used For |
|-------|--------|----------|
| name | Agent.name | Resource naming, ADK agent name |
| instruction | Agent.instruction | ADK instruction, Bedrock prompt |
| tools | @action decorators | OpenAPI schema, Lambda handlers |
| collaborators | Supervisor.collaborators | Multi-agent orchestration |
| is_supervisor | Class type check | ADK SequentialAgent vs LlmAgent |
Type Hints Matter

Introspection uses Python type hints to generate accurate schemas. Always type your @action functions:

@tools.action(name="search", description="Search")
async def search(query: str, limit: int = 10) -> list[dict]:
    ...
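Under the hood this kind of extraction is standard Python introspection. A self-contained sketch of how signature and type-hint data can be pulled from a function like the one above (Bedsheet's actual extractor may work differently):

```python
import inspect
from typing import get_type_hints

async def search(query: str, limit: int = 10) -> list:
    ...

def describe_parameters(func):
    """Extract name/type/required info from a function signature."""
    sig = inspect.signature(func)
    hints = get_type_hints(func)
    params = []
    for name, param in sig.parameters.items():
        hint = hints.get(name)
        params.append({
            "name": name,
            # Use the type's short name when available (e.g. 'str', 'int')
            "type": getattr(hint, "__name__", str(hint)),
            # A parameter with no default is required
            "required": param.default is inspect.Parameter.empty,
        })
    return params

print(describe_parameters(search))
# [{'name': 'query', 'type': 'str', 'required': True},
#  {'name': 'limit', 'type': 'int', 'required': False}]
```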

10. Best Practices

Project Structure

Good: Organized

my-agent/
β”œβ”€β”€ bedsheet.yaml
β”œβ”€β”€ agents/
β”‚   β”œβ”€β”€ __init__.py
β”‚   └── advisor.py
β”œβ”€β”€ tools/
β”‚   β”œβ”€β”€ __init__.py
β”‚   └── stock_tools.py
└── tests/
    └── test_advisor.py

Bad: Messy

my-agent/
β”œβ”€β”€ config.yaml
β”œβ”€β”€ agent.py
β”œβ”€β”€ other_agent.py
β”œβ”€β”€ tool1.py
β”œβ”€β”€ tool2.py
β”œβ”€β”€ test.py
└── deploy_stuff/

Configuration

Keep Secrets Out of bedsheet.yaml

Never put API keys or credentials in configuration files. Use environment variables or secret managers.

Good: Reference secrets

targets:
  gcp:
    project: my-project
    # API key in Secret Manager

Bad: Hardcoded

targets:
  gcp:
    project: my-project
    api_key: sk-abc123...

Deployment

Always Validate Before Generating
bedsheet validate && bedsheet generate --target gcp

Use Dry Run for Safety

# See what would be generated without writing files
bedsheet generate --target aws --dry-run

Review Generated Code

Generated code is a starting point: read it, understand it, and customize it to your needs.

Testing

Test Locally First

Always use the local target for development. Only deploy to cloud when you're confident the agent works correctly.

# Development flow
bedsheet generate --target local
make run
# Test your agent...

# When ready for cloud
bedsheet generate --target gcp
terraform plan  # Review changes
terraform apply

Multi-Environment

Promotion Flow

Changes should flow: dev β†’ staging β†’ prod. Never deploy directly to production without staging validation.

End-to-End Tutorial: Investment Advisor

In this tutorial, you'll build a complete multi-agent investment advisor from scratch, test it locally, then deploy it to both Google Cloud and AWS.

What you'll build: A supervisor agent that coordinates a Market Analyst and News Researcher to provide comprehensive stock analysis.

flowchart TB
    subgraph "What We're Building"
        User[User Query] --> Advisor[Investment Advisor Supervisor]
        Advisor --> |parallel| MA[Market Analyst Agent]
        Advisor --> |parallel| NR[News Researcher Agent]
        MA --> |stock data| Advisor
        NR --> |news sentiment| Advisor
        Advisor --> Response[Synthesized Recommendation]
    end
    style User fill:#dbeafe,stroke:#0969da
    style Advisor fill:#f3e8ff,stroke:#8250df
    style MA fill:#dcfce7,stroke:#1a7f37
    style NR fill:#dcfce7,stroke:#1a7f37
    style Response fill:#fef3c7,stroke:#d97706

A. Build the Agent

Step A.1: Create Project Structure

# Initialize a new project (creates full scaffold)
bedsheet init investment-advisor
cd investment-advisor

# Add tools directory for our custom tools
mkdir tools
touch tools/__init__.py tools/stock_tools.py

# Add additional agent files
touch agents/analysts.py

The bedsheet init command creates a complete project scaffold:

Created project: investment-advisor
β”œβ”€β”€ bedsheet.yaml      # Configuration
β”œβ”€β”€ agents/            # Your agent code
β”‚   └── assistant.py   # Example agent (we'll replace this)
└── requirements.txt   # Dependencies

You can also initialize with a specific target:

# For GCP deployment (prompts for project ID)
bedsheet init investment-advisor --target gcp

# For AWS deployment (prompts for region)
bedsheet init investment-advisor --target aws

Step A.2: Define the Tools

First, create the tools our agents will use. These simulate real API calls:

# tools/stock_tools.py
"""Stock market tools for the investment advisor."""
import random
from datetime import datetime, timedelta

from bedsheet import ActionGroup

# Create action groups for different capabilities
market_tools = ActionGroup(
    name="MarketTools",
    description="Tools for fetching market data"
)

news_tools = ActionGroup(
    name="NewsTools",
    description="Tools for researching financial news"
)


@market_tools.action(
    name="get_stock_price",
    description="Get the current stock price and daily change for a ticker symbol"
)
async def get_stock_price(symbol: str) -> dict:
    """Fetch current stock price. In production, call a real API like Alpha Vantage."""
    # Simulated data - replace with real API call
    prices = {
        "AAPL": {"price": 178.52, "change": 2.34, "change_pct": 1.33},
        "GOOGL": {"price": 141.80, "change": -0.92, "change_pct": -0.64},
        "MSFT": {"price": 378.91, "change": 4.21, "change_pct": 1.12},
        "NVDA": {"price": 721.33, "change": 15.67, "change_pct": 2.22},
        "TSLA": {"price": 248.50, "change": -3.20, "change_pct": -1.27},
    }
    data = prices.get(symbol.upper(), {
        "price": round(random.uniform(50, 500), 2),
        "change": round(random.uniform(-10, 10), 2),
        "change_pct": round(random.uniform(-5, 5), 2),
    })
    return {
        "symbol": symbol.upper(),
        "currency": "USD",
        "timestamp": datetime.now().isoformat(),
        **data
    }


@market_tools.action(
    name="get_historical_prices",
    description="Get historical price data for technical analysis"
)
async def get_historical_prices(symbol: str, days: int = 30) -> dict:
    """Fetch historical prices for trend analysis."""
    base_price = 150.0
    prices = []
    for i in range(days):
        date = datetime.now() - timedelta(days=days - i)
        # Simulate price movement with trend
        price = base_price + (i * 0.5) + random.uniform(-5, 5)
        prices.append({
            "date": date.strftime("%Y-%m-%d"),
            "close": round(price, 2),
            "volume": random.randint(1000000, 50000000)
        })
    return {
        "symbol": symbol.upper(),
        "period": f"{days} days",
        "prices": prices,
        "trend": "bullish" if prices[-1]["close"] > prices[0]["close"] else "bearish"
    }


@market_tools.action(
    name="get_key_metrics",
    description="Get fundamental metrics like P/E ratio, market cap, etc."
)
async def get_key_metrics(symbol: str) -> dict:
    """Fetch key financial metrics."""
    return {
        "symbol": symbol.upper(),
        "pe_ratio": round(random.uniform(15, 40), 2),
        "market_cap_billions": round(random.uniform(100, 3000), 1),
        "dividend_yield": round(random.uniform(0, 3), 2),
        "52_week_high": round(random.uniform(150, 200), 2),
        "52_week_low": round(random.uniform(80, 140), 2),
        "avg_volume": f"{random.randint(10, 100)}M"
    }


@news_tools.action(
    name="search_news",
    description="Search for recent news articles about a company or topic"
)
async def search_news(query: str, max_results: int = 5) -> list:
    """Search financial news. In production, use a news API."""
    # Simulated news results
    headlines = [
        f"{query} Reports Strong Q4 Earnings, Beats Expectations",
        f"Analysts Upgrade {query} After Product Launch",
        f"{query} Expands Into New Markets, Stock Rises",
        f"Industry Report: {query} Leads in Innovation",
        f"{query} CEO Discusses Future Growth Strategy",
        f"Breaking: {query} Announces Strategic Partnership",
        f"Market Watch: {query} Shows Resilience Amid Volatility",
    ]
    return [
        {
            "title": random.choice(headlines),
            "source": random.choice(["Reuters", "Bloomberg", "CNBC", "WSJ"]),
            "date": (datetime.now() - timedelta(days=random.randint(0, 7))).strftime("%Y-%m-%d"),
            "sentiment": random.choice(["positive", "neutral", "positive", "negative"]),
            "relevance_score": round(random.uniform(0.7, 1.0), 2)
        }
        for _ in range(min(max_results, 5))
    ]


@news_tools.action(
    name="get_analyst_ratings",
    description="Get analyst ratings and price targets for a stock"
)
async def get_analyst_ratings(symbol: str) -> dict:
    """Fetch analyst consensus ratings."""
    return {
        "symbol": symbol.upper(),
        "consensus": random.choice(["Strong Buy", "Buy", "Hold"]),
        "num_analysts": random.randint(15, 40),
        "price_target_avg": round(random.uniform(150, 250), 2),
        "price_target_high": round(random.uniform(250, 350), 2),
        "price_target_low": round(random.uniform(100, 150), 2),
        "ratings_breakdown": {
            "strong_buy": random.randint(5, 15),
            "buy": random.randint(5, 15),
            "hold": random.randint(2, 10),
            "sell": random.randint(0, 3),
            "strong_sell": random.randint(0, 2)
        }
    }

Step A.3: Create the Specialist Agents

Now create the two specialist agents that do the actual analysis:

# agents/analysts.py
"""Specialist agents for market analysis and news research."""
from bedsheet import Agent
from bedsheet.llm import AnthropicClient

from tools.stock_tools import market_tools, news_tools


def create_market_analyst() -> Agent:
    """Create the Market Analyst agent."""
    agent = Agent(
        name="MarketAnalyst",
        instruction="""You are an expert market analyst specializing in technical and fundamental analysis.

Your responsibilities:
1. Analyze stock prices and trends using get_stock_price and get_historical_prices
2. Evaluate fundamental metrics using get_key_metrics
3. Identify patterns and provide data-driven insights

When analyzing a stock:
- Always fetch current price first
- Look at historical trends for context
- Consider key metrics for valuation
- Be specific with numbers and percentages
- Clearly state bullish or bearish outlook

Keep your analysis concise but data-rich. Focus on actionable insights.""",
        model_client=AnthropicClient(),
    )
    agent.add_action_group(market_tools)
    return agent


def create_news_researcher() -> Agent:
    """Create the News Researcher agent."""
    agent = Agent(
        name="NewsResearcher",
        instruction="""You are a financial news analyst who monitors market sentiment and news.

Your responsibilities:
1. Search for relevant news using search_news
2. Analyze sentiment and market implications
3. Check analyst ratings and price targets using get_analyst_ratings
4. Identify catalysts and risks from news

When researching a stock:
- Search for recent news about the company
- Analyze sentiment across sources
- Check what analysts are saying
- Highlight any significant events or catalysts
- Note any red flags or concerns

Provide a balanced view with both positive and negative factors.""",
        model_client=AnthropicClient(),
    )
    agent.add_action_group(news_tools)
    return agent

Step A.4: Create the Supervisor

Now create the Investment Advisor supervisor that coordinates the specialists:

# agents/advisor.py
"""Investment Advisor - the main supervisor agent."""
from bedsheet import Supervisor
from bedsheet.llm import AnthropicClient
from bedsheet.memory import InMemory

from agents.analysts import create_market_analyst, create_news_researcher


def create_investment_advisor() -> Supervisor:
    """Create the Investment Advisor supervisor."""
    # Create specialist agents
    market_analyst = create_market_analyst()
    news_researcher = create_news_researcher()

    # Create the supervisor
    advisor = Supervisor(
        name="InvestmentAdvisor",
        instruction="""You are a senior investment advisor who coordinates analysis from specialists.

## Your Team
- **MarketAnalyst**: Technical and fundamental stock analysis
- **NewsResearcher**: News sentiment and analyst ratings

## How to Handle Requests

For stock analysis requests:
1. Delegate to BOTH specialists IN PARALLEL:
   delegate(delegations=[
       {"agent_name": "MarketAnalyst", "task": "Analyze [STOCK] - price, trends, metrics"},
       {"agent_name": "NewsResearcher", "task": "Research [STOCK] - news, sentiment, ratings"}
   ])

2. Wait for both responses

3. Synthesize into a recommendation:
   - Overall rating (Strong Buy / Buy / Hold / Sell / Strong Sell)
   - Key data points from both analyses
   - Risk factors to consider
   - Suggested action with rationale

## Important Guidelines
- Always use parallel delegation for efficiency
- Never make up data - only use what your team provides
- Be balanced - present both opportunities and risks
- End with a clear, actionable recommendation
- Disclose that this is not financial advice""",
        model_client=AnthropicClient(),
        memory=InMemory(),
        collaborators=[market_analyst, news_researcher],
        collaboration_mode="supervisor",
    )
    return advisor


# Export for deployment introspection
advisor = create_investment_advisor()

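The module-level `advisor` export is what allows the generator to discover your agent. For illustration only (this is a hypothetical sketch, not Bedsheet's actual implementation), discovery can be as simple as importing the module and reading the exported object's attributes:

```python
import importlib
import sys
import types
from types import SimpleNamespace


def discover_agent(module_name: str, attr: str = "advisor") -> dict:
    """Import an agent module and summarize the exported agent.

    Hypothetical sketch: assumes the module exposes a module-level object
    (here `advisor`) with `name` and `collaborators` attributes.
    """
    module = importlib.import_module(module_name)
    agent = getattr(module, attr)
    return {
        "name": agent.name,
        "collaborators": [c.name for c in getattr(agent, "collaborators", [])],
    }


# Stand-in module so the sketch is self-contained:
demo = types.ModuleType("demo_agents")
demo.advisor = SimpleNamespace(
    name="InvestmentAdvisor",
    collaborators=[SimpleNamespace(name="MarketAnalyst"),
                   SimpleNamespace(name="NewsResearcher")],
)
sys.modules["demo_agents"] = demo

print(discover_agent("demo_agents"))
# {'name': 'InvestmentAdvisor', 'collaborators': ['MarketAnalyst', 'NewsResearcher']}
```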
Step A.5: Customize the Configuration

The bedsheet init command created a basic bedsheet.yaml. Now customize it to support all three deployment targets:

# bedsheet.yaml (customize the generated file)
name: investment-advisor
version: "1.0.0"
description: Multi-agent investment advisor with market analysis and news research

agents:
  - name: advisor
    module: agents.advisor
    class_name: InvestmentAdvisor
    description: Coordinates market analysis and news research for investment recommendations

target: local

environments:
  - dev
  - staging
  - prod

targets:
  local:
    port: 8080
    hot_reload: true

  gcp:
    project: your-gcp-project-id    # <-- Replace with your project
    region: us-central1
    model: claude-sonnet-4-5@20250929

  aws:
    region: us-east-1
    lambda_memory: 512
    bedrock_model: anthropic.claude-sonnet-4-5-v2:0
    style: serverless

Step A.6: Install Dependencies

The bedsheet init command already created pyproject.toml with the project's dependencies. Install them:

# Using uv (recommended)
uv sync

# Or with pip
pip install -e .

Step A.7: Test the Agent Locally (Python)

# test_advisor.py
"""Quick test script for the investment advisor."""
import asyncio
from agents.advisor import create_investment_advisor
from bedsheet.events import (
    CompletionEvent, DelegationEvent,
    CollaboratorStartEvent, CollaboratorCompleteEvent,
    ToolCallEvent
)


async def main():
    advisor = create_investment_advisor()

    print("=" * 60)
    print("Investment Advisor - Test Run")
    print("=" * 60)
    print()

    query = "Analyze NVDA stock and give me a recommendation"
    print(f"Query: {query}")
    print("-" * 60)

    async for event in advisor.invoke("test-session", query):
        if isinstance(event, DelegationEvent):
            agents = [d["agent_name"] for d in event.delegations]
            print(f"\nπŸ“‹ Delegating to: {', '.join(agents)}")

        elif isinstance(event, CollaboratorStartEvent):
            print(f"\nπŸ”„ [{event.agent_name}] Starting analysis...")

        elif isinstance(event, ToolCallEvent):
            print(f"   πŸ”§ Calling {event.tool_name}()")

        elif isinstance(event, CollaboratorCompleteEvent):
            print(f"βœ… [{event.agent_name}] Complete")

        elif isinstance(event, CompletionEvent):
            print("\n" + "=" * 60)
            print("RECOMMENDATION")
            print("=" * 60)
            print(event.response)


if __name__ == "__main__":
    asyncio.run(main())
# Run the test
export ANTHROPIC_API_KEY=your-key-here
python test_advisor.py
============================================================
Investment Advisor - Test Run
============================================================

Query: Analyze NVDA stock and give me a recommendation
------------------------------------------------------------

πŸ“‹ Delegating to: MarketAnalyst, NewsResearcher

πŸ”„ [MarketAnalyst] Starting analysis...
   πŸ”§ Calling get_stock_price()
   πŸ”§ Calling get_historical_prices()
   πŸ”§ Calling get_key_metrics()
βœ… [MarketAnalyst] Complete

πŸ”„ [NewsResearcher] Starting analysis...
   πŸ”§ Calling search_news()
   πŸ”§ Calling get_analyst_ratings()
βœ… [NewsResearcher] Complete

============================================================
RECOMMENDATION
============================================================
## NVDA Stock Analysis Summary

**Overall Rating: BUY** ⭐⭐⭐⭐

### Market Analysis
- Current Price: $721.33 (+2.22% today)
- 30-Day Trend: Bullish with strong momentum
- P/E Ratio: 28.5 (premium but justified by growth)
- Market Cap: $1.8 trillion

### News & Sentiment
- Recent headlines largely positive (4/5 positive sentiment)
- Analyst consensus: Strong Buy (32 analysts)
- Average price target: $850 (18% upside)

### Recommendation
NVDA shows strong technical momentum backed by positive fundamentals...

⚠️ This is not financial advice. Please consult a licensed advisor.

B. Test Locally with Docker

Now let's deploy our agent locally using Docker:

Step B.1: Generate Local Deployment

bedsheet generate --target local --output deploy/local
Generated 7 files for target: local

deploy/local/
β”œβ”€β”€ docker-compose.yaml
β”œβ”€β”€ Dockerfile
β”œβ”€β”€ app.py
β”œβ”€β”€ requirements.txt
β”œβ”€β”€ Makefile
β”œβ”€β”€ .env.example
└── .github/workflows/ci.yaml

Step B.2: Configure and Run

cd deploy/local

# Set up environment
cp .env.example .env
echo "ANTHROPIC_API_KEY=your-key-here" >> .env

# Start the agent
make run
Starting investment-advisor...
Building Docker image...
Starting containers...

βœ“ Agent running at http://localhost:8080
Press Ctrl+C to stop

Step B.3: Test the API

# In another terminal
curl -X POST http://localhost:8080/invoke \
  -H "Content-Type: application/json" \
  -d '{
    "session_id": "user-123",
    "input": "Analyze AAPL stock"
  }'
{ "response": "## AAPL Stock Analysis...", "session_id": "user-123", "events_count": 12 }
Hot Reload Enabled

With hot_reload: true, you can edit your agent code and see changes immediately without restarting Docker.

C. Deploy to Google Cloud

Now let's deploy to GCP using Terraform and Cloud Run:

Step C.1: Prerequisites

# Install required tools
brew install terraform  # or your package manager
brew install google-cloud-sdk

# Authenticate
gcloud auth login
gcloud auth application-default login

# Set your project
gcloud config set project your-gcp-project-id

Step C.2: Generate GCP Deployment

cd ../..  # Back to project root
bedsheet generate --target gcp --output deploy/gcp
Generated 13 files for target: gcp

deploy/gcp/
β”œβ”€β”€ agent/
β”‚   β”œβ”€β”€ agent.py                  # ADK-compatible code
β”‚   └── __init__.py
β”œβ”€β”€ terraform/
β”‚   β”œβ”€β”€ main.tf                   # Cloud Run, IAM, Secrets
β”‚   β”œβ”€β”€ variables.tf
β”‚   β”œβ”€β”€ outputs.tf
β”‚   └── terraform.tfvars.example
β”œβ”€β”€ Dockerfile
β”œβ”€β”€ cloudbuild.yaml
β”œβ”€β”€ requirements.txt
β”œβ”€β”€ Makefile
β”œβ”€β”€ .env.example
└── .github/workflows/
    β”œβ”€β”€ ci.yaml
    └── deploy.yaml

Step C.3: Review Generated ADK Code

Let's look at the generated agent.py:

# deploy/gcp/agent/agent.py (generated)
"""Investment Advisor - ADK-compatible agent."""
from google.adk.agents import LlmAgent, SequentialAgent
from google.adk.tools import FunctionTool

# === Market Tools ===
def get_stock_price(symbol: str) -> dict:
    """Get the current stock price and daily change."""
    # Implementation from your tools/stock_tools.py
    ...

def get_historical_prices(symbol: str, days: int = 30) -> dict:
    """Get historical price data for technical analysis."""
    ...

def get_key_metrics(symbol: str) -> dict:
    """Get key valuation and financial metrics."""
    ...

# === News Tools ===
def search_news(query: str, max_results: int = 5) -> list:
    """Search for recent news articles about a company."""
    ...

def get_analyst_ratings(symbol: str) -> dict:
    """Get analyst ratings and price targets."""
    ...

# === Create Tool Objects ===
market_tools = [
    FunctionTool(func=get_stock_price),
    FunctionTool(func=get_historical_prices),
    FunctionTool(func=get_key_metrics),
]

news_tools = [
    FunctionTool(func=search_news),
    FunctionTool(func=get_analyst_ratings),
]

# === Specialist Agents ===
market_analyst = LlmAgent(
    name="MarketAnalyst",
    model="claude-sonnet-4-5@20250929",
    instruction="""You are an expert market analyst...""",
    tools=market_tools,
)

news_researcher = LlmAgent(
    name="NewsResearcher",
    model="claude-sonnet-4-5@20250929",
    instruction="""You are a financial news analyst...""",
    tools=news_tools,
)

# === Supervisor (SequentialAgent runs the specialists in order) ===
root_agent = SequentialAgent(
    name="InvestmentAdvisor",
    sub_agents=[market_analyst, news_researcher],
)

Step C.4: Configure Terraform

cd deploy/gcp/terraform

# Copy and edit variables
cp terraform.tfvars.example terraform.tfvars
# terraform.tfvars
project_id = "your-gcp-project-id"
region     = "us-central1"

# Secrets (store in Secret Manager, not here!)
# anthropic_api_key is pulled from Secret Manager

Step C.5: Deploy Infrastructure

# Initialize Terraform
terraform init

# Preview what will be created
terraform plan
Terraform will perform the following actions:

  + google_cloud_run_v2_service.agent
  + google_service_account.agent_sa
  + google_secret_manager_secret.anthropic_key
  + google_cloud_run_v2_service_iam_member.invoker

Plan: 4 to add, 0 to change, 0 to destroy.
# Apply (creates resources)
terraform apply
Apply complete! Resources: 4 added, 0 changed, 0 destroyed.

Outputs:
service_url = "https://investment-advisor-abc123-uc.a.run.app"

Step C.6: Store Your API Key

# Add your Anthropic API key to Secret Manager
echo -n "sk-ant-your-key" | gcloud secrets versions add anthropic-api-key --data-file=-

Step C.7: Test the Deployed Agent

# Get the service URL
SERVICE_URL=$(terraform output -raw service_url)

# Test it
curl -X POST "$SERVICE_URL/invoke" \
  -H "Content-Type: application/json" \
  -d '{"session_id": "test", "input": "Analyze GOOGL stock"}'
{ "response": "## GOOGL Stock Analysis\n\n**Rating: BUY**\n\n...", "session_id": "test" }

Step C.8: Development with ADK Dev UI

For development and testing, you can use the ADK Dev UI, which provides a rich interactive interface:

# Run ADK Dev UI locally
cd ..  # From terraform/ back to deploy/gcp
make dev-ui-local
Starting ADK Dev UI locally...
Open http://localhost:8000 in your browser

The Dev UI provides a chat interface for interacting with your agent, a live view of events and tool calls, and session state inspection.

To deploy the Dev UI to Cloud Run (separate from production):

# Deploy with Dev UI to Cloud Run (creates separate service)
make dev-ui
Deploying to Cloud Run with ADK Dev UI...

This creates a SEPARATE service (myagent-dev) for development testing.
Your production deployment (myagent) is NOT affected.
Dev UI vs Production

make dev-ui creates a separate Cloud Run service (*-dev) with the ADK Dev UI enabled. Use make deploy-terraform for production deployments with full IaC control.

Multi-Environment Deployment

Use Terraform workspaces for dev/staging/prod:

terraform workspace new staging
terraform workspace select staging
terraform apply -var-file=environments/staging.tfvars

D. Deploy to AWS

Finally, let's deploy the same agent to AWS using CDK and Bedrock:

Step D.1: Prerequisites

# Install AWS CDK
npm install -g aws-cdk

# Configure AWS credentials
aws configure

# Verify access
aws sts get-caller-identity

Step D.2: Generate AWS Deployment

cd ../..  # Back to project root
bedsheet generate --target aws --output deploy/aws
Generated 13 files for target: aws

deploy/aws/
β”œβ”€β”€ app.py                        # CDK app entry
β”œβ”€β”€ cdk.json
β”œβ”€β”€ stacks/
β”‚   β”œβ”€β”€ __init__.py
β”‚   └── agent_stack.py            # Bedrock + Lambda
β”œβ”€β”€ lambda/
β”‚   β”œβ”€β”€ handler.py                # Action group handler
β”‚   β”œβ”€β”€ __init__.py
β”‚   └── requirements.txt
β”œβ”€β”€ schemas/
β”‚   └── openapi.yaml              # Action group schema
β”œβ”€β”€ requirements.txt
β”œβ”€β”€ Makefile
β”œβ”€β”€ .env.example
└── .github/workflows/
    β”œβ”€β”€ ci.yaml
    └── deploy.yaml

Step D.3: Review Generated CDK Stack

# deploy/aws/stacks/agent_stack.py (generated)
"""CDK Stack for Investment Advisor Bedrock Agent."""
from aws_cdk import (
    Stack,
    aws_bedrock as bedrock,
    aws_lambda as lambda_,
    aws_iam as iam,
    Duration,
)
from constructs import Construct


class InvestmentAdvisorStack(Stack):
    def __init__(self, scope: Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        # Lambda for action groups
        action_handler = lambda_.Function(
            self, "ActionHandler",
            runtime=lambda_.Runtime.PYTHON_3_11,
            handler="handler.handler",
            code=lambda_.Code.from_asset("lambda"),
            timeout=Duration.seconds(30),
            memory_size=512,
        )

        # Bedrock Agent
        agent = bedrock.CfnAgent(
            self, "InvestmentAdvisor",
            agent_name="investment-advisor",
            instruction="""You are a senior investment advisor...""",
            foundation_model="anthropic.claude-sonnet-4-5-v2:0",
            agent_resource_role_arn=self.create_agent_role().role_arn,
        )

        # Action Groups (tools); self.schema_bucket and the agent IAM role
        # helper are defined elsewhere in the generated stack
        bedrock.CfnAgentActionGroup(
            self, "MarketTools",
            agent_id=agent.attr_agent_id,
            action_group_name="MarketTools",
            action_group_executor=bedrock.CfnAgentActionGroup.ActionGroupExecutorProperty(
                lambda_=action_handler.function_arn
            ),
            api_schema=bedrock.CfnAgentActionGroup.APISchemaProperty(
                s3=bedrock.CfnAgentActionGroup.S3IdentifierProperty(
                    s3_bucket_name=self.schema_bucket.bucket_name,
                    s3_object_key="openapi.yaml"
                )
            ),
        )

Step D.4: Review Generated Lambda Handler

# deploy/aws/lambda/handler.py (generated)
"""Lambda handler for Bedrock action groups."""
from aws_lambda_powertools import Logger, Tracer
from aws_lambda_powertools.utilities.typing import LambdaContext

logger = Logger()
tracer = Tracer()

# Your tool implementations
def get_stock_price(symbol: str) -> dict:
    """Implementation from tools/stock_tools.py"""
    ...

def search_news(query: str, max_results: int = 5) -> list:
    """Implementation from tools/stock_tools.py"""
    ...


@logger.inject_lambda_context
@tracer.capture_lambda_handler
def handler(event: dict, context: LambdaContext) -> dict:
    """Route Bedrock action group requests to tool implementations."""
    api_path = event.get("apiPath")
    parameters = {p["name"]: p["value"] for p in event.get("parameters", [])}

    logger.info(f"Handling action: {api_path}")

    # Route to appropriate tool
    if api_path == "/get_stock_price":
        result = get_stock_price(parameters["symbol"])
    elif api_path == "/search_news":
        result = search_news(
            parameters["query"],
            int(parameters.get("max_results", 5))
        )
    # ... other routes ...
    else:
        return {"error": f"Unknown action: {api_path}"}

    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event["actionGroup"],
            "apiPath": api_path,
            "httpMethod": "POST",
            "httpStatusCode": 200,
            "responseBody": {"application/json": {"body": str(result)}}
        }
    }

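The routing logic above is easy to unit-test without AWS. A table-driven variant of the same dispatch (the event shape mirrors the fields the generated handler reads; the stub tool is hypothetical):

```python
def route(event: dict, tools: dict) -> dict:
    """Dispatch a Bedrock action-group event to a tool, mirroring the handler."""
    # Flatten the parameter list into a plain dict, as handler.py does
    params = {p["name"]: p["value"] for p in event.get("parameters", [])}
    fn = tools.get(event.get("apiPath"))
    if fn is None:
        return {"error": f"Unknown action: {event.get('apiPath')}"}
    return fn(**params)


# Stub tool standing in for the real implementation:
tools = {"/get_stock_price": lambda symbol: {"symbol": symbol, "price": 100.0}}

event = {"apiPath": "/get_stock_price",
         "parameters": [{"name": "symbol", "value": "NVDA"}]}
print(route(event, tools))  # {'symbol': 'NVDA', 'price': 100.0}
```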
Step D.5: Review Generated OpenAPI Schema

# deploy/aws/schemas/openapi.yaml (generated)
openapi: 3.0.0
info:
  title: Investment Advisor Actions
  version: 1.0.0
  description: Tools for market analysis and news research

paths:
  /get_stock_price:
    post:
      operationId: get_stock_price
      summary: Get current stock price and daily change
      description: Fetches real-time stock price data for a ticker symbol
      parameters:
        - name: symbol
          in: query
          required: true
          schema:
            type: string
            description: Stock ticker symbol (e.g., AAPL, GOOGL)
      responses:
        '200':
          description: Stock price data
          content:
            application/json:
              schema:
                type: object
                properties:
                  symbol:
                    type: string
                  price:
                    type: number
                  change:
                    type: number
                  change_pct:
                    type: number

  /search_news:
    post:
      operationId: search_news
      summary: Search for recent news articles
      description: Searches financial news sources for articles about a company
      parameters:
        - name: query
          in: query
          required: true
          schema:
            type: string
        - name: max_results
          in: query
          required: false
          schema:
            type: integer
            default: 5
      responses:
        '200':
          description: List of news articles
          content:
            application/json:
              schema:
                type: array
                items:
                  type: object

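A cheap consistency check to keep in your test suite: every path declared in openapi.yaml should have a matching apiPath route in handler.py, and vice versa. A sketch using the two paths shown above (extend the sets as you add tools):

```python
# Paths declared in schemas/openapi.yaml (the two shown above)
schema_paths = {"/get_stock_price", "/search_news"}

# apiPath values routed by lambda/handler.py
handled_paths = {"/get_stock_price", "/search_news"}

missing = schema_paths - handled_paths   # declared but unhandled
extra = handled_paths - schema_paths     # handled but undeclared
assert not missing and not extra, (missing, extra)
print("schema and handler agree")
```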
Step D.6: Deploy to AWS

cd deploy/aws

# Install CDK dependencies
pip install -r requirements.txt

# Bootstrap CDK (first time only)
cdk bootstrap

# Deploy
cdk deploy
InvestmentAdvisorStack: deploying...

βœ… InvestmentAdvisorStack

Outputs:
InvestmentAdvisorStack.AgentId = ABCD1234EF
InvestmentAdvisorStack.AgentAliasId = TSTALIASID

Step D.7: Test the Bedrock Agent

# Using AWS CLI
aws bedrock-agent-runtime invoke-agent \
  --agent-id ABCD1234EF \
  --agent-alias-id TSTALIASID \
  --session-id test-session \
  --input-text "Analyze MSFT stock and give me a recommendation"
{ "completion": "## MSFT Stock Analysis\n\n**Rating: STRONG BUY**\n\n### Technical Analysis\n- Current Price: $378.91 (+1.12%)\n- 30-Day Trend: Bullish\n...", "sessionId": "test-session" }

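The same invocation from Python via boto3, assuming configured AWS credentials (the agent and alias IDs are the placeholders from the deploy output). invoke_agent returns an event stream whose chunk events carry the response bytes, so the decoding is factored into a helper that can be tested offline:

```python
def collect_completion(event_stream) -> str:
    """Concatenate the text chunks from an invoke-agent event stream."""
    parts = []
    for event in event_stream:
        chunk = event.get("chunk")
        if chunk and "bytes" in chunk:
            parts.append(chunk["bytes"].decode("utf-8"))
    return "".join(parts)


def ask_agent(agent_id: str, alias_id: str, session_id: str, text: str) -> str:
    """Invoke a Bedrock agent and return the full completion text."""
    import boto3  # requires boto3 and AWS credentials

    client = boto3.client("bedrock-agent-runtime")
    response = client.invoke_agent(
        agentId=agent_id,
        agentAliasId=alias_id,
        sessionId=session_id,
        inputText=text,
    )
    return collect_completion(response["completion"])


# print(ask_agent("ABCD1234EF", "TSTALIASID", "test-session",
#                 "Analyze MSFT stock and give me a recommendation"))
```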
AWS Debug UI

The AWS target includes a Debug UI for testing and debugging your Bedrock agent locally. The Debug UI proxies requests to the Bedrock Agent Runtime API and provides real-time tracing.

# Start the Debug UI
cd debug-ui  # from deploy/aws
export BEDROCK_AGENT_ID=ABCD1234EF
export BEDROCK_AGENT_ALIAS=TSTALIASID
pip install -r requirements.txt
python server.py

# Open http://localhost:8080 in your browser
Debug UI Features
  • Chat Interface - Send messages and see streaming responses
  • Events Panel - Real-time trace events (thinking, tool calls, results)
  • Multi-Agent Tracing - See collaborator_start/complete for sub-agent calls
  • Collapsible Events - Click to expand full JSON details

Step D.8: Multi-Environment with CDK Contexts

# Deploy to different environments
cdk deploy --context environment=dev
cdk deploy --context environment=staging
cdk deploy --context environment=prod
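One common pattern for consuming that context value inside app.py is to key stack settings off it (illustrative sketch, not necessarily Bedsheet's exact output; the per-environment values are made up):

```python
# Hypothetical per-environment settings, selected via
# `cdk deploy --context environment=...`
ENV_CONFIG = {
    "dev":     {"lambda_memory": 512,  "alias": "dev"},
    "staging": {"lambda_memory": 1024, "alias": "staging"},
    "prod":    {"lambda_memory": 2048, "alias": "live"},
}


def env_config(environment: str) -> dict:
    """Return settings for the requested environment."""
    if environment not in ENV_CONFIG:
        raise ValueError(f"Unknown environment: {environment}")
    return ENV_CONFIG[environment]


# In app.py this would be driven by:
#   env = app.node.try_get_context("environment") or "dev"
print(env_config("staging"))  # {'lambda_memory': 1024, 'alias': 'staging'}
```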
Same Agent, Any Cloud

You've now deployed the exact same agent to local Docker, Google Cloud, and AWS! The Bedsheet CLI extracted your agent's architecture and generated cloud-native artifacts for each platform.

Tutorial Complete!

You've successfully built a multi-agent investment advisor and deployed it to:

βœ…
Local Docker
Development & Testing
βœ…
Google Cloud
Terraform + Cloud Run
βœ…
AWS
CDK + Bedrock

Key Takeaways

  • Write your agent once using Bedsheet's Python API
  • Use bedsheet generate to compile for any target
  • Generated code is production-ready with CI/CD, IaC, and best practices
  • Multi-environment support via Terraform workspaces (GCP) or CDK contexts (AWS)
  • Your agent's architecture is preserved across all platforms

Generated Files Reference

Local Target

File                  Purpose
docker-compose.yaml   Container orchestration
Dockerfile            Agent container image
app.py                FastAPI application
requirements.txt      Python dependencies
Makefile              Convenience commands
.env.example          Environment template

GCP Target

File                     Purpose
agent/agent.py           ADK-compatible agent code
terraform/main.tf        Cloud Run, IAM, Secrets
terraform/variables.tf   Input variables
terraform/outputs.tf     Service URL, etc.
Dockerfile               Container image
cloudbuild.yaml          Cloud Build config
.github/workflows/*      CI/CD pipelines

AWS Target

File                    Purpose
app.py                  CDK app entry point
stacks/agent_stack.py   Bedrock + Lambda stack
lambda/handler.py       Action group handler
schemas/openapi.yaml    Action group schema
cdk.json                CDK configuration
.github/workflows/*     CI/CD pipelines