How to Call Custom Tools from Python Agent via Arcade

Arcade.dev Team
OCTOBER 27, 2025
6 MIN READ
THOUGHT LEADERSHIP

Python agents execute custom tools through Arcade's API to interact with external services, internal APIs, and business logic. This guide covers tool creation, agent integration, and production deployment.

Prerequisites

Before starting, ensure you have:

  • Python 3.10 or higher
  • Arcade account with API key
  • Virtual environment for Python dependencies

Install Arcade SDK

Install the core SDK for building custom tools:

pip install arcade-ai

For agent integrations using the Python client:

pip install arcadepy

Set your API key as an environment variable:

export ARCADE_API_KEY="your_api_key_here"

Get your API key from the Arcade quickstart guide.
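
As a quick sanity check, you can confirm the key is visible to your Python process before wiring up an agent. This is a minimal sketch; the arcadepy client is expected to pick up ARCADE_API_KEY from the environment, and you can also pass api_key explicitly.

import os

from arcadepy import Arcade

# Fail fast if the key is missing, rather than at the first tool call.
if not os.environ.get("ARCADE_API_KEY"):
    raise RuntimeError("ARCADE_API_KEY is not set")

client = Arcade()  # reads the key from the environment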

Create a Custom Toolkit

Generate Toolkit Structure

Create a new toolkit using the Arcade CLI:

arcade new my_toolkit
cd my_toolkit

This generates:

  • pyproject.toml with dependency configuration
  • tools/ directory for tool definitions
  • tests/ directory with test templates
  • evals/ directory for evaluation files
  • Makefile with development commands

Define Custom Tools

Create a Python file in my_toolkit/tools/:

from typing import Annotated
from arcade.sdk import tool

@tool
def process_data(
    data_source: Annotated[str, "URL or path to data source"],
    operation: Annotated[str, "Operation to perform: filter, transform, aggregate"],
    parameters: Annotated[dict, "Operation-specific parameters"]
) -> dict:
    """Process data from a source with specified operation."""
    # Replace perform_operation with your own business logic.
    result = perform_operation(data_source, operation, parameters)
    return {"status": "complete", "result": result}

The @tool decorator registers the function with Arcade. Type annotations define the schema for agent interaction.
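
As a rough illustration (same imports as the example above; exact schema rendering depends on the SDK version), a parameter with a default value is typically surfaced as optional, and the docstring becomes the tool's description:

from typing import Annotated
from arcade.sdk import tool

@tool
def summarize_text(
    text: Annotated[str, "Text to summarize"],
    max_sentences: Annotated[int, "Maximum number of sentences to keep"] = 3
) -> Annotated[str, "The shortened text"]:
    """Summarize text by keeping its first few sentences."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return ". ".join(sentences[:max_sentences]) + "."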

Tools with API Integration

Build tools that call external APIs:

from typing import Annotated
from arcade.sdk import tool, ToolContext
import httpx

@tool
async def fetch_api_data(
    context: ToolContext,
    endpoint: Annotated[str, "API endpoint path"],
    method: Annotated[str, "HTTP method: GET, POST, PUT, DELETE"] = "GET",
    payload: Annotated[dict | None, "Request payload"] = None
) -> dict:
    """Execute API request with authentication."""
    api_key = context.get_secret("API_KEY")
    base_url = context.get_secret("API_BASE_URL")

    async with httpx.AsyncClient() as client:
        response = await client.request(
            method=method,
            url=f"{base_url}{endpoint}",
            headers={"Authorization": f"Bearer {api_key}"},
            json=payload
        )
        response.raise_for_status()
        return response.json()

The ToolContext provides access to secrets and logging without exposing credentials to agents.

Run Tools Locally

Start Local Worker

Run the toolkit worker for development:

arcade serve --reload

Options:

  • --reload: Auto-restart on code changes
  • --port 8002: Specify port (default: 8002)
  • --host 127.0.0.1: Specify host

The worker exposes tools at http://localhost:8002.

Register with Arcade Engine

Connect your local worker to Arcade:

  1. Expose local worker using ngrok, Tailscale, or Cloudflare Tunnel
  2. Add worker in Arcade Dashboard
  3. Configure worker ID and secret
  4. Enable worker for your account

Your custom tools now appear in the Arcade catalog alongside hosted tools.

Execute Custom Tools from Python Agents

Direct Execution

Call tools using the Arcade Python client:

from arcadepy import Arcade

client = Arcade()

response = client.tools.execute(
    tool_name="MyToolkit.ProcessData",
    input={
        "data_source": "https://example.com/data.csv",
        "operation": "filter",
        "parameters": {"column": "status", "value": "active"}
    },
    user_id="user@example.com"
)

print(response.output.value)

Tool names follow the pattern: {ToolkitName}.{FunctionName}.
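
The function name is converted from snake_case to PascalCase in the catalog, so process_data becomes ProcessData. A small helper (illustrative only, not part of the SDK) makes the mapping explicit:

def catalog_name(toolkit: str, function_name: str) -> str:
    """Build the catalog name, e.g. ("MyToolkit", "process_data") -> "MyToolkit.ProcessData"."""
    pascal = "".join(part.capitalize() for part in function_name.split("_"))
    return f"{toolkit}.{pascal}"

print(catalog_name("MyToolkit", "process_data"))  # MyToolkit.ProcessData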

Async Execution

Use AsyncArcade for non-blocking operations:

from arcadepy import AsyncArcade
import asyncio

async def execute_tool():
    client = AsyncArcade()

    response = await client.tools.execute(
        tool_name="MyToolkit.FetchApiData",
        input={
            "endpoint": "/v1/users",
            "method": "GET"
        },
        user_id="user@example.com"
    )

    return response.output.value

result = asyncio.run(execute_tool())

Async execution prevents blocking when tools perform network requests or database operations.

List Available Tools

Query available tools in a toolkit:

from arcadepy import Arcade

client = Arcade()

tools = client.tools.list(toolkit="my_toolkit", limit=50)

for tool in tools.items:
    print(f"Name: {tool.name}")
    print(f"Description: {tool.description}")
    print(f"Parameters: {tool.parameters}\n")

This returns complete tool schemas including parameter types and descriptions.

Add Authentication to Tools

OAuth Integration

Build tools that require user authorization:

from typing import Annotated
from arcade.sdk import tool, ToolContext
from arcade.sdk.auth import OAuth2
import httpx

@tool(
    requires_auth=OAuth2(
        id="github",
        scopes=["repo", "user"]
    )
)
async def manage_repository(
    context: ToolContext,
    repo: Annotated[str, "Repository name in format owner/repo"],
    action: Annotated[str, "Action: star, fork, watch"],
) -> dict:
    """Manage GitHub repository on behalf of user."""
    token = context.authorization.token
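    # Only the "star" action is shown below; branch on `action` to cover fork and watch.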

    async with httpx.AsyncClient() as client:
        response = await client.put(
            f"https://api.github.com/user/starred/{repo}",
            headers={
                "Authorization": f"Bearer {token}",
                "Accept": "application/vnd.github.v3+json"
            }
        )
        response.raise_for_status()
        return {"status": "success", "action": action}

Arcade handles the OAuth flow. Tokens are injected at runtime through ToolContext.

Handle Authorization in Agents

Check authorization status before tool execution:

from arcadepy import Arcade

client = Arcade()
user_id = "user@example.com"

# Request authorization
auth_response = client.tools.authorize(
    tool_name="MyToolkit.ManageRepository",
    user_id=user_id
)

if auth_response.status != "completed":
    print(f"Authorize here: {auth_response.url}")
    client.auth.wait_for_completion(auth_response)

# Execute tool after authorization
response = client.tools.execute(
    tool_name="MyToolkit.ManageRepository",
    input={
        "repo": "arcadeai/arcade-ai",
        "action": "star"
    },
    user_id=user_id
)

More details in the authorized tool calling guide.

Integrate with Agent Frameworks

OpenAI Agents

Use custom tools with OpenAI Agents framework:

from agents import Agent, Runner
from arcadepy import AsyncArcade
from agents_arcade import get_arcade_tools

async def run_agent():
    client = AsyncArcade()

    # Load custom toolkit
    tools = await get_arcade_tools(
        client,
        toolkits=["my_toolkit"]
    )

    agent = Agent(
        name="Data Processing Agent",
        instructions="Process data using available tools.",
        model="gpt-4o-mini",
        tools=tools
    )

    result = await Runner.run(
        starting_agent=agent,
        input="Filter active users from the dataset",
        context={"user_id": "user@example.com"}
    )

    print(result.final_output)

import asyncio
asyncio.run(run_agent())

Learn more in the OpenAI Agents integration guide.

LangChain Integration

Connect custom tools to LangChain agents:

from langchain_arcade import ArcadeToolManager

manager = ArcadeToolManager(api_key="your_api_key")

# Load custom toolkit
tools = manager.get_tools(toolkits=["my_toolkit"])

# Use with LangChain agent
from langchain import hub
from langchain.agents import create_react_agent, AgentExecutor
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")
prompt = hub.pull("hwchase17/react")  # standard ReAct prompt from LangChain Hub
agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)

response = agent_executor.invoke({
    "input": "Process the latest data from the API"
})

More details in the LangChain integration guide.

Advanced Tool Patterns

Error Handling

Implement robust error handling in custom tools:

from typing import Annotated
from arcade.sdk import tool, ToolContext
from arcade.sdk.errors import RetryableToolError, ToolError
import httpx

@tool
async def resilient_api_call(
    context: ToolContext,
    endpoint: Annotated[str, "API endpoint"]
) -> dict:
    """API call with retry logic."""

    try:
        async with httpx.AsyncClient() as client:
            response = await client.get(endpoint)
            response.raise_for_status()
            return response.json()

    except httpx.HTTPStatusError as e:
        if e.response.status_code >= 500:
            raise RetryableToolError(
                f"Server error: {e.response.status_code}",
                retry_after_ms=5000
            )
        else:
            raise ToolError(f"Client error: {e.response.status_code}")

    except httpx.RequestError as e:
        raise RetryableToolError(f"Network error: {str(e)}")

Learn more about error handling.

Progress Reporting

Track progress for long-running operations:

from arcade.sdk import tool, ToolContext
from typing import Annotated

@tool
async def process_large_file(
    context: ToolContext,
    file_url: Annotated[str, "File URL"]
) -> dict:
    """Process large file with progress updates."""

    await context.report_progress(0, "Starting download")
    data = await download_file(file_url)

    await context.report_progress(30, "Processing records")
    results = await process_records(data)

    await context.report_progress(80, "Generating report")
    report = generate_report(results)

    await context.report_progress(100, "Complete")
    return report

Batch Execution

Execute multiple tool calls concurrently:

from arcadepy import AsyncArcade
import asyncio

async def batch_process():
    client = AsyncArcade()
    user_id = "user@example.com"

    # Fetch ten records concurrently on behalf of the same Arcade user.
    tasks = [
        client.tools.execute(
            tool_name="MyToolkit.FetchApiData",
            input={"endpoint": f"/v1/users/{record_id}"},
            user_id=user_id
        )
        for record_id in range(1, 11)
    ]

    results = await asyncio.gather(*tasks)
    return [r.output.value for r in results]

data = asyncio.run(batch_process())

Deploy to Production

Cloud Deployment

Deploy toolkits to Arcade Cloud using Arcade Deploy. First, create a worker.toml configuration:

[[worker]]
[worker.config]
id = "my-worker"
secret = "your_worker_secret"

[worker.local_source]
packages = ["./my_toolkit"]

Then deploy:

arcade deploy

Arcade handles hosting, load balancing, and monitoring.

Self-Hosted Deployment

Run Arcade Engine with custom tools in your infrastructure:

# docker-compose.yml
version: '3.8'
services:
  arcade-engine:
    image: ghcr.io/arcadeai/engine:latest
    environment:
      - ARCADE_API_KEY=${ARCADE_API_KEY}
    ports:
      - "9099:9099"

  custom-worker:
    build: ./my_toolkit
    environment:
      - ARCADE_WORKER_SECRET=${WORKER_SECRET}
    ports:
      - "8002:8002"

Learn more about local deployment.

Test and Evaluate Tools

Write Evaluations

Create test suites for custom tools:

# evals/eval_my_toolkit.py
import pytest
from arcade.sdk.eval import ToolEvaluation

@pytest.mark.asyncio
async def test_process_data():
    evaluation = ToolEvaluation(
        tool_name="MyToolkit.ProcessData",
        inputs={
            "data_source": "test_data.csv",
            "operation": "filter",
            "parameters": {"column": "status", "value": "active"}
        },
        expected_output_type="dict"
    )

    result = await evaluation.run()
    assert result.success
    assert "result" in result.output

Run evaluations before deployment:

arcade evals run

More information in the evaluation guide.

Test with CLI Chat

Test tools interactively:

arcade chat --toolkit my_toolkit

This starts an interactive session where you can test tool execution with natural language prompts.

Best Practices

Tool Design

  • Use descriptive, action-oriented names
  • Provide detailed parameter descriptions
  • Return structured data (dictionaries, lists)
  • Validate inputs early (a sketch follows this list)
  • Handle edge cases explicitly
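
A minimal sketch of these points, assuming the arcade.sdk imports used earlier: validate first, fail with a clear error, and return a structured result.

from typing import Annotated
from arcade.sdk import tool
from arcade.sdk.errors import ToolError

ALLOWED_OPERATIONS = {"filter", "transform", "aggregate"}

@tool
def validated_process(
    data_source: Annotated[str, "URL or path to data source"],
    operation: Annotated[str, "Operation: filter, transform, aggregate"]
) -> dict:
    """Process data, rejecting bad input before doing any work."""
    if not data_source:
        raise ToolError("data_source must not be empty")
    if operation not in ALLOWED_OPERATIONS:
        raise ToolError(
            f"Unsupported operation '{operation}'; "
            f"expected one of {sorted(ALLOWED_OPERATIONS)}"
        )
    # ... perform the operation, then return structured output
    return {"status": "complete", "operation": operation, "source": data_source}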

Security

  • Store API keys in environment variables
  • Use context.get_secret() for credentials
  • Never log authentication tokens
  • Validate user permissions before privileged operations
  • Implement rate limiting for expensive operations

Performance

  • Use async/await for I/O operations
  • Implement caching for frequently accessed data (see the sketch after this list)
  • Return early when possible
  • Use connection pooling for databases
  • Batch API calls when supported
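
For example, a tool that is called repeatedly with the same arguments can keep a small in-process cache. This is a sketch only: expensive_fetch stands in for your own slow I/O call, and a shared cache such as Redis is a better fit when multiple workers are involved.

import asyncio
import time
from typing import Annotated, Any
from arcade.sdk import tool

_CACHE: dict[str, tuple[float, Any]] = {}
_TTL_SECONDS = 60.0

async def expensive_fetch(key: str) -> str:
    # Placeholder for a slow network or database call.
    await asyncio.sleep(1)
    return f"data-for-{key}"

@tool
async def cached_lookup(
    key: Annotated[str, "Lookup key for frequently accessed data"]
) -> dict:
    """Return cached data when fresh, otherwise fetch and cache it."""
    now = time.monotonic()
    hit = _CACHE.get(key)
    if hit and now - hit[0] < _TTL_SECONDS:
        return {"value": hit[1], "cached": True}

    value = await expensive_fetch(key)
    _CACHE[key] = (now, value)
    return {"value": value, "cached": False}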

Error Messages

  • Return clear, actionable error messages
  • Include relevant context in errors
  • Use appropriate error types (RetryableToolError, ToolError)
  • Log errors for debugging

Monitoring and Debugging

Access execution logs in the Arcade Dashboard:

  • Tool invocation timestamps
  • Input parameters and outputs
  • Execution duration
  • Error traces

Logs are searchable by user ID, tool name, and timestamp for 30 days.

Resources

Custom tools enable Python agents to execute real-world actions through secure, authenticated integrations. The patterns in this guide provide the foundation for building production-ready tool integrations.
