LangGraph agents need more than pre-built integrations to solve real business problems. They require custom tools that connect to your internal APIs, databases, and proprietary systems. Arcade provides the infrastructure to build, serve, and call these custom tools from LangGraph agents with minimal overhead.
This guide covers the complete workflow from creating custom tools to calling them from LangGraph, including authentication handling, deployment strategies, and production patterns.
Prerequisites
Before you begin, ensure you have:
- Python 3.10 or higher installed
- An Arcade API key
- OpenAI API key for LangGraph examples
- Basic familiarity with async/await patterns in Python
Overview of the Arcade Tool Architecture
Arcade's architecture separates tool creation from tool execution through three core components:
Tool Development Kit (TDK): A Python library for creating tools using the @tool decorator. The TDK handles input/output schema generation, type validation, and error handling automatically.
Worker: A service that hosts and executes your tools. Workers run locally during development or deploy to Arcade's cloud, your VPC, or on-premises infrastructure.
Engine: Routes tool calls from AI agents to the appropriate worker, manages authentication, and handles authorization flows.
This separation means you write tools once and serve them to any agent framework, including LangGraph, without framework-specific modifications.
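To make that separation concrete, here is a hypothetical, framework-agnostic sketch (not Arcade's actual code) of what the Engine's role amounts to: it keeps a registry of tools announced by workers and dispatches calls by name, so the agent framework never imports a tool's implementation.

```python
# Hypothetical sketch of the Engine's dispatch role -- not Arcade's real code.
# Workers register tools by name; agents call tools through the engine only.
from typing import Any, Callable, Dict


class MiniEngine:
    def __init__(self) -> None:
        self._registry: Dict[str, Callable[..., Any]] = {}

    def register(self, tool_name: str, fn: Callable[..., Any]) -> None:
        """A worker announces a tool it can execute."""
        self._registry[tool_name] = fn

    def execute(self, tool_name: str, **kwargs: Any) -> Any:
        """An agent calls a tool by name; the engine routes it to a worker."""
        if tool_name not in self._registry:
            raise KeyError(f"Unknown tool: {tool_name}")
        return self._registry[tool_name](**kwargs)


engine = MiniEngine()
engine.register("Sales.CalculateRevenue", lambda units, price: units * price)

result = engine.execute("Sales.CalculateRevenue", units=10, price=5.0)
print(result)  # 50.0
```

Because the agent only holds a name and a parameter schema, the same registered tool can serve LangGraph, CrewAI, or any other framework without changes.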
Creating Custom Tools with the TDK
Setting Up Your Development Environment
Install the required packages to build and serve custom tools:
```bash
# Install Arcade CLI and Tool Development Kit
pip install arcade-ai

# Or install just the TDK if you're building tools only
pip install arcade-tdk
```
Creating Your First Custom Tool
The simplest way to create a custom tool uses Arcade's @tool decorator. Here's a basic example:
```python
from typing import Annotated

from arcade_tdk import tool


@tool
def calculate_revenue(
    units_sold: Annotated[int, "Number of units sold"],
    price_per_unit: Annotated[float, "Price per unit in dollars"],
) -> Annotated[float, "Total revenue calculated"]:
    """
    Calculate total revenue from units sold and price per unit.

    Examples:
        calculate_revenue(100, 49.99) -> 4999.00
        calculate_revenue(50, 29.99) -> 1499.50
    """
    return units_sold * price_per_unit
```
The @tool decorator automatically:
- Extracts the function name as the tool name
- Uses the docstring as the tool description
- Generates JSON schema from type annotations
- Validates input parameters at runtime
- Handles common errors and exceptions
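The schema generation relies on standard `Annotated` type hints, so you can see roughly what the decorator extracts yourself. The following is a simplified sketch of the mechanism, not the TDK's actual schema generator:

```python
# Simplified sketch of schema extraction from Annotated hints --
# illustrates the mechanism, not the TDK's actual generator.
import typing
from typing import Annotated, get_type_hints


def describe_params(fn) -> dict:
    """Map each parameter to its type name and Annotated description."""
    hints = get_type_hints(fn, include_extras=True)
    schema = {}
    for name, hint in hints.items():
        if name == "return":
            continue
        if typing.get_origin(hint) is Annotated:
            base, description = typing.get_args(hint)[:2]
            schema[name] = {"type": base.__name__, "description": description}
    return schema


def calculate_revenue(
    units_sold: Annotated[int, "Number of units sold"],
    price_per_unit: Annotated[float, "Price per unit in dollars"],
) -> Annotated[float, "Total revenue calculated"]:
    return units_sold * price_per_unit


print(describe_params(calculate_revenue))
```

This is why the descriptions live in the annotations rather than the docstring: they become per-parameter schema fields that the model sees when deciding how to call the tool.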
Building a Complete Toolkit
For production use, organize related tools into a toolkit. Use the Arcade CLI to scaffold a new toolkit project:
```bash
# Create a new toolkit project
uv tool run --from arcade-ai arcade new sales_analytics

# Navigate to the project
cd sales_analytics
```
This generates a complete project structure:
```text
sales_analytics/
├── arcade_sales_analytics/
│   ├── __init__.py
│   └── tools/
│       ├── __init__.py
│       └── operations.py
├── tests/
├── pyproject.toml
├── worker.toml
└── Makefile
```
Add your custom tools to arcade_sales_analytics/tools/operations.py:
```python
from typing import Annotated, Dict, List

import httpx

from arcade_tdk import tool


@tool
def fetch_sales_data(
    start_date: Annotated[str, "Start date in YYYY-MM-DD format"],
    end_date: Annotated[str, "End date in YYYY-MM-DD format"],
    region: Annotated[str, "Sales region code"],
) -> Annotated[List[Dict], "Sales records for the date range"]:
    """
    Fetch sales data from the internal CRM for a specific date range and region.

    This tool connects to your internal sales API and retrieves transaction
    records filtered by date range and geographic region.
    """
    # Your API integration logic here
    response = httpx.get(
        "https://internal-api.company.com/sales",
        params={
            "start": start_date,
            "end": end_date,
            "region": region,
        },
        timeout=30.0,
    )
    response.raise_for_status()
    return response.json()


@tool
def calculate_commission(
    sales_amount: Annotated[float, "Total sales amount in dollars"],
    commission_rate: Annotated[float, "Commission rate as decimal (e.g., 0.15 for 15%)"],
) -> Annotated[Dict, "Commission breakdown"]:
    """
    Calculate sales commission based on amount and rate.

    Returns the commission amount and net amount after commission.
    """
    commission = sales_amount * commission_rate
    net_amount = sales_amount - commission
    return {
        "gross_amount": sales_amount,
        "commission_rate": commission_rate,
        "commission": commission,
        "net_amount": net_amount,
    }
```
Update the package initialization to expose your tools:
```python
# arcade_sales_analytics/tools/__init__.py
from arcade_sales_analytics.tools.operations import (
    calculate_commission,
    fetch_sales_data,
)

__all__ = ["fetch_sales_data", "calculate_commission"]
```
Integrating Custom Tools with LangGraph
Installing LangChain Integration
Arcade provides a dedicated package for LangChain and LangGraph integration:
```bash
pip install langchain-arcade langchain-openai langgraph
```
Loading Custom Tools into LangGraph
The ArcadeToolManager provides methods to fetch your custom tools and convert them to LangGraph-compatible format:
```python
import os

from langchain_arcade import ArcadeToolManager
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent

# Initialize the Arcade tool manager
arcade_api_key = os.environ.get("ARCADE_API_KEY")
openai_api_key = os.environ.get("OPENAI_API_KEY")

tool_manager = ArcadeToolManager(api_key=arcade_api_key)

# Fetch your custom toolkit
tools = tool_manager.get_tools(toolkits=["sales_analytics"])

# Create the language model with tools bound
model = ChatOpenAI(model="gpt-4o", api_key=openai_api_key)
bound_model = model.bind_tools(tools)

# Set up memory for stateful conversations
memory = MemorySaver()

# Create a ReAct agent with your custom tools
graph = create_react_agent(
    model=bound_model,
    tools=tools,
    checkpointer=memory,
)
```
Calling Specific Custom Tools
You can also load individual tools instead of entire toolkits:
```python
from langchain_arcade import ArcadeToolManager

tool_manager = ArcadeToolManager(api_key=arcade_api_key)

# Fetch specific tools by name
tools = tool_manager.get_tools(
    tools=["SalesAnalytics.FetchSalesData", "SalesAnalytics.CalculateCommission"]
)

print(f"Loaded {len(tools)} custom tools")
for tool in tool_manager.tools:
    print(f"- {tool.name}: {tool.description}")
```
Running Your LangGraph Agent
Execute your agent with custom tools:
```python
config = {
    "configurable": {
        "thread_id": "sales-analysis-001",
        "user_id": "{arcade_user_id}",  # Required for authorization
    }
}

user_input = {
    "messages": [
        ("user", "Calculate my commission for $50,000 in sales at 15% rate")
    ]
}

# Stream the agent's response
for chunk in graph.stream(user_input, config, stream_mode="values"):
    chunk["messages"][-1].pretty_print()
```
Serving Custom Tools Locally
During development, run your custom tools locally while your LangGraph agent accesses them through the Arcade Engine.
Starting the Local Worker
Install your toolkit and start the worker:
```bash
# Install toolkit in development mode
make install

# Start the worker
arcade worker serve --reload
```
Your worker starts on http://localhost:8002. Visit http://localhost:8002/worker/health to verify it's running.
Exposing Your Local Worker
Use a tunneling service to make your local worker accessible to the Arcade Engine:
```bash
# Using ngrok
ngrok http 8002

# Using cloudflared
cloudflared tunnel --url http://localhost:8002

# Using tailscale funnel
tailscale funnel 8002
```
Registering Your Worker with Arcade Engine
Navigate to the Workers page in the Arcade dashboard and register your worker with the public URL from your tunneling service.
Your custom tools now appear in the Arcade Playground and are available to your LangGraph agents.
Deploying Custom Tools to Production
Using Arcade Deploy
Arcade Deploy handles infrastructure for hosting your custom tools in production. Configure deployment in worker.toml:
```toml
[[worker]]

[worker.config]
id = "sales-analytics-prod"
enabled = true
timeout = 30
retries = 3
secret = "${env:ARCADE_WORKER_SECRET}"

[worker.local_source]
packages = ["./sales_analytics"]
```
Deploy with a single command:
```bash
arcade deploy
```
Arcade Deploy:
- Builds your worker container
- Deploys to Arcade's cloud infrastructure
- Registers the worker with the Engine
- Provides a production URL
View deployment status:
```bash
arcade worker list
```
Output shows your deployed worker:
```text
┏━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━┓
┃ ID                ┃ Cloud Deployed ┃ Engine Registered ┃ Enabled ┃
┡━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━┩
│ sales-analytics...│ True           │ True              │ True    │
└───────────────────┴────────────────┴───────────────────┴─────────┘
```
Hybrid Deployment
For compliance requirements or private resource access, deploy workers in your infrastructure while using Arcade's cloud Engine:
```toml
[[worker]]

[worker.config]
id = "sales-analytics-hybrid"
secret = "${env:ARCADE_WORKER_SECRET}"

[worker.local_source]
packages = ["./sales_analytics"]
```
Run the worker in your environment:
```bash
arcade worker serve --host 0.0.0.0 --port 8002
```
Expose through your corporate VPN or private network and register the internal URL with Arcade Engine.
Benefits of hybrid deployment:
- Access private databases and APIs
- Meet data residency requirements
- Use custom dependencies or configurations
- Maintain data security within your infrastructure
Advanced Integration Patterns
Building Custom Authorization Flows
Some custom tools require user-specific authorization. Implement OAuth flows for your tools:
```python
from typing import Annotated, Dict

import httpx

from arcade_tdk import ToolContext, tool
from arcade_tdk.auth import OAuth2


@tool(
    requires_auth=OAuth2(
        provider_id="custom_crm",
        scopes=["read:sales", "read:customers"],
    )
)
def fetch_customer_data(
    context: ToolContext,
    customer_id: Annotated[str, "Customer identifier"],
) -> Annotated[Dict, "Customer record"]:
    """
    Fetch customer data from the CRM using user-specific credentials.

    This tool requires the user to authorize access to the CRM system.
    """
    if not context.authorization or not context.authorization.token:
        raise ValueError("User must authorize CRM access")

    # Your API call with the user-specific token
    response = httpx.get(
        f"https://crm.company.com/api/customers/{customer_id}",
        headers={"Authorization": f"Bearer {context.authorization.token}"},
    )
    response.raise_for_status()
    return response.json()
```
Handling Authorization in LangGraph
Create a custom LangGraph workflow with an authorization node that pauses the run until the user completes the OAuth flow:

```python
import os

from langchain_arcade import ArcadeToolManager
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import END, START, MessagesState, StateGraph
from langgraph.prebuilt import ToolNode

# Initialize tools with auth requirements
tool_manager = ArcadeToolManager(api_key=os.environ["ARCADE_API_KEY"])
tools = tool_manager.get_tools(toolkits=["sales_analytics"])
tool_node = ToolNode(tools)

model_with_tools = ChatOpenAI(model="gpt-4o").bind_tools(tools)

# Build the workflow graph
workflow = StateGraph(MessagesState)


def call_agent(state: MessagesState):
    response = model_with_tools.invoke(state["messages"])
    return {"messages": [response]}


def authorize(state: MessagesState, config):
    # Start the authorization flow for the pending tool call and block
    # until the user completes it
    user_id = config["configurable"]["user_id"]
    tool_name = state["messages"][-1].tool_calls[0]["name"]
    auth_response = tool_manager.authorize(tool_name, user_id)
    if auth_response.status != "completed":
        print(f"Visit the following URL to authorize: {auth_response.url}")
        tool_manager.wait_for_auth(auth_response.id)
    return {"messages": []}


def should_continue(state: MessagesState):
    last_message = state["messages"][-1]
    if last_message.tool_calls:
        tool_name = last_message.tool_calls[0]["name"]
        if tool_manager.requires_auth(tool_name):
            return "authorization"
        return "tools"
    return END


# Add nodes and edges
workflow.add_node("agent", call_agent)
workflow.add_node("authorization", authorize)
workflow.add_node("tools", tool_node)
workflow.add_edge(START, "agent")
workflow.add_conditional_edges("agent", should_continue, ["authorization", "tools", END])
workflow.add_edge("authorization", "tools")
workflow.add_edge("tools", "agent")

# Compile with memory
memory = MemorySaver()
graph = workflow.compile(checkpointer=memory)
```
Error Handling Patterns
Arcade's TDK provides structured error handling:
```python
from typing import Annotated, Dict

import httpx

from arcade_tdk import tool
from arcade_tdk.errors import FatalToolError, RetryableToolError


@tool
def fetch_data_with_validation(
    resource_id: Annotated[str, "Resource identifier"],
) -> Annotated[Dict, "Resource data"]:
    """
    Fetch resource data with validation and retry logic.
    """
    if not resource_id or len(resource_id) < 5:
        raise RetryableToolError(
            "Invalid resource ID format",
            additional_prompt_content="Please provide a valid resource ID (minimum 5 characters)",
        )

    try:
        response = httpx.get(f"https://api.example.com/resource/{resource_id}")
        response.raise_for_status()
        return response.json()
    except httpx.HTTPStatusError as e:
        if e.response.status_code == 404:
            raise RetryableToolError(
                f"Resource {resource_id} not found",
                additional_prompt_content="Please check the resource ID and try again",
            )
        raise FatalToolError(f"API error: {str(e)}")
```
The TDK automatically converts common exceptions (httpx, requests) into appropriate Arcade errors, minimizing boilerplate code.
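The classification logic in the example above generalizes: transient failures (throttling, server errors) are worth retrying, while most client errors are not. Here is a rough plain-Python illustration of that decision; the TDK's actual exception mapping may differ:

```python
# Rough illustration of retryable-vs-fatal classification by HTTP status.
# Plain Python; the TDK's actual exception mapping may differ.
def is_retryable(status_code: int) -> bool:
    """Throttling and server errors are transient; most 4xx are not."""
    if status_code == 429:          # rate limited: retry after backing off
        return True
    if 500 <= status_code < 600:    # server errors: often transient
        return True
    if status_code == 404:          # per the example above: ask for a new ID
        return True
    return False


print([code for code in (200, 404, 422, 429, 500, 503) if is_retryable(code)])
```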
Testing Custom Tools
Unit Testing Tools
Test your tools independently before integration:
```python
from arcade_sales_analytics.tools.operations import calculate_commission


def test_calculate_commission():
    result = calculate_commission(
        sales_amount=10000.0,
        commission_rate=0.15,
    )
    assert result["gross_amount"] == 10000.0
    assert result["commission"] == 1500.0
    assert result["net_amount"] == 8500.0
    assert result["commission_rate"] == 0.15


def test_calculate_commission_zero_rate():
    result = calculate_commission(
        sales_amount=5000.0,
        commission_rate=0.0,
    )
    assert result["commission"] == 0.0
    assert result["net_amount"] == 5000.0
```
Testing Tool Integration
Use Arcade's evaluation framework to test tool behavior with LLMs:
```python
from arcade_evals import ToolEval

eval_cases = [
    {
        "input": "Calculate commission for $50,000 at 15%",
        "expected_tool": "SalesAnalytics.CalculateCommission",
        "expected_params": {
            "sales_amount": 50000.0,
            "commission_rate": 0.15,
        },
    }
]

# Run evaluations
results = []
for case in eval_cases:
    eval_result = ToolEval(
        toolkit="sales_analytics",
        test_case=case,
    ).run()
    results.append(eval_result)

# Check results
for result in results:
    print(f"Test: {result.passed}")
    print(f"Tool called: {result.tool_name}")
    print(f"Parameters: {result.parameters}")
```
Production Considerations
Monitoring and Observability
Track tool execution metrics in production:
```python
import logging
import time
from typing import Annotated

from arcade_tdk import tool

logger = logging.getLogger(__name__)


@tool
def monitored_tool(
    param: Annotated[str, "Input parameter"],
) -> Annotated[str, "Result"]:
    """Tool with built-in monitoring."""
    start_time = time.time()
    try:
        # Your tool logic here
        result = process_data(param)

        # Log success metrics
        duration = time.time() - start_time
        logger.info(f"Tool executed successfully in {duration:.2f}s")
        return result
    except Exception as e:
        # Log failure metrics
        duration = time.time() - start_time
        logger.error(f"Tool failed after {duration:.2f}s: {str(e)}")
        raise
```
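Rather than repeating the timing boilerplate in every tool, you can factor it into a small decorator. This is a generic sketch; `monitored` is a hypothetical helper, not part of the TDK:

```python
# Generic timing/logging decorator -- a hypothetical helper, not part of the TDK.
import functools
import logging
import time

logger = logging.getLogger(__name__)


def monitored(fn):
    """Wrap a function with duration logging on success and failure."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            result = fn(*args, **kwargs)
            logger.info("%s succeeded in %.2fs", fn.__name__, time.perf_counter() - start)
            return result
        except Exception:
            logger.error("%s failed after %.2fs", fn.__name__, time.perf_counter() - start)
            raise
    return wrapper


@monitored
def double(x: int) -> int:
    return x * 2


print(double(21))  # 42
```

In a toolkit you would stack this beneath the @tool decorator so the wrapped function is what gets registered.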
Rate Limiting and Quotas
Implement rate limiting for external API calls:
```python
from typing import Annotated, Dict

import httpx

from arcade_tdk import tool
from arcade_tdk.errors import UpstreamRateLimitError


@tool
def rate_limited_api_call(
    query: Annotated[str, "Search query"],
) -> Annotated[Dict, "API response"]:
    """Call an external API with rate limit handling."""
    try:
        response = httpx.get(
            "https://api.external.com/search",
            params={"q": query},
            timeout=10.0,
        )
        response.raise_for_status()
        return response.json()
    except httpx.HTTPStatusError as e:
        if e.response.status_code == 429:
            retry_after = e.response.headers.get("Retry-After", "60")
            raise UpstreamRateLimitError(
                f"Rate limit exceeded. Retry after {retry_after} seconds"
            )
        raise
```
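On the calling side, the Retry-After value can drive a simple backoff loop. Here is a minimal sketch with an injected fetch callable so it can be exercised without a live API; the `fetch` and `sleep` parameters are illustrative, not an Arcade API:

```python
# Minimal retry loop honoring a Retry-After-style delay.
# `fetch` and `sleep` are injected callables for illustration, not Arcade APIs.
import time
from typing import Callable, Dict, Tuple


def call_with_retry(
    fetch: Callable[[], Tuple[int, Dict]],
    max_attempts: int = 3,
    sleep: Callable[[float], None] = time.sleep,
) -> Dict:
    """Call fetch(); on a 429 status, wait and retry up to max_attempts."""
    for attempt in range(1, max_attempts + 1):
        status, payload = fetch()
        if status != 429:
            return payload
        if attempt < max_attempts:
            sleep(float(payload.get("retry_after", 1)))
    raise RuntimeError("Rate limit not cleared after retries")


# Simulated endpoint: rate-limited once, then succeeds.
responses = iter([(429, {"retry_after": 0}), (200, {"result": "ok"})])
result = call_with_retry(lambda: next(responses), sleep=lambda s: None)
print(result)  # {'result': 'ok'}
```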
Scaling Worker Deployments
For high-traffic applications, deploy multiple workers:
```toml
# worker.toml
[[worker]]

[worker.config]
id = "sales-worker-1"
secret = "${env:WORKER_SECRET}"

[worker.local_source]
packages = ["./sales_analytics"]

[[worker]]

[worker.config]
id = "sales-worker-2"
secret = "${env:WORKER_SECRET}"

[worker.local_source]
packages = ["./sales_analytics"]
```
Deploy all workers with a single command:
```bash
arcade deploy
```
The Arcade Engine automatically load balances requests across available workers.
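Conceptually, this resembles round-robin dispatch across identical worker instances. A toy sketch of the idea (not the Engine's actual scheduler):

```python
# Toy round-robin dispatcher across identical workers --
# not the Engine's actual scheduling logic.
from itertools import cycle

workers = cycle(["sales-worker-1", "sales-worker-2"])

# Each incoming tool call goes to the next worker in rotation
assignments = [next(workers) for _ in range(4)]
print(assignments)
```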
Complete Working Example
Here's a complete example integrating custom tools with LangGraph:
```python
import os
from typing import Annotated

from langchain_arcade import ArcadeToolManager
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent

from arcade_tdk import tool


# Define custom tools
@tool
def get_quarterly_revenue(
    quarter: Annotated[str, "Quarter in format Q1, Q2, Q3, or Q4"],
    year: Annotated[int, "Year as 4-digit number"],
) -> Annotated[float, "Revenue for the quarter"]:
    """Get total revenue for a specific quarter and year."""
    # Simulated data - replace with an actual database query
    revenue_data = {
        ("Q1", 2024): 2500000.0,
        ("Q2", 2024): 2750000.0,
        ("Q3", 2024): 3100000.0,
        ("Q4", 2024): 3400000.0,
    }
    return revenue_data.get((quarter, year), 0.0)


@tool
def compare_quarters(
    quarter1: Annotated[str, "First quarter"],
    year1: Annotated[int, "Year for first quarter"],
    quarter2: Annotated[str, "Second quarter"],
    year2: Annotated[int, "Year for second quarter"],
) -> Annotated[dict, "Comparison results"]:
    """Compare revenue between two quarters."""
    rev1 = get_quarterly_revenue(quarter1, year1)
    rev2 = get_quarterly_revenue(quarter2, year2)
    difference = rev1 - rev2
    percent_change = (difference / rev2 * 100) if rev2 != 0 else 0
    return {
        f"{quarter1} {year1}": rev1,
        f"{quarter2} {year2}": rev2,
        "difference": difference,
        "percent_change": percent_change,
    }


# Initialize Arcade and load the served toolkit
arcade_api_key = os.environ.get("ARCADE_API_KEY")
openai_api_key = os.environ.get("OPENAI_API_KEY")

tool_manager = ArcadeToolManager(api_key=arcade_api_key)
tools = tool_manager.get_tools(toolkits=["revenue_analytics"])

# Create the LangGraph agent
model = ChatOpenAI(model="gpt-4o", api_key=openai_api_key)
bound_model = model.bind_tools(tools)
memory = MemorySaver()

graph = create_react_agent(
    model=bound_model,
    tools=tools,
    checkpointer=memory,
)

# Execute an agent query
config = {
    "configurable": {
        "thread_id": "revenue-analysis",
        "user_id": "analyst-001",
    }
}

user_query = {
    "messages": [
        ("user", "Compare Q3 2024 revenue to Q2 2024 and tell me the percent change")
    ]
}

# Stream results
for chunk in graph.stream(user_query, config, stream_mode="values"):
    chunk["messages"][-1].pretty_print()
```
Resources and Next Steps
Continue building with Arcade:
- Arcade Documentation - Complete platform documentation
- Tool Development Kit Reference - API reference for building tools
- Arcade GitHub Examples - Sample projects and integrations
- LangChain Integration Guide - Detailed LangChain integration patterns
- Arcade Toolkits Registry - Pre-built toolkits for common use cases
- Arcade CLI Documentation - Command-line tool reference
Your custom tools are now production-ready and callable from any LangGraph agent. The same tools work across other agent frameworks including CrewAI, OpenAI Agents, and Google ADK without modification.