How to Use MCP with LangGraph through Arcade

Arcade.dev Team
OCTOBER 16, 2025

Model Context Protocol (MCP) standardizes how AI models interact with tools and external systems. LangGraph enables building stateful, graph-based AI workflows. When combined through Arcade's authentication-first platform, developers can build production-ready AI agents that actually take actions—not just suggest them.

This guide shows you exactly how to integrate MCP with LangGraph using Arcade's infrastructure, solving the critical authentication challenges that prevent most AI projects from reaching production.

Why MCP Authentication Matters for LangGraph Agents

MCP was initially designed for local resources, requiring considerable technical knowledge to integrate with clients, and the vast majority of MCP servers today are built for single-user use, even hosted ones. This limitation creates a fundamental problem for LangGraph agents that need to:

  • Access user-specific data across multiple services
  • Handle OAuth flows for enterprise applications
  • Scale beyond single-user prototypes
  • Pass security reviews for production deployment

At a minimum, production MCP servers need to support HTTP transport, MCP authorization, and multi-user authorization. Without proper authentication, your LangGraph agents remain limited to local tools or insecure API key configurations.

Setting Up Arcade's MCP Integration with LangGraph

Prerequisites

Before starting, ensure you have:

  • An Arcade API key
  • Python 3.8+ or Node.js 16+
  • LangGraph and required dependencies installed
  • Access to the tools you want to integrate (Gmail, Slack, GitHub, etc.)

Installation

For Python projects:

pip install arcadepy arcade-mcp langchain-mcp-adapters langchain-openai langgraph

For JavaScript/TypeScript projects:

npm install @arcadeai/arcadejs @langchain/core @langchain/langgraph @langchain/openai @langchain/mcp-adapters

Environment Configuration

Set up your environment variables:

export ARCADE_API_KEY="your_arcade_api_key"
export OPENAI_API_KEY="your_openai_api_key"  # Or your preferred LLM provider

Implementing MCP Tools in LangGraph

Basic Integration Pattern

Arcade offers helpers to convert its tool definitions into Zod schemas, which is essential since LangGraph's JavaScript SDK defines tool schemas with Zod. The toZod method handles this conversion and makes it easier to use Arcade's tools with LangGraph.

Here's how to integrate Arcade tools into your LangGraph application:

JavaScript Implementation

import { Arcade } from "@arcadeai/arcadejs";
import { executeOrAuthorizeZodTool, toZod } from "@arcadeai/arcadejs/lib";
import { tool } from "@langchain/core/tools";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { ChatOpenAI } from "@langchain/openai";

// Initialize Arcade client (reads ARCADE_API_KEY from the environment)
const arcade = new Arcade();

// Get Arcade tools for a specific toolkit
const googleToolkit = await arcade.tools.list({
  toolkit: "gmail",
  limit: 30,
});

// Convert to Zod schemas and wrap them as LangChain tools for LangGraph
const arcadeTools = toZod({
  tools: googleToolkit.items,
  client: arcade,
  userId: "user_123", // Your application's user ID
  executeFactory: executeOrAuthorizeZodTool, // Executes the tool, or returns an authorization URL
}).map(({ name, description, execute, parameters }) =>
  tool(execute, { name, description, schema: parameters })
);

// Create LangGraph agent with Arcade tools
const agent = createReactAgent({
  llm: new ChatOpenAI({ model: "gpt-4" }),
  tools: arcadeTools,
});

Python Implementation

import os

from arcadepy import AsyncArcade
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

# Initialize the async Arcade client
arcade = AsyncArcade(api_key=os.environ.get("ARCADE_API_KEY"))

# Get Gmail toolkit
gmail_tools = await arcade.tools.list(toolkit="gmail", limit=30)

# Create agent with tools
agent = create_react_agent(
    ChatOpenAI(model="gpt-4"),
    tools=gmail_tools.items
)

# Execute with user context
response = await agent.ainvoke({
    "messages": [{"role": "user", "content": "Send an email to team@example.com"}]
})

Handling Multi-User Authentication

The Authentication Flow

Arcade can now connect to any MCP server supporting the new streamable HTTP transport. This means you can seamlessly combine MCP tools and Arcade's tools in your agent or AI app.

When building multi-user LangGraph agents, each user needs their own authentication context:

from arcadepy import AsyncArcade
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, MessagesState, START, END
from langgraph.prebuilt import ToolNode, tools_condition

class MultiUserAgent:
    def __init__(self):
        self.arcade = AsyncArcade()
        self.user_tools = {}

    async def initialize_user_tools(self, user_id: str):
        """Initialize tools for a specific user"""

        # Check if user needs authorization
        auth_response = await self.arcade.tools.authorize(
            tool_name="Gmail.SendEmail",
            user_id=user_id
        )

        if auth_response.status != "completed":
            return {
                "authorization_required": True,
                "url": auth_response.url,
                "message": "Complete OAuth to access Gmail"
            }

        # Load user-specific tools
        tools = await self.arcade.tools.list(
            toolkit="gmail",
            user_id=user_id
        )

        self.user_tools[user_id] = tools.items
        return {"authenticated": True}

    async def create_user_graph(self, user_id: str):
        """Create LangGraph workflow for specific user"""

        # Get user-specific tools
        tools = self.user_tools.get(user_id, [])

        # Build graph with user context
        graph = StateGraph(MessagesState)
        model = ChatOpenAI(model="gpt-4").bind_tools(tools)
        tool_node = ToolNode(tools)

        graph.add_node("agent", lambda state: {"messages": [model.invoke(state["messages"])]})
        graph.add_node("tools", tool_node)
        graph.add_edge(START, "agent")
        # Route to the tool node when the model requests a tool call, otherwise end
        graph.add_conditional_edges("agent", tools_condition)
        graph.add_edge("tools", "agent")

        return graph.compile()

Connecting to MCP Servers via Arcade

Using Arcade's MCP Server

Arcade provides a demonstration server implementing the Model Context Protocol (MCP) with the streamable HTTP transport. You can connect your LangGraph agents to Arcade's MCP server or any compatible MCP server:

from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent

# Configure MCP client with multiple servers
client = MultiServerMCPClient({
    "arcade": {
        "url": "https://api.arcade.dev/v1/mcps/arcade-anon/mcp",
        "transport": "streamable_http"
    },
    "custom_tools": {
        "url": "http://localhost:8000/mcp",
        "transport": "streamable_http"
    }
})

# Get tools from all configured servers
tools = await client.get_tools()

# Create agent with combined tools
agent = create_react_agent(
    "openai:gpt-4",
    tools
)

Building Custom MCP Servers with Arcade

You can create your own MCP servers using Arcade's MCP framework:

#!/usr/bin/env python3
import sys
from typing import Annotated
from arcade_mcp_server import MCPApp

app = MCPApp(name="my_server", version="1.0.0")

@app.tool
def process_data(
    data: Annotated[str, "Data to process"],
    user_id: Annotated[str, "User ID for context"]
) -> str:
    """Process data with user context"""
    # Your tool logic here
    return f"Processed {data} for user {user_id}"

if __name__ == "__main__":
    transport = sys.argv[1] if len(sys.argv) > 1 else "http"
    app.run(transport=transport, host="127.0.0.1", port=8000)

Production Deployment Patterns

Combining Multiple Tool Sources

For production LangGraph applications, you'll often need to combine tools from different sources:

import { Arcade } from "@arcadeai/arcadejs";
import { executeOrAuthorizeZodTool, toZod } from "@arcadeai/arcadejs/lib";
import { tool } from "@langchain/core/tools";
import { MultiServerMCPClient } from "@langchain/mcp-adapters";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { ChatOpenAI } from "@langchain/openai";

async function buildProductionAgent(userId) {
  const tools = [];

  // Get Arcade authenticated tools and wrap them as LangChain tools
  const arcade = new Arcade();
  const gmailToolkit = await arcade.tools.list({ toolkit: "gmail", limit: 30 });
  const arcadeTools = toZod({
    tools: gmailToolkit.items,
    client: arcade,
    userId,
    executeFactory: executeOrAuthorizeZodTool,
  }).map(({ name, description, execute, parameters }) =>
    tool(execute, { name, description, schema: parameters })
  );
  tools.push(...arcadeTools);

  // Add MCP server tools
  const mcpClient = new MultiServerMCPClient({
    internal_tools: {
      url: process.env.INTERNAL_MCP_SERVER,
      transport: "streamable_http",
    },
  });
  const mcpTools = await mcpClient.getTools();
  tools.push(...mcpTools);

  // Create agent with combined toolset
  return createReactAgent({
    llm: new ChatOpenAI({ model: "gpt-4" }),
    tools,
  });
}

Error Handling and Recovery

Implement robust error handling for authentication failures:

from langgraph.errors import NodeInterrupt

async def execute_with_auth_handling(graph, user_input, config):
    try:
        async for chunk in graph.astream(user_input, config):
            yield chunk
    except NodeInterrupt as exc:
        # Handle authentication required
        if "authorization_required" in str(exc):
            auth_url = extract_auth_url(exc)
            yield {
                "type": "auth_required",
                "url": auth_url,
                "message": "Please authorize access"
            }
        else:
            raise

Advanced Integration Patterns

Dynamic Tool Loading

Load tools dynamically based on user permissions and requirements:

from arcadepy import AsyncArcade

class DynamicToolManager:
    def __init__(self):
        self.arcade = AsyncArcade()
        self.tool_cache = {}

    async def get_tools_for_task(self, user_id: str, task_type: str):
        cache_key = f"{user_id}:{task_type}"

        if cache_key in self.tool_cache:
            return self.tool_cache[cache_key]

        # Determine required toolkits
        toolkit_map = {
            "email": ["gmail"],
            "project": ["linear", "github"],
            "communication": ["slack", "discord"],
            "documentation": ["notion", "google_drive"]
        }

        tools = []
        for toolkit in toolkit_map.get(task_type, []):
            toolkit_tools = await self.arcade.tools.list(
                toolkit=toolkit,
                user_id=user_id
            )
            tools.extend(toolkit_tools.items)

        self.tool_cache[cache_key] = tools
        return tools

Streaming Responses with Tool Calls

Handle streaming responses while maintaining tool execution visibility:

async function* streamWithTools(graph, userInput, config) {
  const stream = await graph.stream(userInput, config);

  for await (const chunk of stream) {
    if (chunk.messages) {
      const lastMessage = chunk.messages[chunk.messages.length - 1];

      if (lastMessage.tool_calls) {
        console.log("Executing tools:", lastMessage.tool_calls);
      }

      yield {
        type: "message",
        content: lastMessage.content
      };
    }
  }
}

Best Practices

Tool Selection

Pick only the tools you need rather than loading every available tool at once, and be mindful of duplicate or overlapping functionality (a loading sketch follows the list below).

  • Load tools specific to the task at hand
  • Avoid loading entire toolkits when only specific tools are needed
  • Cache tool configurations for frequently used combinations
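As an illustration of narrow loading, here is a minimal sketch that keeps only the tools an email task needs rather than an entire toolkit. The short tool names in REQUIRED_TOOLS and the exact shape of the returned tool objects are assumptions for this example; check the names your Arcade toolkit actually exposes.

from arcadepy import AsyncArcade

# Tool names this task actually needs (assumed for illustration)
REQUIRED_TOOLS = {"SendEmail", "ListEmails"}

async def load_email_tools() -> list:
    """Return only the Gmail tools required for an email task."""
    arcade = AsyncArcade()

    # List the toolkit once, then keep just the tools the task requires
    gmail_toolkit = await arcade.tools.list(toolkit="gmail", limit=30)
    return [t for t in gmail_toolkit.items if t.name in REQUIRED_TOOLS]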

Security Considerations

  • Never expose Arcade API keys to client-side code
  • Implement proper session management for user authentication
  • Use environment-specific configurations for development vs production (see the sketch after this list)
  • Monitor tool usage and implement rate limiting where appropriate
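One way to keep API keys server-side while separating development from production is a small settings loader like the sketch below. The APP_ENV variable, the ArcadeSettings structure, and the rate-limit numbers are assumptions for illustration, not part of Arcade's SDK.

import os
from dataclasses import dataclass

@dataclass(frozen=True)
class ArcadeSettings:
    api_key: str           # Read server-side only; never expose to client code
    environment: str       # "development" or "production"
    rate_limit_per_min: int

def load_settings() -> ArcadeSettings:
    env = os.environ.get("APP_ENV", "development")  # assumed env var for illustration
    return ArcadeSettings(
        api_key=os.environ["ARCADE_API_KEY"],
        environment=env,
        # Tighter limits in production; relax them while developing locally
        rate_limit_per_min=60 if env == "production" else 600,
    )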

Performance Optimization

from collections import OrderedDict

from arcadepy import AsyncArcade

class OptimizedToolLoader:
    def __init__(self, max_cache_size=100):
        self.arcade = AsyncArcade()
        self.cache = OrderedDict()
        self.max_cache_size = max_cache_size

    async def get_tools(self, user_id: str, toolkit: str):
        cache_key = f"{user_id}:{toolkit}"

        # LRU cache implementation
        if cache_key in self.cache:
            self.cache.move_to_end(cache_key)
            return self.cache[cache_key]

        tools = await self.arcade.tools.list(
            toolkit=toolkit,
            user_id=user_id
        )

        # Maintain cache size
        if len(self.cache) >= self.max_cache_size:
            self.cache.popitem(last=False)

        self.cache[cache_key] = tools.items
        return tools.items

Testing Your Integration

Test your MCP-LangGraph integration using Arcade's demo server:

# Test connection to Arcade's MCP demo server
curl -X POST https://mcp-http-demo.arcade.dev/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
      "protocolVersion": "2025-03-26",
      "capabilities": {},
      "clientInfo": {
        "name": "LangGraphClient",
        "version": "1.0.0"
      }
    }
  }'

Troubleshooting Common Issues

Authentication Failures

If users encounter authentication issues (a status-check sketch follows this list):

  1. Verify the Arcade API key is correctly set
  2. Check OAuth redirect URLs are properly configured
  3. Ensure user IDs are consistent across sessions
  4. Monitor token expiration and implement refresh logic
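For checks 2-4, a common pattern is to surface the authorization URL and wait for the user to finish the OAuth flow before retrying the tool. The sketch below reuses the tools.authorize call shown earlier; the wait_for_completion helper, tool name, and user ID are assumptions based on typical arcadepy usage.

from arcadepy import AsyncArcade

async def ensure_gmail_access(user_id: str) -> bool:
    """Return True once the user has authorized Gmail access."""
    arcade = AsyncArcade()

    # Ask Arcade whether this user has already authorized the tool
    auth = await arcade.tools.authorize(tool_name="Gmail.SendEmail", user_id=user_id)
    if auth.status == "completed":
        return True

    # Hand the OAuth URL to the user, then block until the flow finishes
    print(f"Authorize here: {auth.url}")
    await arcade.auth.wait_for_completion(auth)  # assumed helper; poll tools.authorize otherwise
    return True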

Resolving Tool Loading Problems

When tools aren't appearing:

  1. Verify toolkit names match Arcade's available toolkits (the listing sketch after this list helps here)
  2. Check user authorization status for each toolkit
  3. Ensure proper error handling for unauthorized tools
  4. Use the limit parameter appropriately when listing tools
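A quick way to run the first two checks is to list what Arcade actually returns for a toolkit and print the tool names so mismatches stand out. The user_id parameter follows the listing calls used earlier in this guide; the toolkit name is a placeholder.

from arcadepy import AsyncArcade

async def inspect_toolkit(toolkit: str, user_id: str) -> None:
    """Print the tools Arcade exposes for a toolkit so name mismatches are obvious."""
    arcade = AsyncArcade()

    tools = await arcade.tools.list(toolkit=toolkit, user_id=user_id, limit=50)
    if not tools.items:
        print(f"No tools returned for '{toolkit}' - check the toolkit name and the user's authorization.")
    for t in tools.items:
        print(t.name)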

Performance Issues

For slow tool execution:

  1. Implement caching for frequently used tool configurations
  2. Load tools asynchronously where possible
  3. Use connection pooling for MCP server connections
  4. Monitor and optimize network latency

Next Steps

With your MCP-LangGraph integration through Arcade now operational, you can combine MCP's standardization, LangGraph's workflow capabilities, and Arcade's authentication infrastructure to build AI agents that move beyond prototypes to production-ready systems that securely access real enterprise tools and data.

Start building your authenticated LangGraph agents today with your Arcade API key.
