How to Set Up Multi-User Authentication with MCP for Gmail

Arcade.dev Team
OCTOBER 16, 2025
7 MIN READ
THOUGHT LEADERSHIP

Multi-user authentication represents one of the most challenging aspects of deploying AI agents in production. This guide demonstrates how to implement secure, scalable multi-user Gmail authentication using Arcade.dev’s Model Context Protocol (MCP) support, enabling AI agents to access Gmail on behalf of multiple users simultaneously.

The authentication gap in MCP servers

Model Context Protocol emerged as a standard for AI-tool interaction, but most open-source MCP servers default to single-user configurations. These servers require hardcoded API keys or personal access tokens, making them unsuitable for production applications where AI agents must act on behalf of different users. Arcade.dev bridges this gap by extending MCP with enterprise-grade OAuth 2.0 authentication, transforming single-user MCP servers into production-ready, multi-user systems.

The platform serves as both an MCP server and a bridge to other MCP servers over HTTP transport. This architecture lets AI models access Gmail and other authenticated services through standardized tool definitions while maintaining strict security boundaries: LLMs never see authentication tokens, and authentication logic remains completely isolated from the AI model.
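
As a rough illustration of that flow, the sketch below connects to an Arcade-hosted MCP endpoint over HTTP using the official mcp Python SDK and lists the Gmail tools it exposes. The endpoint URL and the Authorization header shape are placeholders; substitute the values from your own Arcade project settings.

# Minimal sketch, assuming the official `mcp` Python SDK (pip install mcp) and an
# Arcade-hosted MCP endpoint; ARCADE_MCP_URL and the Authorization header shape
# are placeholders -- use the endpoint and credentials from your Arcade dashboard.
import asyncio
import os

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client


async def list_gmail_tools() -> list[str]:
    async with streamablehttp_client(
        os.environ["ARCADE_MCP_URL"],  # placeholder endpoint
        headers={"Authorization": f"Bearer {os.environ['ARCADE_API_KEY']}"},
    ) as (read_stream, write_stream, _):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            tools = await session.list_tools()
            # The model only ever sees tool names and schemas -- never OAuth tokens.
            return [t.name for t in tools.tools]


if __name__ == "__main__":
    print(asyncio.run(list_gmail_tools()))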

Implementing multi-user OAuth flows

Dynamic user authentication pattern

This Python class demonstrates the core pattern for managing multiple users’ Gmail authentication simultaneously. The class handles per-user OAuth flows, maintains session state, and executes Gmail actions with proper user context isolation. You’ll need your Arcade API key configured and should have basic familiarity with async/await patterns in Python.

import os
from datetime import datetime
from typing import Dict, Any
from arcadepy import AsyncArcade  # async client, since the methods below are awaited


class MultiUserGmailManager:
    def __init__(self):
        self.client = AsyncArcade(api_key=os.environ.get("ARCADE_API_KEY"))
        self.user_sessions: Dict[str, Any] = {}

    async def authenticate_user(self, user_id: str) -> Dict[str, Any]:
        """Handle the OAuth flow for a specific user."""
        # Check whether Gmail.SendEmail requires authorization for this user
        auth_response = await self.client.tools.authorize(
            tool_name="Gmail.SendEmail",
            user_id=user_id
        )
        if auth_response.status != "completed":
            # User needs to complete the OAuth flow in their browser
            return {
                "authorization_required": True,
                "url": auth_response.url,
                "message": "Complete authorization to access Gmail"
            }
        # Store the user session
        self.user_sessions[user_id] = {
            "authenticated": True,
            "timestamp": datetime.now()
        }
        return {"authenticated": True}

    async def execute_gmail_action(self, user_id: str, action: str, params: Dict):
        """Execute Gmail actions with user-specific context."""
        # Ensure the user is authenticated before executing anything
        if user_id not in self.user_sessions:
            return await self.authenticate_user(user_id)
        # Execute the tool with the user's context
        response = await self.client.tools.execute(
            tool_name=f"Gmail.{action}",
            input=params,
            user_id=user_id
        )
        return response.output

Successful authentication will return {"authenticated": True}, while pending auth returns a dictionary with authorization_required: True and a URL for the user to visit. The most common gotcha is forgetting to handle the authorization_required response—always check this before attempting tool execution. Next, implement the OAuth callback handling for web applications.
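
For web applications, one way to wire this in is to expose the manager through two small endpoints: one that kicks off authorization and one that the frontend polls after the user returns from Google's consent screen. The sketch below assumes FastAPI and the MultiUserGmailManager class defined above; the routes and payload shapes are illustrative only.

# Sketch only: assumes FastAPI is installed and MultiUserGmailManager (above) is importable.
from fastapi import FastAPI

app = FastAPI()
manager = MultiUserGmailManager()


@app.post("/auth/start/{user_id}")
async def start_auth(user_id: str):
    # Returns {"authenticated": True} or an authorization URL the frontend should redirect to
    return await manager.authenticate_user(user_id)


@app.get("/auth/status/{user_id}")
async def auth_status(user_id: str):
    # Acts as the post-consent check: once the user finishes Google's consent screen,
    # authorize() reports status == "completed" and the session is stored.
    return await manager.authenticate_user(user_id)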

Gmail toolkit integration patterns

Configuring Gmail-specific scopes

This example shows how to create a custom Gmail tool with specific OAuth scopes using Arcade’s Tool Development Kit. The function demonstrates multi-user email sending with automatic token management and proper credential isolation. You’ll need the Google Gmail API Python client installed (pip install google-api-python-client) and an understanding of OAuth scope requirements for different Gmail operations.

from typing import Dict, Any

from arcade_tdk import ToolContext, tool
from arcade_tdk.auth import Google
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build


@tool(
    requires_auth=Google(
        scopes=[
            "https://www.googleapis.com/auth/gmail.send",
            "https://www.googleapis.com/auth/gmail.compose",
            "https://www.googleapis.com/auth/gmail.readonly"
        ]
    )
)
async def multi_user_gmail_send(
    context: ToolContext,
    to: str,
    subject: str,
    body: str,
    user_id: str
) -> Dict[str, Any]:
    """Send email with multi-user context."""
    # The token is automatically managed per user
    if not context.authorization or not context.authorization.token:
        raise ValueError("User not authorized for Gmail access")
    credentials = Credentials(context.authorization.token)
    gmail_service = build("gmail", "v1", credentials=credentials)
    # Arcade manages access/refresh token rotation server-side;
    # never log or persist tokens in application code.
    message = create_message(to, subject, body)
    result = gmail_service.users().messages().send(
        userId="me",
        body=message
    ).execute()
    return {"message_id": result["id"], "status": "sent"}

A successful tool execution returns a dictionary with message_id and status: "sent". The tool will automatically handle token refresh behind the scenes. Watch for insufficient scope errors if you try to perform actions not covered by your defined scopes. Next, build a complete production-ready Gmail agent with session management and caching.

Building a production Gmail agent

This Python class implements a complete multi-user Gmail agent with toolset caching and intelligent request routing. The agent handles authentication checking, tool execution, and maintains performance through user-specific caching. You’ll need the Arcade Python client installed (pip install arcadepy) and should be familiar with async/await patterns and dictionary-based caching strategies.

import os
from typing import Dict, Any
from arcadepy import AsyncArcade  # async client, since the methods below are awaited


class MultiUserGmailAgent:
    def __init__(self):
        self.arcade = AsyncArcade(api_key=os.getenv("ARCADE_API_KEY"))
        self.user_toolsets: Dict[str, Any] = {}

    async def initialize_user_tools(self, user_id: str):
        # Return the cached toolset if we already fetched it for this user
        if user_id in self.user_toolsets:
            return self.user_toolsets[user_id]
        # Fetch the Gmail toolkit definitions
        gmail_toolkit = await self.arcade.tools.list(toolkit="gmail", limit=30, user_id=user_id)
        # Create a user-specific toolset
        user_tools = {
            "tools": gmail_toolkit.items,
            "client": self.arcade,
            "user_id": user_id
        }
        # Cache for performance
        self.user_toolsets[user_id] = user_tools
        return user_tools

    async def process_email_request(self, user_id: str, request: Dict[str, Any]):
        tools = await self.initialize_user_tools(user_id)
        try:
            # Execute the appropriate Gmail tool for this request
            result = await self.arcade.tools.execute(
                tool_name=self.determine_gmail_tool(request),
                input=self.parse_request_parameters(request),
                user_id=user_id
            )
            return {"success": True, "data": result}
        except Exception as e:
            # If your SDK raises a typed "authorization_required" error, handle it here
            if getattr(e, "type", "") == "authorization_required":
                return {"success": False, "authRequired": True, "authUrl": getattr(e, "url", "")}
            raise

    def determine_gmail_tool(self, request: Dict[str, Any]) -> str:
        tool_map = {
            "send": "Gmail.SendEmail",
            "draft": "Gmail.WriteDraftEmail",
            "list": "Gmail.ListEmails",
            "search": "Gmail.SearchThreads"
        }
        return tool_map.get(request.get("action"), "Gmail.ListEmails")

    def parse_request_parameters(self, request: Dict[str, Any]) -> Dict[str, Any]:
        # Pass the caller-supplied parameters through; adapt to your tools' input schemas
        return request.get("parameters", {})

Successful requests return {"success": True, "data": ...} with the actual Gmail operation results. Authorization-required scenarios return {"success": False, "authRequired": True, "authUrl": "..."} for you to redirect users. The cache significantly improves performance but watch memory usage with many concurrent users. Next, implement secure token management for production deployments.
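
For context, here is an illustrative call into the agent. The user ID and the field names inside parameters are made up; match them to the actual input schema of the Gmail tools in your Arcade workspace.

import asyncio


async def main():
    agent = MultiUserGmailAgent()
    result = await agent.process_email_request(
        "user-123@example.com",  # your application's stable user identifier
        {
            "action": "send",
            "parameters": {  # hypothetical field names -- check the tool's input schema
                "recipient": "colleague@example.com",
                "subject": "Quarterly report",
                "body": "Draft attached below.",
            },
        },
    )
    if not result["success"] and result.get("authRequired"):
        print(f"Redirect the user to: {result['authUrl']}")
    else:
        print(result)


asyncio.run(main())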

Security configuration for production

Token management architecture

This Python class demonstrates secure token storage and management patterns that prevent credential exposure while handling automatic token refresh. The implementation shows in-memory storage for simplicity but should use encrypted persistent storage (Redis, database) in production. You’ll need to implement the encrypt/decrypt methods using a library like cryptography and have a secure encryption key management strategy.

from datetime import datetime
from typing import Dict


class SecureTokenManager:
    def __init__(self, encryption_key: str):
        self.encryption_key = encryption_key
        self.token_store = {}  # In production, use persistent storage

    def store_user_token(self, user_id: str, token: str):
        """Store an encrypted token with automatic refresh handling."""
        encrypted_token = self.encrypt(token)
        self.token_store[user_id] = {
            'token': encrypted_token,
            'timestamp': datetime.now(),
            'refresh_count': 0
        }

    def get_user_context(self, user_id: str) -> Dict:
        """Retrieve the user's authentication context."""
        if user_id not in self.token_store:
            return {'authenticated': False}
        token_data = self.token_store[user_id]
        # Check the token age and flag it for refresh if needed
        if self.needs_refresh(token_data):
            return {'needs_refresh': True}
        return {
            'authenticated': True,
            'token': self.decrypt(token_data['token'])
        }

    def needs_refresh(self, token_data: Dict) -> bool:
        """Determine whether the token needs a refresh."""
        token_age = datetime.now() - token_data['timestamp']
        return token_age.total_seconds() > 3500  # Refresh shortly before the 1-hour expiry

Successful token storage and retrieval will return proper authentication states without exposing raw tokens. The most critical gotcha is implementing strong encryption—weak encryption compromises your entire security model. For production, replace the in-memory store with Redis or encrypted database storage. Next, configure comprehensive security boundaries for authentication isolation.
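
As one possible implementation of the encrypt/decrypt pair mentioned above, the sketch below uses Fernet from the cryptography package. The cipher class name is ours, and key generation and rotation are left to your key management strategy.

from cryptography.fernet import Fernet


class FernetTokenCipher:
    """Hypothetical helper wrapping Fernet symmetric encryption for stored tokens."""

    def __init__(self, encryption_key: bytes):
        # encryption_key must be a urlsafe base64-encoded 32-byte key,
        # e.g. the output of Fernet.generate_key()
        self._fernet = Fernet(encryption_key)

    def encrypt(self, token: str) -> bytes:
        return self._fernet.encrypt(token.encode("utf-8"))

    def decrypt(self, encrypted_token: bytes) -> str:
        return self._fernet.decrypt(encrypted_token).decode("utf-8")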

Implementing authorization boundaries

This YAML configuration establishes comprehensive security boundaries for production deployments, ensuring authentication logic remains completely isolated from AI models and user tokens are never exposed. The configuration assumes you have secure key management in place and have planned your OAuth redirect URLs. You’ll need a cryptographically secure 32-byte encryption key and properly configured HTTPS endpoints.

# Production security configuration
security:
  encryption_keys:
    - ${env:ENCRYPTION_KEY}  # 32-byte key for production
  authentication:
    isolation_mode: strict
    token_exposure: never
    audit_logging: enabled
  oauth:
    state_validation: true
    pkce_enabled: true
    redirect_urls:
      - https://yourdomain.com/api/oauth/callback
  rate_limiting:
    enabled: true
    max_auth_attempts: 5
    window_minutes: 15

Successful configuration will show “Security boundaries initialized” in logs with no warnings about weak encryption or missing keys. Common issues include using development keys in production or misconfigured redirect URLs that break OAuth flows. Make sure your redirect URLs exactly match those registered in Google Cloud Console. Next, implement comprehensive error handling for authentication edge cases.
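
A sketch of that error handling is shown below. It wraps tool execution with the same authorization_required pattern used earlier, plus simple exponential backoff for transient failures; the exception attributes (type, url) follow the convention used in this post and may differ in your SDK version.

import asyncio


async def execute_with_auth_handling(client, tool_name: str, params: dict, user_id: str, max_retries: int = 2):
    """Execute an Arcade tool, surfacing OAuth prompts and retrying transient errors."""
    for attempt in range(max_retries + 1):
        try:
            response = await client.tools.execute(tool_name=tool_name, input=params, user_id=user_id)
            return {"success": True, "data": response.output}
        except Exception as e:
            if getattr(e, "type", "") == "authorization_required":
                # Do not retry: the user has to complete the OAuth flow first
                return {"success": False, "authRequired": True, "authUrl": getattr(e, "url", "")}
            if attempt == max_retries:
                raise
            await asyncio.sleep(2 ** attempt)  # simple backoff before the next attempt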

Performance optimization techniques

Caching strategies for multi-user scenarios

This Python caching class optimizes performance for multi-user scenarios by intelligently caching user toolsets while maintaining security boundaries and implementing proper cache expiration. The cache uses LRU (Least Recently Used) eviction and TTL (Time To Live) patterns to balance performance with memory usage. You’ll need the Arcade SDK configured and should monitor memory usage in production to adjust cache size limits appropriately.

import time
from collections import OrderedDict


class UserContextCache:
    """Simple LRU/TTL cache for per-user toolsets."""

    def __init__(self, max_cache_size: int = 1000, ttl_ms: int = 3600000):
        self.cache = OrderedDict()
        self.max_cache_size = max_cache_size
        self.ttl_ms = ttl_ms

    def set_cached_tools(self, user_id: str, tools):
        # LRU bookkeeping: move existing entries to the end before (re)inserting
        if user_id in self.cache:
            self.cache.move_to_end(user_id)
        self.cache[user_id] = {"tools": tools, "timestamp": int(time.time() * 1000)}
        if len(self.cache) > self.max_cache_size:
            self.cache.popitem(last=False)  # evict the least recently used entry

    def get_cached_tools(self, user_id: str):
        entry = self.cache.get(user_id)
        if not entry:
            return None
        now = int(time.time() * 1000)
        if now - entry["timestamp"] > self.ttl_ms:
            # TTL expired: drop the stale entry
            del self.cache[user_id]
            return None
        # Refresh the LRU position on a hit
        self.cache.move_to_end(user_id)
        return entry["tools"]

    async def warm_cache(self, arcade, user_ids):
        for uid in user_ids:
            toolkit = await arcade.tools.list(toolkit="gmail", user_id=uid)
            self.set_cached_tools(uid, toolkit.items)

Successful caching shows significant performance improvements (sub-100ms responses for cached users vs 1-2s for fresh requests) and consistent memory usage within your defined limits. The biggest gotcha is cache invalidation—user permission changes won’t be reflected until cache expiry. Monitor cache hit rates and adjust TTL based on your authentication flow frequency. With this caching layer, your multi-user MCP Gmail integration is ready for production scale.
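
To tie the cache back into the agent, the snippet below shows one way the lookup-then-populate path might look; the 30-minute TTL is an arbitrary example value.

cache = UserContextCache(max_cache_size=500, ttl_ms=30 * 60 * 1000)  # example: 30-minute TTL


async def get_gmail_tools(arcade, user_id: str):
    tools = cache.get_cached_tools(user_id)
    if tools is None:
        # Cache miss: fetch the toolkit once, then serve later requests from memory
        toolkit = await arcade.tools.list(toolkit="gmail", user_id=user_id)
        tools = toolkit.items
        cache.set_cached_tools(user_id, tools)
    return tools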

Conclusion

Setting up multi-user authentication with MCP for Gmail on Arcade.dev transforms single-user MCP limitations into production-ready, enterprise-grade systems. The platform’s authentication-first architecture, combined with OAuth 2.0 flows and comprehensive token management, enables AI agents to securely access Gmail on behalf of multiple users without exposing credentials to LLMs.

The implementation patterns demonstrated here—from dynamic user toolset loading to secure token management and production scaling strategies—provide the foundation for building AI applications that pass security reviews and scale to thousands of users. By leveraging Arcade’s MCP server capabilities with proper authentication boundaries, developers can focus on building AI agent functionality while the platform handles the complex authentication orchestration required for production deployments.
