Production-Ready MCP: Why Security Standards Matter for AI Tool Infrastructure

Wils Dawson
JUNE 30, 2025
2 MIN READ
THOUGHT LEADERSHIP

After eight years building authentication systems at Okta, followed by stints at Kong and ngrok working on developer tools and API gateways, I've learned what it takes to build systems that are secure by default. Now at Arcade.dev, I'm watching the MCP ecosystem struggle to get there.

The Model Context Protocol has incredible potential for enabling AI agents to interact with real-world systems. But there's a gap between experimental implementations and production-ready infrastructure that most developers aren't addressing.

The Current State of MCP Security

The MCP specification (as of June 18, 2025) defines authentication between clients and servers. This is essential, but it's only part of the story. The spec handles:

  • Local transport (stdio): Suitable for development and single-user scenarios
  • Remote transport (HTTP): Requires OAuth-based authorization
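To make the stdio case concrete: clients like Claude Desktop typically launch a stdio server as a local subprocess declared in a config file. A minimal sketch, where the server name and package are placeholders rather than real entries:

```json
{
  "mcpServers": {
    "example-server": {
      "command": "npx",
      "args": ["-y", "@example/mcp-server"]
    }
  }
}
```

The remote (HTTP) transport has no equivalent shortcut: the client is expected to complete an OAuth authorization flow with the server before issuing requests.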

This foundation is solid. The challenge comes when MCP servers need to interact with external APIs and services—which is arguably the entire point of building them.

The Authorization Gap

Here's the critical issue: when your MCP server needs to access third-party APIs (Google Drive, Slack, Salesforce), you face an architectural decision with significant security implications.

The Anti-Pattern: Embedding admin-level credentials in your MCP server. This forces the server to reimplement the authorization logic of every system it touches. It's not just a security risk—it's an engineering nightmare that doesn't scale.

The Solution: User-specific authorization flows. The MCP server obtains tokens scoped to individual users, inheriting their permissions from the downstream systems. This is what our PR #475 addresses—enabling secure token exchange without exposing credentials to clients or LLMs.
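The difference between the two patterns can be sketched in a few lines of Python. Everything here is illustrative: `TokenBroker`, `record_grant`, and `handle_tool_call` are hypothetical names, and a real implementation would sit on top of an OAuth token-exchange flow rather than an in-memory dict. The point is the shape of the design: the server resolves a per-user, per-service, scope-checked token at call time instead of holding one admin credential.

```python
class TokenBroker:
    """Issues per-user tokens for downstream APIs (illustrative sketch).

    In production this would be backed by an OAuth token exchange and a
    secure token store, not an in-memory dict.
    """

    def __init__(self):
        self._grants = {}

    def record_grant(self, user_id, service, token, scopes):
        # Called after the user completes the downstream OAuth consent flow.
        self._grants[(user_id, service)] = {"token": token, "scopes": scopes}

    def get_user_token(self, user_id, service, required_scope):
        grant = self._grants.get((user_id, service))
        if grant is None:
            raise PermissionError(f"{user_id} has not authorized {service}")
        if required_scope not in grant["scopes"]:
            raise PermissionError(f"{user_id} lacks scope {required_scope}")
        return grant["token"]


def handle_tool_call(broker, user_id, service, scope):
    # The MCP server never holds an admin credential; each tool call runs
    # with the calling user's own grant, so it inherits their permissions.
    token = broker.get_user_token(user_id, service, scope)
    return {"authorization": f"Bearer {token}"}
```

Because authorization is resolved per user at call time, a user who never granted access to Slack simply cannot reach Slack through the server, and no credential ever needs to be exposed to the client or the LLM.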

Why Standards Compliance Matters

The temptation to bypass security standards is strong, especially during rapid prototyping. But consider the implications:

  1. Interoperability: Non-compliant servers won't work with Claude Desktop, Cursor, VS Code, or other standard MCP clients
  2. Security vulnerabilities: Improper token handling exposes attack vectors that standard OAuth flows prevent
  3. Scalability issues: What works for one user breaks at scale without proper session management and authorization
  4. Audit requirements: Enterprise deployments often require SOC 2 compliance and security attestations; retrofitting these onto a non-compliant server forces complex rebuilds

Production Readiness Beyond Security

Security is foundational, but production-ready MCP deployments require:

  • Observability: Detailed logging and monitoring of tool calls and data access
  • Scalability: Multi-instance deployment with proper session handling
  • Error handling: Graceful degradation when downstream services fail
  • Rate limiting: Protection against abuse and unexpected usage patterns
  • Audit trails: Compliance with data governance requirements
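To make the rate-limiting bullet concrete, here is a minimal token-bucket sketch. It is illustrative only: a production deployment would typically back this with a shared store so limits hold across multiple server instances, and would key buckets per user or per tool.

```python
import time


class TokenBucket:
    """Minimal token-bucket rate limiter for tool calls (illustrative).

    Each call consumes one token; tokens refill continuously up to a
    fixed capacity, allowing short bursts while capping sustained rate.
    """

    def __init__(self, capacity, refill_per_sec, now=time.monotonic):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = float(capacity)
        self._now = now  # injectable clock, which also makes testing easy
        self._last = now()

    def allow(self):
        current = self._now()
        elapsed = current - self._last
        self._last = current
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_per_sec)
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A server would check `bucket.allow()` before dispatching each tool call and return a throttling error otherwise, protecting downstream APIs from an agent stuck in a retry loop.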

The Path Forward

The MCP community is at an inflection point: either we implement secure standards now, or MCP becomes irrelevant when a protocol that does takes its place. It has to be easy to do the secure and scalable thing. At Arcade.dev, we're building infrastructure that makes security and production-readiness the default, not an afterthought.

This isn't about gatekeeping or adding unnecessary complexity. It's about learning from decades of API development and applying those lessons to the next generation of agentic AI infrastructure.

The future of AI agents depends on their ability to safely and reliably interact with real-world systems. That future requires more than just functional code—it requires infrastructure built on proven security principles.


Arcade.dev provides production-ready infrastructure for AI tool-calling, with built-in authentication, authorization, and enterprise-grade security. Learn more in our documentation or join our Discord community.
