How to choose the best Agentic Framework, Part 2: Agentic Delegation


Mateo Torres
JUNE 24, 2025
4 MIN READ
TUTORIALS

In the previous post in this series, we explored Human-in-the-Loop. Here, we’re exploring Handoffs, which I prefer to call “Agentic Delegation”.

This post is a companion to a video, so I encourage you to watch it!

Here’s the experiment setup

I’m using the same agentic system as in the previous post, implemented with three different frameworks:

  • LangGraph
  • OpenAI’s Agents SDK
  • Google’s Agent Development Kit (ADK)

In all cases, the agent uses a “supervisor” architecture, where a single agent receives most user prompts and ultimately decides whether to delegate a task to other, more specialized agents. In this case, I have a Google agent, capable of reading and sending emails, and a Slack agent, capable of reading and sending messages on Slack. I enforce explicit HITL approval on all “send” tools.

And of course, since these tools are real integrations, I implemented them using Arcade.dev.

What is Agentic Delegation?

In multi-agent systems, there are several ways of organizing the different agents so they collaborate (or compete) to achieve their tasks. One of these ways is agentic delegation (also known as handoffs) which is simply the idea of having one agent delegate a task to another agent in the system based on its own internal criteria.

This is far from the only mechanism to distribute and coordinate tasks between agents, but it’s gaining popularity in the emerging LLM-based multi-agent systems.

How does each framework approach Agentic Delegation?

What’s pretty universal across the frameworks is that they all involve a tool that transfers control to another agent, often referred to as a “sub-agent”. I personally like this pattern because it relies on well-established and controllable primitives to implement fine-grained control in these agents. The practical differences in how these are implemented are:

  • The degree of transparency of the handoff
  • The degree of control of the context involved

Everything else is pretty much just a function call.

As I stated before, I think using tool calling as the mechanism for agentic delegation is the right thing to do. And I was very happy to see these 3 frameworks implement it just like that. From that point of view, all of them passed the test, and I will now put on my nitpicking hat and highlight the differences.
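
To make that concrete, here’s a framework-agnostic sketch (my own illustration, not any framework’s actual API) of what a handoff boils down to: a tool the calling agent can invoke that transfers control, and some slice of the context, to a target agent. The `target_agent.run` interface is hypothetical.

```python
# A framework-agnostic sketch of a handoff. `target_agent.run` is a hypothetical
# interface standing in for whatever the framework actually calls.

def make_handoff_tool(target_agent, context_filter=lambda messages: messages):
    """Build a 'tool' that, when called, runs target_agent on (filtered) context."""

    def transfer_to_agent(messages):
        # The two knobs where the frameworks differ:
        #   1. transparency: does the caller/user see that a handoff happened?
        #   2. context control: how much of `messages` does the target receive?
        return target_agent.run(context_filter(messages))

    return transfer_to_agent
```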

Google’s Agent Development Kit

This framework approaches handoffs with a built-in tool that receives the target agent as an argument, along with the invocation context, which contains the conversation starting from the user prompt.

I think this implementation works for 90% of cases, but it is not flexible enough to handle cases where the context grows with the complexity of the prompt. For example, if my prompt requires tens of calls to multiple tools to be added to the context for summarization, and then a delegation to an agent to email that summary, I don’t see a way in ADK to say “only send the summary to the email agent”. This is potentially wasteful, but I admit this is an edge case.
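
As a rough sketch of what this looks like in ADK (the model name and the placeholder tool are my assumptions; in my actual setup the tools are Arcade-backed Gmail and Slack integrations):

```python
from google.adk.agents import LlmAgent


def read_emails() -> list[str]:
    """Placeholder for an Arcade-backed 'read emails' tool."""
    return []


email_agent = LlmAgent(
    name="email_agent",
    model="gemini-2.0-flash",  # assumed model
    description="Reads and sends emails.",
    instruction="Handle email-related requests.",
    tools=[read_emails],
)

supervisor = LlmAgent(
    name="supervisor",
    model="gemini-2.0-flash",
    instruction="Answer directly, or delegate to a sub-agent when appropriate.",
    # Listing sub_agents is what enables the handoff: ADK adds a built-in
    # transfer tool, and the chosen sub-agent receives the invocation context
    # starting from the user prompt.
    sub_agents=[email_agent],
)
```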

OpenAI’s Agents SDK

This framework models handoffs explicitly, and offers two distinct approaches to them:

  • Handoffs: This is a tool call where control of the flow is fully delegated to the target agent, and the entire context is passed to it. Responses to the user will now come from this agent, unless it delegates through a subsequent handoff.
  • Agents as tools: This is an explicit tool wrapping an agent, and the conversation flow is not transferred to the receiving agent. The wrapped agent receives input generated by the calling agent, and it is expected to respond to the calling agent rather than to the user.

This offers a greater level of versatility to the agent builder. Now we can decide, with some granularity, what is sent to the receiving agent by the way in which we connect it to the calling agent. The ergonomics are still immature in my opinion, as I can envision cases where I want to explicitly store elements with specific agents in the topology in mind, and this framework will fight me for that level of control. But I’d say it covers 95%+ of all agentic orchestration cases.
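
Here’s a minimal sketch of both approaches with the Agents SDK (the instructions and tool names are illustrative; in the real system the agents carry the Arcade-backed Gmail and Slack tools):

```python
from agents import Agent, Runner

email_agent = Agent(
    name="Email agent",
    instructions="You read and send emails.",  # Arcade-backed tools omitted here
)
slack_agent = Agent(
    name="Slack agent",
    instructions="You read and send Slack messages.",
)

# Approach 1: handoffs -- control and the full conversation transfer to the target.
supervisor = Agent(
    name="Supervisor",
    instructions="Delegate email tasks to the Email agent and Slack tasks to the Slack agent.",
    handoffs=[email_agent, slack_agent],
)

# Approach 2: agents as tools -- the supervisor keeps control and only passes
# generated input; the wrapped agent answers the supervisor, not the user.
supervisor_as_caller = Agent(
    name="Supervisor",
    instructions="Use your tools to handle email and Slack tasks.",
    tools=[
        email_agent.as_tool(
            tool_name="handle_email",
            tool_description="Read or send emails on the user's behalf.",
        ),
        slack_agent.as_tool(
            tool_name="handle_slack",
            tool_description="Read or send Slack messages on the user's behalf.",
        ),
    ],
)

result = Runner.run_sync(supervisor, "Summarize my unread emails and post it to #general")
print(result.final_output)
```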

LangGraph

Again, this is the framework that offers what I consider to be the most complete experience. It offers convenience functions like create_supervisor, which is excellent at implementing something equivalent to OpenAI’s Agents SDK handoffs. The context can be controlled with a similar level of granularity using the output_mode parameter.
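
A minimal sketch, assuming the langgraph-supervisor package, empty tool lists in place of the Arcade-backed ones, and parameter names from recent versions of langgraph:

```python
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent
from langgraph_supervisor import create_supervisor

model = ChatOpenAI(model="gpt-4o")

email_agent = create_react_agent(
    model, tools=[], name="email_agent",
    prompt="You read and send emails.",
)
slack_agent = create_react_agent(
    model, tools=[], name="slack_agent",
    prompt="You read and send Slack messages.",
)

supervisor = create_supervisor(
    agents=[email_agent, slack_agent],
    model=model,
    # "last_message" forwards only each sub-agent's final reply back to the
    # supervisor; "full_history" forwards the sub-agent's entire message history.
    output_mode="last_message",
).compile()

result = supervisor.invoke({"messages": [{"role": "user", "content": "Check my email"}]})
```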

What makes LangGraph my favorite framework once again is that I’m able to construct the raw graph myself and add specific elements to the graph state at any point in the flow.
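
For example, a custom graph state lets me carry a dedicated field alongside the message history and feed only that field to a downstream agent. A rough sketch (the node bodies are placeholders for real agent calls):

```python
from typing import Annotated, TypedDict

from langgraph.graph import END, START, StateGraph
from langgraph.graph.message import add_messages


class State(TypedDict):
    messages: Annotated[list, add_messages]
    summary: str  # an extra field I control explicitly


def summarize(state: State) -> dict:
    # In the real graph this node runs the summarizing agent over many tool
    # results; here it just writes the one field I care about.
    return {"summary": "distilled summary, not the full tool-call history"}


def email_node(state: State) -> dict:
    # The email agent only ever sees state["summary"], keeping its context lean.
    return {"messages": [("assistant", f"Emailing: {state['summary']}")]}


builder = StateGraph(State)
builder.add_node("summarize", summarize)
builder.add_node("email_agent", email_node)
builder.add_edge(START, "summarize")
builder.add_edge("summarize", "email_agent")
builder.add_edge("email_agent", END)
graph = builder.compile()
```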

So, which one is the best framework then?

Compared to Human-in-the-Loop, there’s no clear-cut winner in this case.

Yes, I prefer LangGraph over the other frameworks for very fine-grained control. But that level of control isn’t needed for most agentic projects. For agents that require less control over the context, LangGraph and OpenAI’s Agents SDK will be equivalent, and you’re unlikely to regret either choice. If you don’t mind ceding all control over the context to the orchestration framework, Google ADK will serve you well!

Try it today!

The code and resources for this experiment are open-source.

You will need:

Happy building!
