How to choose the best Agentic Framework, Part 2: Agentic Delegation
Mateo Torres
JUNE 24, 2025
4 MIN READ
TUTORIALS

In the previous post in this series, we explored Human-in-the-Loop. Here, we’re exploring Handoffs, which I prefer to call “Agentic Delegation.”

This post is a companion to a video; I encourage you to watch it!

Here’s the experiment setup

I’m using the same agentic system as before, implemented with three different frameworks:

  • LangGraph
  • OpenAI’s Agents SDK
  • Google’s Agent Development Kit (ADK)

In all cases, the system uses a “supervisor” architecture, where a single agent receives most user prompts and ultimately decides whether to delegate a task to other, more specialized agents. In this case, I have a Google agent capable of reading and sending emails, and a Slack agent capable of reading and sending messages on Slack. I enforce explicit HITL approval on all “send” tools.

And of course, since these tools are real integrations, I implemented them using Arcade.dev.
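To make the HITL requirement concrete, here is a minimal plain-Python sketch of an approval gate around a “send” tool. The names (`require_approval`, `send_email`) are hypothetical stand-ins, not Arcade.dev’s actual API:

```python
def require_approval(tool, approver):
    """Wrap a side-effecting tool so it only runs if a human approves."""
    def gated(*args, **kwargs):
        if approver(tool.__name__, args, kwargs):
            return tool(*args, **kwargs)
        return "rejected by human reviewer"
    return gated

def send_email(to, body):
    # Hypothetical stand-in for the real email-sending tool.
    return f"email sent to {to}"

# Two example approvers: one that denies everything (a dry run),
# one that approves everything (an auto-approve test mode).
deny_all = require_approval(send_email, lambda name, args, kwargs: False)
allow_all = require_approval(send_email, lambda name, args, kwargs: True)
```

In the real system, the `approver` callback would surface the pending tool call to a person and block until they answer; the point is that the gate sits between the agent’s decision and the side effect.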

What is Agentic Delegation?

In multi-agent systems, there are several ways of organizing the different agents so they collaborate (or compete) to achieve their tasks. One of these ways is agentic delegation (also known as handoffs) which is simply the idea of having one agent delegate a task to another agent in the system based on its own internal criteria.

This is far from the only mechanism to distribute and coordinate tasks between agents, but it’s gaining popularity in the emerging LLM-based multi-agent systems.

How does each framework approach Agentic Delegation?

One thing is pretty universal across the frameworks: they all implement handoffs as a tool that transfers control to another agent, often referred to as a “sub-agent.” I personally like this pattern because it relies on well-established, controllable primitives to implement fine-grained control in these agents. The practical differences in how they implement it come down to:

  • The degree of transparency of the handoff
  • The degree of control of the context involved

Everything else is pretty much just a function call.

As I stated before, I think using tool calling as the mechanism for agentic delegation is the right thing to do, and I was very happy to see all three frameworks implement it exactly that way. From that point of view, all of them passed the test, so I will now put on my nitpicking hat and highlight the differences.
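The shared pattern can be sketched in a few lines of plain Python. The names here (`make_handoff_tool`, `transfer_to_*`) are illustrative, not any framework’s real API: delegation is just another entry in the tool registry, and when the LLM emits that tool call, the runtime invokes it like any other function:

```python
def make_handoff_tool(target_agent):
    """Build a tool that transfers control (and context) to target_agent."""
    def handoff(context):
        return target_agent(context)  # the target now owns the turn
    handoff.__name__ = f"transfer_to_{target_agent.__name__}"
    return handoff

def email_agent(context):
    # Hypothetical specialized agent; echoes the last message it was given.
    return f"email_agent handled: {context[-1]}"

tools = {}
tool = make_handoff_tool(email_agent)
tools[tool.__name__] = tool

# When the supervisor's model emits a call named "transfer_to_email_agent",
# the runtime looks it up and invokes it like any other tool:
result = tools["transfer_to_email_agent"](["summarize my inbox"])
```

Everything the frameworks differ on (how visible the transfer is, and how much of `context` crosses the boundary) happens inside that one function call.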

Google’s Agent Development Kit

This framework approaches handoffs with a tool that receives the target agent as an argument, along with the invocation context, which carries the conversation from the user prompt onward.

I think this implementation works for 90% of cases, but it is not flexible enough for cases where the context grows with the complexity of the prompt. For example, if my prompt requires dozens of calls to multiple tools, all added to the context for summarization, before delegating to an agent to email that summary, I don’t see a way in ADK to say “only send the summary to the email agent.” This is potentially wasteful, though I admit it’s an edge case.
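The edge case above is easier to see in a plain-Python sketch (these functions are hypothetical illustrations, not ADK’s API). With full-context delegation, the email agent receives every intermediate tool result; with trimmed delegation, it receives only the summary it actually needs:

```python
def delegate_full_context(sub_agent, history):
    # ADK-style: everything since the user prompt crosses the boundary.
    return sub_agent(history)

def delegate_trimmed(sub_agent, history, keep):
    # What I'd like to express: forward only selected messages.
    return sub_agent([m for m in history if keep(m)])

# A conversation with 40 intermediate tool results plus a final summary.
history = (["user: summarize these tool results and email the summary"]
           + [f"tool: result {i}" for i in range(40)]
           + ["assistant: SUMMARY: all systems nominal"])

def email_agent(messages):
    return f"emailing based on {len(messages)} message(s)"

full = delegate_full_context(email_agent, history)
trimmed = delegate_trimmed(email_agent, history,
                           lambda m: m.startswith("assistant: SUMMARY"))
```

The full-context call forwards all 42 messages when 1 would do: wasted tokens, and potentially distracting context for the sub-agent.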

OpenAI’s Agents SDK

This framework models handoffs explicitly, and offers two distinct approaches to them:

  • Handoffs: This is a tool call where control of the flow is fully delegated to the target agent, and the entire context is passed to it. Responses to the user now come from this agent, unless it delegates again through a subsequent handoff.
  • Agents as tools: This is an explicit tool wrapping an agent; the conversation flow is not transferred to the receiving agent. The wrapped agent receives input generated by the calling agent, and is expected to respond to the calling agent rather than to the user.

This offers the agent builder a greater level of versatility. We can now decide with some granularity what is sent to the receiving agent by choosing how we connect it to the calling agent. The ergonomics are still immature in my opinion, as I can envision cases where I want to explicitly store context elements with specific agents in the topology in mind, and this framework will fight me for that level of control. But I’d say it covers 95%+ of all agentic orchestration cases.
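The contrast between the two modes can be sketched in plain Python (these functions are hypothetical illustrations of the semantics, not the Agents SDK’s actual API). A handoff moves the whole conversation and changes who answers the user; an agent-as-tool passes only generated input and returns the result to the caller:

```python
def handoff(target, conversation):
    # Full context crosses; the target replies to the user from now on.
    return {"responder": target.__name__, "context": list(conversation)}

def agent_as_tool(target, generated_input):
    # Only the caller's generated input crosses; the result flows back
    # to the supervisor, which keeps ownership of the conversation.
    return {"responder": "supervisor",
            "tool_result": f"{target.__name__}: {generated_input}"}

def slack_agent(*args):
    return "ok"

conversation = ["user: post the weekly report to #general"]

full = handoff(slack_agent, conversation)
scoped = agent_as_tool(slack_agent, "post: report done")
```

In the first call the responder becomes `slack_agent` and it sees the whole conversation; in the second, the supervisor remains the responder and the Slack agent sees only what the supervisor chose to generate for it.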

LangGraph

Once again, this is the framework that offers what I consider the most complete experience. It provides convenience functions like create_supervisor, which is excellent for implementing something equivalent to OpenAI’s Agents SDK handoffs. The context can be controlled with a similar level of granularity using the output_mode parameter.

What makes LangGraph my favorite framework once again is that I can construct the raw graph myself and add specific elements to the graph state at any point in the flow.
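Here is a plain-Python sketch of why the raw-graph approach gives that control (illustrative only, not LangGraph’s API): every node reads and writes an explicit state object, so the supervisor can stash exactly the key a downstream agent should see before routing to it:

```python
def supervisor(state):
    # Write exactly what the downstream agent needs into the shared state.
    state["summary"] = "SUMMARY: inbox is clear"
    state["next"] = "email_agent"
    return state

def email_agent(state):
    # Reads only the key the supervisor chose to populate, not raw history.
    state["sent"] = f"emailed: {state['summary']}"
    state["next"] = None
    return state

nodes = {"supervisor": supervisor, "email_agent": email_agent}

def run(state, entry="supervisor"):
    # Minimal graph executor: follow the "next" pointer until it is None.
    node = entry
    while node:
        state = nodes[node](state)
        node = state.get("next")
    return state

final = run({"user_prompt": "summarize my inbox and email me the summary"})
```

Because the state is an explicit object threaded through the graph, nothing crosses an agent boundary unless a node put it there, which is precisely the fine-grained control the edge case in the ADK section called for.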

So, which one is the best framework then?

Compared to Human-in-the-Loop, there’s no clear-cut winner this time.

Yes, I prefer LangGraph over the other frameworks when I need very fine-grained control, but most agentic projects don’t need that. For agents that require less control over the context, LangGraph and OpenAI’s Agents SDK are roughly equivalent, and you’re unlikely to regret either choice. If you don’t mind ceding all control over the context to the orchestration framework, Google’s ADK will serve you well!

Try it today!

The code and resources for this experiment are open-source.

You will need:

Happy building!
