The Birth of Machine Experience Engineering

Renato Byrro
FEBRUARY 6, 2025
3 MIN READ
THOUGHT LEADERSHIP

LLMs acting on behalf of humans and interacting with real-world systems isn't theoretical anymore - with the advent of function calling, it is now a reality. And with Arcade, function calling becomes a superpower that connects LLMs to authorized APIs, user services, and complex systems. With this shift, we're seeing the emergence of a new software practice: Machine Experience Engineering (MX Engineering).
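
To make that concrete, here is a minimal sketch of what function calling looks like from the tool builder's side. The schema format follows the JSON-schema convention most function-calling APIs use, and the get_calendar_events tool is a hypothetical stand-in for illustration, not Arcade's SDK:

```python
import json

# Hypothetical tool definition, in the JSON-schema style most
# function-calling APIs expect. The model only ever sees this contract.
GET_CALENDAR_EVENTS = {
    "name": "get_calendar_events",
    "description": "List the user's calendar events for a given day.",
    "parameters": {
        "type": "object",
        "properties": {
            "date": {"type": "string", "description": "ISO date, e.g. 2025-02-06"},
        },
        "required": ["date"],
    },
}

def get_calendar_events(date: str) -> str:
    # A real tool would call the calendar provider with the user's
    # authorized credentials; here we return canned data.
    return json.dumps([{"title": "1:1 with Sam", "start": f"{date}T10:00"}])

# When the model decides to call the tool, it emits a name and JSON
# arguments; our job is to execute the call and hand the result back.
def dispatch(tool_call: dict) -> str:
    args = json.loads(tool_call["arguments"])
    if tool_call["name"] == "get_calendar_events":
        return get_calendar_events(**args)
    raise ValueError(f"Unknown tool: {tool_call['name']}")

print(dispatch({"name": "get_calendar_events",
                "arguments": '{"date": "2025-02-06"}'}))
```

The model never runs code itself - it only proposes a tool name and arguments, and the dispatcher decides what actually executes.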

The Current Reality

For AI models to handle our emails, schedule appointments, and manage documents, they need agents and tools connected to our digital lives. But here's the key shift: when an LLM calls a tool, that tool isn't just another I/O instrument. The LLM is the consumer. The user. And it comes with its own computational reasoning patterns and behaviors.

This fundamental change means software engineers are now building for a new class of users - massive matrices capable of reasoning. This isn't just an API integration problem - it's a new engineering paradigm.

The Engineering Challenge

As engineers, we have deep intuition about how humans interact with our systems. When we develop a CLI, REST API, or Python package, we intuitively know how other developers will shoot themselves in the foot. We build guardrails for those exact scenarios.

But LLMs don't reason like we do. The kinds of mistakes they make are not intuitive to us. Every interface we have today was designed with human users in mind, yet we're asking these systems to consume them all and act on our behalf.

"This is not going to end well," you might think. And you'd be right.

Why Traditional Interfaces Fall Short

Tools designed for LLMs will rarely map 1:1 to third-party REST API endpoints or any existing interface. Doing so would set LLMs up for failure. Instead, we need to ask: "What jobs will the LLM need to get done in this context?"

The tool's name, its interface, each argument we expose, and how the response is structured - everything needs careful consideration. Inside and around our tools, we have to do whatever software and networking gymnastics are necessary to make the LLM successful in its job.
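
As an illustration - the endpoint and tool names here are hypothetical, not a real API - compare a tool that mirrors a REST endpoint one-to-one with a tool designed around the job the LLM was actually given:

```python
from datetime import datetime, timedelta, timezone

# Naive: a 1:1 mirror of a REST endpoint. The LLM must already know
# internal channel IDs, epoch timestamps, and pagination cursors.
def list_messages(channel_id: str, oldest_ts: float, latest_ts: float,
                  cursor: str | None = None, limit: int = 100) -> dict:
    # (stub) would call the raw message-history endpoint with these params
    return {"messages": [], "next_cursor": None}

# Job-oriented: one call shaped around the task the LLM was given,
# e.g. "catch me up on #engineering since yesterday". Names, units,
# and defaults are chosen so the model is hard to get wrong.
def get_recent_channel_messages(channel_name: str, lookback_hours: int = 24) -> list[dict]:
    since = datetime.now(timezone.utc) - timedelta(hours=lookback_hours)
    # (stub) resolve channel_name -> channel_id, page through history,
    # and convert timestamps to readable strings on the LLM's behalf
    return [{"author": "sam", "sent_at": since.isoformat(), "text": "Shipping today."}]
```

The second version moves the ID resolution, pagination, and timestamp gymnastics inside the tool, so the model only has to express intent.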

MX Engineering Principles

Just as we have Frontend Developers, Designers, and UX Engineers dedicated to human user experience, we need MX Engineers focused on machine experience. This means putting ourselves in the LLMs' shoes, diving deep into their way of 'thinking', and anticipating how they'll stumble.

Every day at Arcade, we're learning something new that lets us mature our practices and understand which patterns work. Tools developed with this mindset consistently outperform naive implementations. Anyone serious about building LLM-powered products wants well-designed tools like these.

The Reality of Tool Design

It's easy to build integrations. But designing great tools for LLMs is hard. MX Engineering deals with multi-faceted problems and requires creativity and deep thought. We have to place guardrails so that LLMs have just enough freedom to do what is right for the job, while preventing common failure modes.
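
Here is a minimal sketch of what such a guardrail can look like in practice, with a hypothetical send_email tool and limits chosen purely for illustration: validate and bound the arguments the LLM supplies before doing anything irreversible, and return errors the model can act on.

```python
MAX_RECIPIENTS = 10

def send_email(to: list[str], subject: str, body: str, confirm: bool = False) -> dict:
    """Send an email on the user's behalf, with guardrails for common LLM mistakes."""
    # Guardrail: bound the blast radius of a bad call.
    if len(to) == 0 or len(to) > MAX_RECIPIENTS:
        return {"status": "error",
                "message": f"Provide between 1 and {MAX_RECIPIENTS} recipients."}
    # Guardrail: catch malformed addresses instead of failing downstream.
    bad = [addr for addr in to if "@" not in addr]
    if bad:
        return {"status": "error", "message": f"Invalid addresses: {bad}"}
    # Guardrail: irreversible actions require an explicit confirmation flag,
    # which keeps a human (or a prior reasoning step) in the loop.
    if not confirm:
        return {"status": "needs_confirmation",
                "message": "Re-call with confirm=true to actually send."}
    # (stub) hand off to the email provider with the user's credentials
    return {"status": "sent", "recipients": to, "subject": subject}
```

Returning structured errors the model can read, rather than raising opaque exceptions, gives it a chance to correct course on the next turn - exactly the kind of failure mode these guardrails are meant to absorb.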

Through practical implementation, we're discovering patterns that work and those that don't. This isn't theoretical - these decisions directly impact whether an LLM can reliably perform its tasks.

Looking Forward

MX Engineering is rapidly evolving, with new patterns and practices emerging daily. For engineers fascinated by LLMs, agents, and tools, this is an extraordinary moment in software development. We're not just building integrations - we're creating a new engineering discipline.

The difference between simple integrations and true MX Engineering becomes clearer every day. As we continue to understand how LLMs interact with our tools, we're laying the foundation for the next generation of AI systems. If you're passionate about this emerging field like we are, check out our open roles. We're looking for engineers ready to help define its future.
