Lately, my work has me thinking a lot about what it really means to build useful AI agents, especially in the context of designing systems where humans and machines operate effectively together.
That’s what caught my attention about Delivrd. It’s a car-buying concierge service that, at first glance, looks like it could be automated. It collects preferences, gathers data, negotiates, and presents options.

Sounds like an agent, right?
But Delivrd isn't an agentic process. It's a real person with deep car sales experience, someone who's been on the other side of the deal and knows how dealerships operate from the inside. That background makes a difference when navigating ambiguity, dealing with inconsistent systems, and getting straight answers from salespeople who often withhold key information.
And that disconnect is exactly why I’m writing this.
The car buying process is a sharp illustration of where the abstraction of AI agents breaks down. It looks agent-friendly, with structured inputs and repeatable steps. But behind that flow is a deeply human mess.
For anyone working on agent design, it’s a revealing case study, because the process simply wouldn't work if you tried to automate it with AI today.

What Delivrd Actually Does
Let’s break Delivrd down.
When a user contacts Delivrd, the process looks something like this:
Collect requirements: Budget, model preferences, location, timing
Search public inventory: Cars.com, Autotrader, dealer sites
Screen for quality: VIN lookups, accident history, pricing outliers
Contact dealerships: Phone calls, emails, persistent follow-up
Negotiate price: They ask for "out-the-door" pricing and push for better terms
Summarize options: Clear report to the buyer with best choices
Coordinate next steps: Delivery, paperwork, scheduling test drives
This is a classic agent pipeline, right?
The Fantasy: How AI Agents Could Handle It
You could imagine a system built from a few LLMs, a rules engine, and API access to inventory sites, broken down into distinct AI agents (a rough sketch follows the list):
Search Agent pulls current listings based on constraints.
Filtering Agent runs VINs, surfaces red flags, checks market price deltas.
Negotiation Agent sends emails to sales managers asking for best prices.
Summarizing Agent packages the findings into a simple dashboard.
Scheduling Agent books appointments or delivery.
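To make the fantasy concrete, here is a rough sketch of that pipeline. Everything in it is hypothetical: the agent functions, the `Listing` fields, and the thresholds are illustrative stand-ins for LLM calls and API integrations, not a real implementation.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the "fantasy" pipeline. None of these agents
# exist; each stage stands in for an LLM call or API integration.

@dataclass
class Listing:
    vin: str
    price: float
    dealer: str
    flags: list[str] = field(default_factory=list)

def search_agent(constraints: dict) -> list[Listing]:
    """Pull current listings matching budget/model/location constraints."""
    # In reality: calls to Cars.com, Autotrader, and dealer sites.
    return [Listing(vin="FAKE123", price=24_500.0, dealer="Example Motors")]

def filtering_agent(listings: list[Listing]) -> list[Listing]:
    """Run VIN checks and flag pricing outliers or accident history."""
    for car in listings:
        if car.price < 10_000:  # toy stand-in for a market-delta check
            car.flags.append("price suspiciously low")
    return [car for car in listings if not car.flags]

def negotiation_agent(listing: Listing) -> str:
    """Draft an email asking for out-the-door pricing."""
    return f"Hi {listing.dealer}, what's your best out-the-door price on VIN {listing.vin}?"

def run_pipeline(constraints: dict) -> list[str]:
    """Chain the agents: search -> filter -> negotiate."""
    candidates = filtering_agent(search_agent(constraints))
    return [negotiation_agent(car) for car in candidates]

print(run_pipeline({"budget": 25_000, "model": "RAV4"}))
```

On paper, each stage is a clean function with typed inputs and outputs. That is exactly the abstraction that the next section breaks.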
And sure, with decent tools and some fine-tuning, you could probably automate a lot of this today.
So why hasn’t that happened?

The Reality: Where It Falls Apart
Data isn’t trustworthy. Public inventory often lies. Cars listed aren’t actually available. Prices don’t include fees. Discounts aren't visible unless you ask.
↑ AI has no way to validate this unless it calls a human (a sketch of that escalation step follows this list).
Dealerships aren’t built for agents. Most dealers won’t respond to a bot. They barely respond to people. They rely on friction, urgency, and emotional pressure to close.
↑ An AI trying to play by the rules will be ignored.
Communication is messy. Sales reps switch constantly. Emails go unread. Phones matter. Tone matters. Urgency matters.
↑ The act of a human calling with a confident voice still beats any automated interaction in this context.
AI has no authority. "I’m representing a buyer ready to move today" means something when a human says it.
↑ An AI saying the same thing is just spam.
No incentive to change. Dealers profit from confusion. They don’t want a clean, API-accessible pricing system. The current model is extractive by design.
↑ Agents disrupt that, which makes them a threat, not a customer.
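If you did try to automate this honestly, the pipeline would need an explicit human-escalation path baked in. A minimal sketch, reusing the hypothetical shapes from the pipeline sketch above; the validator's main job is to admit what it cannot verify.

```python
from enum import Enum

# Hedged sketch: a validation step that admits what it can't verify.
# The fields and thresholds are hypothetical; the point is the
# explicit human fallback, not the specific checks.

class Verdict(Enum):
    TRUSTED = "trusted"
    NEEDS_HUMAN = "needs_human_call"

def validate_listing(price: float, includes_fees: bool, confirmed_available: bool) -> Verdict:
    """Public inventory data is only trustworthy if a human confirmed it."""
    if includes_fees and confirmed_available:
        return Verdict.TRUSTED
    # No API tells you the car is really on the lot or what the
    # out-the-door price is. The only resolution is a phone call.
    return Verdict.NEEDS_HUMAN

print(validate_listing(price=24_500.0, includes_fees=False, confirmed_available=False))
# -> Verdict.NEEDS_HUMAN: queue this for a human caller, not another email.
```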
What This Teaches Us About Agents in the Real World
Delivrd isn't an agent; it's a human navigating the kind of workflow we often imagine automating with AI.
And if you really want to understand this human element, check out Delivrd's YouTube channel, where founder Tomi Mikula can be seen haggling with car salespeople over the phone, navigating tone, leverage, and pushback in real time.
My developer colleagues will rightly remind me that the data parsing and structured reasoning are solvable problems here. But the parts that matter most still aren't. Human behavior is messy, inconsistent, and intentionally opaque.
Until that changes, agents might help you find a car. But doing something as humanly complex as closing a deal still takes someone who can navigate ambiguity, read intent, and manage dynamic conversations—skills no system has mastered (yet).
P.S. This isn’t just about car buying. Any process shaped by interpersonal communication, opaque information, or shifting incentives will continue to be the hardest areas for an agentic system to tackle.
Now project forward to a world where AI agents can discover and authenticate one another, share complex information securely, and adapt to uncertainty while collaborating across different domains. Users will work with agents that pursue complex goals with limited direct supervision, acting autonomously on their behalf.
As a design team, we are actively shaping how we navigate this transformation. And one key question keeps emerging: How do we design AI experiences that empower human-machine teams, rather than just automate them?
The Agentic Teammate: Enhancing Knowledge Work
In this new world, AI agents become our teammates, offering powerful capabilities:
Knowledge Synthesis: Agents aggregate and analyze data from multiple sources, offering fresh perspectives on problems.
Scenario Simulation: Agents can create hypothetical scenarios and test them in a virtual environment, allowing knowledge workers to experiment and assess risks.
Constructive Feedback: Agents critically evaluate human-proposed solutions, identifying flaws and offering constructive feedback.
Collaboration Orchestration: Agents work with other agents to tackle complex problems, acting as orchestrators of a broader agentic ecosystem.
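That last capability, orchestration, is the one that most changes system architecture. Here is a toy sketch of what "agents orchestrating agents" means mechanically: a coordinator fans a goal out to specialist agents and aggregates their outputs. The specialist functions are trivial stand-ins for real LLM-backed agents.

```python
from typing import Callable

# Toy sketch of "Collaboration Orchestration": a coordinator agent
# fans a goal out to specialist agents and merges their outputs.
# The specialists are trivial functions standing in for LLM calls.

def synthesis_agent(goal: str) -> str:
    return f"synthesis: 3 sources summarized for '{goal}'"

def simulation_agent(goal: str) -> str:
    return f"simulation: 2 scenarios stress-tested for '{goal}'"

def critique_agent(goal: str) -> str:
    return f"critique: 1 flaw found in the leading option for '{goal}'"

def orchestrator(goal: str, specialists: list[Callable[[str], str]]) -> str:
    """Decompose a goal across specialists and aggregate the results."""
    results = [agent(goal) for agent in specialists]
    return "\n".join(results)

print(orchestrator("pick a rollout plan", [synthesis_agent, simulation_agent, critique_agent]))
```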
Addressing the Challenges: Gaps in Human-Agent Collaboration
All this autonomous help is great, sure – but it's not without its challenges.
Autonomous agents have fundamental gaps that we need to address to ensure successful collaboration:
Probabilistic Operations
AI agents work with probabilities, leading to inconsistent outcomes and misinterpretations of intent (see the sketch after this list).
Trust Over Time
Humans tend to trust AI teammates less than human teammates, making it crucial to build that trust over time.
Gaps in Contextual Understanding
AI agents often share raw data instead of contextual states, and may miss human nuances like team dynamics and intuition.
Challenges in Mental Models
Evolving AI systems can be difficult for humans to understand and keep up with, as the AI's logic may not align with human mental models.
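The first of these gaps is easy to demonstrate in code. A minimal sketch, using a fabricated sampler in place of a real LLM: the same prompt at nonzero temperature yields different answers run to run, which is exactly why agent outcomes feel inconsistent. Sampling several times and taking a majority vote (self-consistency) is one common mitigation.

```python
import random
from collections import Counter

# Stand-in sampler that, like an LLM at nonzero temperature, returns
# different answers for the same input. The prompt and answer set are
# fabricated purely for illustration.

def fake_agent_answer(prompt: str, temperature: float, rng: random.Random) -> str:
    answers = ["approve", "approve", "approve", "escalate"]  # skewed distribution
    if temperature == 0.0:
        return answers[0]          # greedy decoding: deterministic
    return rng.choice(answers)     # sampling: varies run to run

rng = random.Random(7)
runs = [fake_agent_answer("route this ticket", temperature=0.8, rng=rng) for _ in range(5)]
print(runs)  # varies across runs at temperature > 0

# One common mitigation: sample several times and take a majority vote
# (self-consistency), trading extra cost for more stable outcomes.
majority, _ = Counter(runs).most_common(1)[0]
print(majority)
```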
The Solution: Five Design Principles for Human-Agent Collaboration
Put Humans in the Driver's Seat
Users should always have the final say, with clear boundaries and intuitive controls to adjust agent behavior. An example of this is Google Photos' Memories feature, which lets users customize their slideshows or turn the feature off completely.
Make the Invisible Visible
The AI's reasoning and decision-making processes should be transparent and easy to understand, with confidence levels or uncertainty displayed to set realistic expectations. The North Face's AI shopping assistant exemplifies this by guiding users through a conversational process and providing clear recommendations. (A short sketch combining these first two principles follows the list.)
Design for Recovery
Anticipate edge cases to provide clear recovery steps, while empowering users to verify and adjust AI outcomes when needed. ServiceNow's Now Assist AI is designed to allow customer support staff to easily verify and adjust AI-generated insights and recommendations.
Collaborate, Don't Just Automate
Prioritize workflows that integrate human and AI capabilities, designing intuitive handoffs to ensure smooth collaboration. Aisera HR Agents demonstrate this by assisting with employee inquiries while escalating complex issues to human HR professionals.
Earn Trust Through Consistency
Build trust gradually with reliable results in low-risk use cases, making reasoning and actions transparent. ServiceNow's Case Summarization tool is an example of using AI in a low-risk scenario to gradually build user trust in the system's capabilities.
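To make the first two principles concrete, here is a minimal, hypothetical sketch: every agent proposal carries its own reasoning and confidence, and nothing executes without explicit approval. The `Proposal` shape and the functions are illustrative assumptions, not a real product API.

```python
from dataclasses import dataclass

# Hypothetical sketch of "Put Humans in the Driver's Seat" and
# "Make the Invisible Visible": every agent proposal carries its
# reasoning and confidence, and nothing executes without approval.

@dataclass
class Proposal:
    action: str
    reasoning: str       # surfaced to the user, not buried in logs
    confidence: float    # 0.0-1.0, sets realistic expectations

def present(p: Proposal) -> str:
    return f"{p.action}\n  why: {p.reasoning}\n  confidence: {p.confidence:.0%}"

def execute_with_approval(p: Proposal, approved: bool) -> str:
    # The human always has the final say; low confidence raises the
    # prominence of the ask, it never auto-executes.
    if not approved:
        return "skipped (user declined)"
    return f"executed: {p.action}"

p = Proposal(
    action="Summarize this support case",
    reasoning="Case has 14 comments; a summary speeds up handoff.",
    confidence=0.82,
)
print(present(p))
print(execute_with_approval(p, approved=True))
```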
Designing Tomorrow's Human-Agent Collaboration at Outshift
These principles are the foundation for building effective partnerships between humans and AI at Outshift.
Empowering Users with Control
Establish clear boundaries for AI agents to ensure they operate within a well-defined scope.
Building Confidence Through Clarity
Surface AI reasoning, displaying confidence levels, realistic expectations, and the extent of changes to enable informed decision-making.
Always Try to Amplify Human Potential
Actively collaborate through simulations and come to an effective outcome together.
Let Users Stay in Control When It Matters
Provide easy access to detailed logs and performance metrics for every agent action, so users can review decisions and workflows and ensure compliance. Include clear recovery steps for seamless continuity. (A minimal sketch follows this list.)
Take It One Interaction at a Time
Show agent actions in context and let users observe agent performance as the network improves.
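What might "detailed logs for every agent action" look like in practice? A minimal sketch using only Python's standard library; the event fields are illustrative assumptions, not a real Outshift schema.

```python
import json
import time

# Hypothetical audit trail for agent actions: append-only, structured,
# and reviewable, so a user can inspect decisions and ensure compliance.

class AgentAuditLog:
    def __init__(self) -> None:
        self.events: list[dict] = []

    def record(self, agent: str, action: str, rationale: str, confidence: float) -> None:
        self.events.append({
            "ts": time.time(),
            "agent": agent,
            "action": action,
            "rationale": rationale,   # the "why", surfaced for review
            "confidence": confidence,
        })

    def review(self, min_confidence: float = 0.0) -> str:
        """Return a human-readable slice of the trail for compliance review."""
        rows = [e for e in self.events if e["confidence"] >= min_confidence]
        return json.dumps(rows, indent=2)

log = AgentAuditLog()
log.record("summarizer", "drafted case summary", "14 unread comments", 0.82)
log.record("scheduler", "proposed meeting", "all attendees free at 10:00", 0.64)
print(log.review(min_confidence=0.5))
```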
As we refine our design principles and push the boundaries of innovation, integrating advanced AI capabilities comes with a critical responsibility. For AI to become a trusted collaborator—rather than just a tool—we must design with transparency, clear guardrails, and a focus on building trust. Ensuring AI agents operate with accountability and adaptability will be key to fostering effective human-agent collaboration. By designing with intention, we can shape a future where AI not only enhances workflows and decision-making but also empowers human potential in ways that are ethical, reliable, and transformative.
Because in the end, the success of AI won’t be measured by its autonomy alone—but by how well it works with us to create something greater than either humans or machines could achieve alone.