Designing for States, Not Screens: What OpenAI’s Agent Push Means for UX

For me, two announcements stood out from OpenAI’s 2025 DevDay:

  • Apps inside ChatGPT, which allow developers to embed fully interactive apps within the conversational interface.


  • AgentKit, a toolkit for composing, deploying, and evaluating agents with connectors, workflows, and visual authoring tools.


Looking at them together, they point to a single idea:

OpenAI is building the operating system for agentic interaction. Apps and tools will no longer live beside the agent; they’ll live inside it.


From Screens to States

In the good ol’ days of traditional UX, we designed flows, pages, and transitions between static views. Agentic systems design replaces that with something more fluid: state.

Every step of an agent’s reasoning, every handoff to a tool, every change in context, every uncertainty, is a state.
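
To make that concrete, here’s a minimal TypeScript sketch of what “state” could mean at the component level. The state names and fields are my own illustration, not OpenAI’s API:

```typescript
// A minimal sketch of agent UI state as a discriminated union.
// Names and fields are illustrative, not from any OpenAI SDK.
type AgentState =
  | { kind: "idle" }
  | { kind: "thinking"; summary: string }                 // what the agent is reasoning about
  | { kind: "executing"; tool: string; input: unknown }   // handoff to a tool
  | { kind: "waiting"; question: string }                 // needs user input
  | { kind: "paused" }
  | { kind: "error"; message: string; recovery: string[] }; // recovery options, not a dead end

// Exhaustive rendering: the compiler forces the UI to handle every state,
// so no mode can silently fall through to a generic spinner.
function statusLabel(state: AgentState): string {
  switch (state.kind) {
    case "idle": return "Ready";
    case "thinking": return `Thinking: ${state.summary}`;
    case "executing": return `Running ${state.tool}…`;
    case "waiting": return `Needs input: ${state.question}`;
    case "paused": return "Paused";
    case "error": return `Failed: ${state.message} (${state.recovery.length} ways to recover)`;
  }
}
```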

I think we all know our job as designers today isn’t just to make things look good or flow smoothly; it’s to help people understand what’s happening and give them clear ways to shape or redirect it.

Picture a travel planning agent (like the one OpenAI is demoing for Agent Builder).

A user might say:

“Plan a three-city trip across Europe in July with museum stops and a tight budget.”

The agent gets to work, comparing destinations, mapping itineraries, and balancing cost and timing.

Then the user pivots:

“Actually, drop Berlin, add Lyon instead.”

The interface doesn’t reset. It updates in place.

The map shifts, suggestions adjust, and a friendly note confirms the change; you’re still in your “trip plan,” just with a new twist.



That’s stateful design: preserving continuity, staying transparent, and helping people feel in control even as the system evolves behind the scenes.
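
Under the hood, that pivot is a state update, not a reset. A minimal sketch, assuming a hypothetical TripPlan shape:

```typescript
// Hypothetical trip-plan state; field names are illustrative.
interface TripPlan {
  cities: string[];
  month: string;
  budget: "tight" | "normal" | "flexible";
  interests: string[];
}

type TripEdit =
  | { type: "add_city"; city: string }
  | { type: "drop_city"; city: string };

// Update in place: each edit derives a new plan from the old one.
// Nothing resets; month, budget, and interests carry over untouched.
function applyEdit(plan: TripPlan, edit: TripEdit): TripPlan {
  switch (edit.type) {
    case "add_city":
      return { ...plan, cities: [...plan.cities, edit.city] };
    case "drop_city":
      return { ...plan, cities: plan.cities.filter((c) => c !== edit.city) };
  }
}

// "Actually, drop Berlin, add Lyon instead."
let plan: TripPlan = {
  cities: ["Berlin", "Prague", "Vienna"],
  month: "July",
  budget: "tight",
  interests: ["museums"],
};
plan = applyEdit(plan, { type: "drop_city", city: "Berlin" });
plan = applyEdit(plan, { type: "add_city", city: "Lyon" });
// plan.cities is now ["Prague", "Vienna", "Lyon"]; everything else is preserved.
```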


Designing for Agentic Continuity

When conversation, computation, and UI are fused, UX becomes the bridge between what’s understood and what’s visible. Our work shifts from arranging layouts to making the invisible visible, giving users a sense of where they are in the process and how to steer it.

OpenAI’s direction reinforces the focus we’ve already had on human-agent communication. Component systems need to:

  • Show dynamic modes clearly. Components like chat bubbles, tool cards, or tables should signal whether the agent is thinking, executing, paused, waiting, or handling an error.

  • Handle smooth transitions. Switching between chat and embedded modules such as maps, tables, or forms should feel natural, not like hopping between apps.

  • Support partial results. Agents should share progress as they go, not disappear into processing. Users should always know the system is “alive.”

  • Treat fallback and recovery as design patterns. When something fails, show what happened and offer next steps. Don’t hide it behind a spinner. (A sketch of these last two patterns follows this list.)
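
Here’s a rough sketch of those last two patterns, using an invented StepProgress shape: partial results stay visible as they stream in, and failures surface next steps instead of hiding behind a spinner:

```typescript
// Illustrative shape for streaming partial results from an agent step.
interface StepProgress<T> {
  label: string;   // e.g. "Comparing destinations"
  partial: T[];    // results so far (shown, not hidden)
  done: boolean;
  error?: { message: string; nextSteps: string[] };
}

// Render progress as plain text; a real component system would map this
// to chat bubbles, tool cards, or tables.
function renderStep(step: StepProgress<string>): string {
  if (step.error) {
    return [
      `✗ ${step.label} failed: ${step.error.message}`,
      // Recovery is part of the UI, not an afterthought:
      ...step.error.nextSteps.map((s) => `  → ${s}`),
    ].join("\n");
  }
  const status = step.done ? "✓ done" : `… (${step.partial.length} so far)`;
  return [`${step.label} ${status}`, ...step.partial.map((r) => `  • ${r}`)].join("\n");
}

// Mid-run, the user sees progress, not a black box:
console.log(renderStep({
  label: "Comparing destinations",
  partial: ["Prague: strong museum scene", "Vienna: over budget in July"],
  done: false,
}));
```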

Agentic continuity is about keeping the conversation human, fluid, forgiving, and clear. It also sets the stage for how we think about collaboration. As the agent becomes a participant in the experience, designers must make its reasoning, actions, and boundaries visible so people can stay oriented and involved.


Grounding in Human-Agent Design Principles

At Outshift, we’ve been exploring how humans and agents can collaborate through our HAX principles: Control, Clarity, Recovery, Collaboration, and Traceability.

They’re not theoretical; they’re the foundation for making this new agentic world usable and trustworthy.

Each principle pairs what it means with how it applies now:

  • Control: Users should always feel in charge. Every agent action, from invoking a tool to booking a service, needs clear stop, cancel, or edit options. Users should steer, not spectate.

  • Clarity: The system’s intent and state should be obvious. Replace vague “thinking…” messages with clear indicators of what’s happening and why. Context builds confidence.

  • Recovery: People need easy ways to fix mistakes. Let users roll back, retry, or tweak input without losing context. Treat recovery as part of the flow, not an afterthought.

  • Collaboration: Agents are partners, not assistants. Build for co-creation: ask clarifying questions, show reasoning, invite input. It’s a dialogue, not a command line.

  • Traceability: Every decision should be explainable. Give people visibility into what data, tools, and reasoning paths were used. Transparency is how agents earn trust. (A data-level sketch follows this list.)
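
As a rough illustration of what Traceability and Recovery could mean at the data level, here’s a hypothetical trace-entry shape (my own invention, not from any OpenAI SDK):

```typescript
// Hypothetical trace entry: enough to answer "what data, which tools, why?"
interface TraceEntry {
  at: string;            // ISO timestamp
  action: string;        // e.g. "searched_hotels"
  toolsUsed: string[];
  dataSources: string[];
  reasoning: string;     // short, human-readable rationale
  reversible: boolean;   // feeds the Recovery principle
}

// A user-facing explanation derived from the trace, not a raw log dump.
function explain(entry: TraceEntry): string {
  return (
    `${entry.action}: used ${entry.toolsUsed.join(", ")} ` +
    `over ${entry.dataSources.join(", ")} because ${entry.reasoning}` +
    (entry.reversible ? " (you can undo this)" : " (this cannot be undone)")
  );
}
```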

Together, these principles turn the UX into a shared workspace where humans and agents think and act together.


Where OpenAI Is Headed and What Designers Need to Consider

OpenAI’s roadmap, built on AgentKit, ChatKit, and the Apps SDK, lays the foundation for a new interaction model where state, not screen, defines the experience.

For designers, that means:

  • Building state-aware component systems that adapt as the agent works.

  • Embedding control and clarity into every moment of interaction.

  • Designing for error, ambiguity, and recovery as part of the journey.

  • Treating traceability and collaboration as essential design primitives.

So, as OpenAI pushes apps and tools inside the agent, the takeaway is to always design for contextual understandability: help users follow what’s happening and maintain trust through shifting states, not screens.




Designing for the Unknown

Zoom out, and this gives us a world where AI agents can discover and authenticate one another, share complex information securely, and adapt to uncertainty while collaborating across different domains. Users will be working with agents that pursue complex goals with limited direct supervision, acting autonomously on their behalf.

As a design team, we are actively shaping how we navigate this transformation. And one key question keeps emerging: How do we design AI experiences that empower human-machine teams, rather than just automate them?

The Agentic Teammate: Enhancing Knowledge Work

In this new world, AI agents become our teammates, offering powerful capabilities:

  • Knowledge Synthesis: Agents aggregate and analyze data from multiple sources, offering fresh perspectives on problems.

  • Scenario Simulation: Agents can create hypothetical scenarios and test them in a virtual environment, allowing knowledge workers to experiment and assess risks.

  • Constructive Feedback: Agents critically evaluate human-proposed solutions, identifying flaws and offering constructive feedback.

  • Collaboration Orchestration: Agents work with other agents to tackle complex problems, acting as orchestrators of a broader agentic ecosystem. (A sketch of these capabilities as an interface follows this list.)
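
As a sketch, those four capabilities could surface as a single interface. The names and signatures are purely illustrative:

```typescript
// Hypothetical interface; method names mirror the four capabilities above.
interface AgentTeammate {
  // Knowledge synthesis: aggregate sources into a fresh perspective.
  synthesize(sources: string[], question: string): Promise<string>;
  // Scenario simulation: test a hypothetical and report outcome and risks.
  simulate(scenario: string): Promise<{ outcome: string; risks: string[] }>;
  // Constructive feedback: critique a human-proposed solution.
  critique(proposal: string): Promise<string[]>;
  // Collaboration orchestration: delegate a task across peer agents.
  delegate(task: string, peers: AgentTeammate[]): Promise<string>;
}
```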


Addressing the Challenges: Gaps in Human-Agent Collaboration

All this autonomous help is great, sure – but it's not without its challenges.

Autonomous agents have fundamental gaps that we need to address to ensure successful collaboration:


  • Probabilistic Operations: AI agents work with probabilities, leading to inconsistent outcomes and misinterpretations of intent. (See the sketch after this list for one way to handle this.)

  • Trust Over Time: Humans tend to trust AI teammates less than human teammates, making it crucial to build that trust over time.

  • Gaps in Contextual Understanding: AI agents often share raw data instead of contextual states, and may miss human nuances like team dynamics and intuition.

  • Challenges in Mental Models: Evolving AI systems can be difficult for humans to understand and keep up with, as the AI’s logic may not align with human mental models.
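
One way to design for the first gap is to route every agent action by confidence instead of letting the agent always act. A sketch with illustrative thresholds:

```typescript
// Because agents operate on probabilities, route actions by confidence
// instead of always acting. Thresholds here are illustrative, not tuned.
type Route = "act" | "confirm_with_user" | "escalate_to_human";

function routeByConfidence(confidence: number, highRisk: boolean): Route {
  if (highRisk) {
    // High-risk actions never run unattended, however confident the agent is.
    return confidence >= 0.95 ? "confirm_with_user" : "escalate_to_human";
  }
  if (confidence >= 0.9) return "act";
  if (confidence >= 0.6) return "confirm_with_user";
  return "escalate_to_human";
}

// Showing the confidence alongside the routing decision also addresses the
// mental-model gap: users see why the agent paused to ask.
console.log(routeByConfidence(0.72, false)); // "confirm_with_user"
```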

The Solution: Five Design Principles for Human-Agent Collaboration

  1. Put Humans in the Driver's Seat

Users should always have the final say, with clear boundaries and intuitive controls to adjust agent behavior. An example is Google Photos’ Memories feature, which lets users customize their slideshows or turn the feature off completely. (A boundary-check sketch follows this list.)

  2. Make the Invisible Visible

The AI’s reasoning and decision-making processes should be transparent and easy to understand, with confidence levels or uncertainty displayed to set realistic expectations. The North Face’s AI shopping assistant exemplifies this by guiding users through a conversational process and providing clear recommendations.

  3. Ensure Accountability

Anticipate edge cases to provide clear recovery steps, while empowering users to verify and adjust AI outcomes when needed. ServiceNow's Now Assist AI is designed to allow customer support staff to easily verify and adjust AI-generated insights and recommendations.

  4. Collaborate, Don't Just Automate

Prioritize workflows that integrate human and AI capabilities, designing intuitive handoffs to ensure smooth collaboration. Aisera HR Agents demonstrate this by assisting with employee inquiries while escalating complex issues to human HR professionals.

  5. Earn Trust Through Consistency

Build trust gradually with reliable results in low-risk use cases, making reasoning and actions transparent. ServiceNow's Case Summarization tool is an example of using AI in a low-risk scenario to gradually build user trust in the system's capabilities.
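
The first principle can be made mechanical: check every proposed action against user-set boundaries before it runs. A minimal sketch, with invented scope and field names:

```typescript
// Sketch of a boundary check before any agent action executes.
// Scope and action names are hypothetical, for illustration only.
interface Boundaries {
  allowedActions: Set<string>;
  maxSpendUsd: number;
}

type Decision =
  | { allowed: true }
  | { allowed: false; reason: string };

function checkBoundaries(
  action: { name: string; spendUsd?: number },
  bounds: Boundaries,
): Decision {
  if (!bounds.allowedActions.has(action.name)) {
    return { allowed: false, reason: `"${action.name}" is outside the agreed scope` };
  }
  if ((action.spendUsd ?? 0) > bounds.maxSpendUsd) {
    return { allowed: false, reason: `exceeds the ${bounds.maxSpendUsd} USD limit` };
  }
  return { allowed: true };
}

// The agent may search and draft, but never book or spend:
const bounds: Boundaries = {
  allowedActions: new Set(["search_flights", "draft_itinerary"]),
  maxSpendUsd: 0,
};
checkBoundaries({ name: "book_hotel", spendUsd: 420 }, bounds);
// { allowed: false, reason: '"book_hotel" is outside the agreed scope' }
```

A blocked action shouldn’t be a dead end: surface the reason and offer to adjust the boundaries, edit the action, or cancel, so the user steers rather than spectates.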

Designing Tomorrow's Human-Agent Collaboration at Outshift

These principles are the foundation for building effective partnerships between humans and AI at Outshift.

Empowering Users with Control

Establish clear boundaries for AI agents to ensure they operate within a well-defined scope.

Building Confidence Through Clarity

Surface AI reasoning by displaying confidence levels, realistic expectations, and the extent of changes, enabling informed decision-making.

Always Try to Amplify Human Potential

Actively collaborate through simulations and come to an effective outcome together.

Let Users Stay in Control When It Matters

Provide easy access to detailed logs and performance metrics for every agent action, so users can review decisions and workflows and ensure compliance. Include clear recovery steps for seamless continuity.

Take It One Interaction at a Time

See agent actions in context and observe agent performance as the agent network improves.


As we refine our design principles and push the boundaries of innovation, integrating advanced AI capabilities comes with a critical responsibility. For AI to become a trusted collaborator—rather than just a tool—we must design with transparency, clear guardrails, and a focus on building trust. Ensuring AI agents operate with accountability and adaptability will be key to fostering effective human-agent collaboration. By designing with intention, we can shape a future where AI not only enhances workflows and decision-making but also empowers human potential in ways that are ethical, reliable, and transformative.

Because in the end, the success of AI won’t be measured by its autonomy alone—but by how well it works with us to create something greater than either humans or machines could achieve alone.

