Designing for the Unknown:

Mapping User Journeys for Emerging Technology

In emerging technology, journey mapping needs to shift from identifying existing workflows to hypothesis building and exploration. This helps designers envision future possibilities, test assumptions, and guide users toward discovering value in solutions they couldn’t imagine.


When designing for emerging technology—especially in zero-to-one products—journey mapping shifts from being a tool for identifying existing pain points to a method for hypothesis building and exploration.

For a lot of what we're working on, there are no established workflows, no well-worn paths, and, often, no users who can articulate what they need. That’s because when you’re designing something entirely new—say, an AI-deployment tool—you’re creating a future users can’t yet imagine.

My team and I call this designing for the unknown, and it demands a different mindset and approach, especially to journey mapping.

If you’ve ever tried to ask potential users about a future technology—“How would you use an AI agent to do X?”—you'll likely hear a response that fits their limited mental model. That’s because customer research falls short when the problem space is unfamiliar or the solutions are hard to imagine. In emerging tech, journey mapping becomes a way to explore and test hypotheses about what users might need, how they might interact with your product, and where value can emerge.

When the future is undefined, journey mapping becomes a way to collaboratively imagine possibilities and validate assumptions.


Why All Journey Mapping Isn't The Same

In established products, user journey maps highlight existing workflows and pain points—things users already do. But in emerging technology, there are few, if any, existing workflows. Users don’t know what’s possible. They can’t point to a process because it doesn’t yet exist.

The risk here is building a product based on assumptions that feel “obvious” but don’t actually align with real user behavior. As I wrote in Designing for the Unknown: Why Customer Research Falls Short in Emerging Technology, asking customers to describe a future they can’t yet imagine almost always leads to unhelpful or misleading insights.

So the goal of journey mapping in these cases is not to document current tasks but to imagine, test, and validate the workflows and interactions of the future.


A Framework for Hypothesis-Driven Journey Mapping

1. Always Understand The Problem Space

In zero-to-one products, everything starts with the problem. You might not know the exact workflows, but you can define the user’s underlying goals, frustrations, and constraints.

What problem are we trying to solve?
Who experiences this problem? (Early hypotheses about your user personas.)
Why hasn’t it been solved yet?

Example: If you’re designing an AI-powered network triage tool, the problem might be that network teams struggle to predict and mitigate system issues before they happen.

Tip: Frame problems as opportunities. Instead of asking users what they want, ask what’s hard, what’s broken, and what an ideal outcome might look like.

2. Create Hypothetical User Journeys

Here’s where you shift from documenting existing behavior to designing future workflows. Start by brainstorming hypothetical scenarios:

What tasks would users try to accomplish with your product?
What new workflows might emerge if the technology solved their problems?
What moments could make or break the user’s success?

At this stage, your journeys are educated guesses—working hypotheses that you’ll test later.

Example Hypothesis: For an AI-powered tool, “Users will trust the system to autonomously surface problems and recommend solutions without requiring manual data analysis.”

3. Test Hypotheses with Prototypes

Lay out the hypothesis in stages of the journey, imagining how users might interact with your product. At this point we build low-fidelity prototypes that simulate the workflows we've imagined.

Now go see how your audience responds to your hypothesis:

Can users understand the journey?
Do they trust the interactions? (Especially critical for AI-based systems.)
Are there steps that feel unnecessary or confusing?

And instead of asking users “What do you need?” focus on observing their reactions to the prototype. Watch for hesitation, confusion, or unexpected workarounds.

Example: In an AI diagnostics tool, do users trust the system’s suggestions, or do they hunt around looking for a way to double-check its conclusions?

4. Iterate and Refine

Emerging tech is inherently ambiguous. Your first hypothesis won’t be perfect; it may even suck, and that’s fine. Use insights from testing to refine your journeys:

What assumptions were wrong?
What steps need to be reimagined?
Where did users experience friction or lack trust?

Over time, your hypothetical journeys will evolve into validated workflows that deliver real value.
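If it helps to make each assumption explicit and trackable, the four steps above can be sketched as a small data structure: each journey stage carries the hypothesis behind it and the evidence gathered while testing. This is purely a toy illustration; every stage name and hypothesis below is hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Toy sketch: a journey stage pairs an imagined step with the
# hypothesis behind it, so nothing ships as an unexamined assumption.
@dataclass
class JourneyStage:
    name: str                         # a step in the imagined journey
    hypothesis: str                   # what we believe users will do or feel here
    validated: Optional[bool] = None  # None = untested; set after prototype testing
    observations: List[str] = field(default_factory=list)

def untested(journey: List[JourneyStage]) -> List[JourneyStage]:
    """Return the stages whose hypotheses still need prototype testing."""
    return [stage for stage in journey if stage.validated is None]

# A hypothetical AI network-triage journey:
journey = [
    JourneyStage("Surface problem", "Users trust autonomous issue detection"),
    JourneyStage("Review recommendation", "Users want a way to double-check the AI"),
]

# After a round of prototype testing, record what we learned:
journey[0].validated = False
journey[0].observations.append("Users opened raw logs to verify the alert")

print(len(untested(journey)))  # stages still awaiting a test
```

Keeping hypotheses as data rather than prose makes the iterate-and-refine loop concrete: each testing round either validates a stage or sends it back to be reimagined.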

Journey Map to Explore and Just Go For It

When building products in emerging technology, user journey mapping is less about documenting the present and more about imagining the future. It’s a tool for exploration, hypothesis building, and iteration—a way to align teams and test ideas before you invest in building complex systems.

Users can’t always articulate what they need in a future they can’t yet envision. Your job as a designer is to surface possibilities, create tangible prototypes, and guide users toward discovering what’s valuable—together.

All of this points to a world where AI agents can discover and authenticate one another, share complex information securely, and adapt to uncertainty while collaborating across different domains. Users will work with agents that pursue complex goals with limited direct supervision, acting autonomously on their behalf.

As a design team, we are actively shaping how we navigate this transformation. And one key question keeps emerging: How do we design AI experiences that empower human-machine teams, rather than just automate them?

The Agentic Teammate: Enhancing Knowledge Work

In this new world, AI agents become our teammates, offering powerful capabilities:

Knowledge Synthesis: Agents aggregate and analyze data from multiple sources, offering fresh perspectives on problems.

Scenario Simulation: Agents can create hypothetical scenarios and test them in a virtual environment, allowing knowledge workers to experiment and assess risks.

Constructive Feedback: Agents critically evaluate human-proposed solutions, identifying flaws and offering constructive feedback.

Collaboration Orchestration: Agents work with other agents to tackle complex problems, acting as orchestrators of a broader agentic ecosystem.


Addressing the Challenges: Gaps in Human-Agent Collaboration

All this autonomous help is great, sure – but it's not without its challenges.

Autonomous agents have fundamental gaps that we need to address to ensure successful collaboration.





The Solution: Five Design Principles for Human-Agent Collaboration

1. Put Humans in the Driver's Seat

Users should always have the final say, with clear boundaries and intuitive controls to adjust agent behavior. An example of this is Google Photos' Memories feature, which allows users to customize their slideshows and turn the feature off completely.

2. Make the Invisible Visible

The AI's reasoning and decision-making processes should be transparent and easy to understand, with confidence levels or uncertainty displayed to set realistic expectations. North Face's AI shopping assistant exemplifies this by guiding users through a conversational process and providing clear recommendations.

3. Ensure Accountability

Anticipate edge cases to provide clear recovery steps, while empowering users to verify and adjust AI outcomes when needed. ServiceNow's Now Assist AI is designed to allow customer support staff to easily verify and adjust AI-generated insights and recommendations.

4. Collaborate, Don't Just Automate

Prioritize workflows that integrate human and AI capabilities, designing intuitive handoffs to ensure smooth collaboration. Aisera HR Agents demonstrate this by assisting with employee inquiries while escalating complex issues to human HR professionals.

5. Earn Trust Through Consistency

Build trust gradually with reliable results in low-risk use cases, making reasoning and actions transparent. ServiceNow's Case Summarization tool is an example of using AI in a low-risk scenario to gradually build user trust in the system's capabilities.

Designing Tomorrow's Human-Agent Collaboration At Outshift

These principles are the foundation for building effective partnerships between humans and AI at Outshift.


Empowering Users with Control

Establish clear boundaries for AI agents so they operate within a well-defined scope.

Building Confidence Through Clarity

Surface AI reasoning, displaying:

Confidence levels

Realistic expectations

Extent of changes to enable informed decision-making

Always Try To Amplify Human Potential

Actively collaborate through simulations and come to an effective outcome together.

Let Users Stay In Control When It Matters

Easy access to detailed logs and performance metrics for every agent action enables users to review decisions and workflows and ensure compliance. Include clear recovery steps for seamless continuity.

Take It One Interaction At A Time

See agent actions in context and observe agent performance in network improvement.

As we refine our design principles and push the boundaries of innovation, integrating advanced AI capabilities comes with a critical responsibility. For AI to become a trusted collaborator—rather than just a tool—we must design with transparency, clear guardrails, and a focus on building trust. Ensuring AI agents operate with accountability and adaptability will be key to fostering effective human-agent collaboration. By designing with intention, we can shape a future where AI not only enhances workflows and decision-making but also empowers human potential in ways that are ethical, reliable, and transformative.

Because in the end, the success of AI won’t be measured by its autonomy alone—but by how well it works with us to create something greater than either humans or machines could achieve alone.

Follow the Future of Design

No spam, just some good stuff
