Designing for the Unknown:

Why Customer Research Falls Short in Emerging Technology

How do you ask customers about a future they can’t imagine? In emerging technology, customer research often falls short. Designing for the unknown requires embracing uncertainty, exploring possibilities, and rethinking how we innovate.



What happens when the thing you’re designing is so new that customers don’t even know what to ask for?

How do you ask customers about a future they can’t imagine?

In well-established markets, customer research is a powerful tool: it helps refine existing products, prioritize fixes, and identify opportunities for incremental innovation. But when it comes to designing entirely new products or features, especially in fast-evolving areas like AI, traditional customer research can lead us off a cliff.

The reason is simple: customers excel at articulating their current needs and frustrations, but they struggle to imagine solutions to problems they don't fully understand yet. I have seen this firsthand, and the gap becomes even more pronounced when the technology is so new that its potential impact, and the problems it can solve, are not yet clear.


The Problem with "Customer-Led" Innovation in Emerging Areas

Take AI tools: the field is moving fast but still in its early stages, and both the market and the technology are only starting to be understood. Asking customers for input on AI innovation can feel like asking someone to design a car before they've ever seen a wheel. The feedback you get tends to lead to safe, incremental solutions that don't fully leverage the transformative potential of the emerging technology.

Moreover, customers are often biased by their current experiences. They may focus on patching immediate pain points rather than exploring what’s possible. 

This creates a paradox: the feedback is real and valuable, but it’s not a reliable guide to solving the larger, more systemic problems that these technologies can address.




A Better Approach: Problem Space Exploration

In emerging technology, we need a different kind of research—one that emphasizes exploration over prescription. Rather than asking customers what they need or want, we should start by understanding the problem space:

Identify Unmet Needs: Look for gaps that customers might not articulate but that are evident through observation, data analysis, or industry trends.

Focus on Jobs to Be Done: Understand the broader objectives that customers are trying to achieve, even if they can’t envision how technology could help them get there.

Map the Ecosystem: Study how workflows, tools, and systems interact to uncover opportunities for disruption or augmentation.


Hypothesis-Driven Innovation

Once the problem space is well understood, the next step is to formulate hypotheses: educated guesses about how the technology can address the needs and challenges uncovered in the research. (A sketch of one way to capture such a hypothesis follows the list below.)

Articulate a Clear Hypothesis: Define the problem you’re solving, the target audience, and the unique value your product will bring.

Prototype Early: Build prototypes—not to perfect the solution, but to test assumptions and gather feedback.

Test and Iterate: Use insights from user testing to refine your understanding of the problem and validate or pivot your hypothesis.
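To make this concrete, here is a minimal sketch, in Python, of one way a team might capture a hypothesis as a structured record so it can be tested and revisited. The ProductHypothesis name and its fields are illustrative assumptions for this post, not a prescribed schema.

```python
from dataclasses import dataclass, field
from enum import Enum

class HypothesisStatus(Enum):
    UNTESTED = "untested"
    VALIDATED = "validated"
    PIVOTED = "pivoted"

@dataclass
class ProductHypothesis:
    """One testable bet about how the technology addresses a real need."""
    problem: str                # the problem we believe exists
    audience: str               # who experiences it
    value: str                  # the unique value the product will bring
    signals: list[str] = field(default_factory=list)  # evidence that would validate or falsify it
    status: HypothesisStatus = HypothesisStatus.UNTESTED

# Example: a hypothesis a team might take into early prototyping
h = ProductHypothesis(
    problem="Analysts spend hours reconciling data across disconnected tools",
    audience="Operations analysts at mid-size enterprises",
    value="An assistant that aggregates sources and flags inconsistencies automatically",
    signals=["Test users complete a reconciliation in under 10 minutes"],
)
```

Writing it down this way forces the team to state what evidence would validate, or kill, the idea before the first prototype is built.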

This approach shifts the focus from asking customers for solutions to engaging them as co-creators in the exploration process. It acknowledges that while customers may not know what they want, they can help you discover what works once you’ve presented them with possibilities.


Embracing Uncertainty

Designing in the context of emerging technology requires comfort with ambiguity. It’s about exploring the unknown, not following a predetermined path. While this can be challenging, it’s also where the most exciting innovations happen.

As design leaders, we must advocate for a process that embraces exploration, experimentation, and iteration. This means shifting from a customer-led approach to one that is grounded in curiosity and creativity.

So when it comes to customer research, instead of seeking validation, we should focus on discovery. Instead of asking customers to define the solution, we should collaborate with them to uncover the problem.

Bottom line: the path to innovation starts not with interview answers but with questions. By prioritizing problem space exploration and hypothesis-driven design, we can navigate the uncertainty of new markets and create products that solve meaningful problems—ones that customers may not even know they have yet.

Nowhere is this more true than in agentic AI. We are moving toward a world where AI agents can discover and authenticate one another, share complex information securely, and adapt to uncertainty while collaborating across different domains. Users will work with agents that pursue complex goals with limited direct supervision, acting autonomously on their behalf.

As a design team, we are actively shaping how we navigate this transformation. And one key question keeps emerging: How do we design AI experiences that empower human-machine teams, rather than just automate them?

The Agentic Teammate: Enhancing Knowledge Work

In this new world, AI agents become our teammates, offering powerful capabilities (a rough interface sketch follows this list):

Knowledge Synthesis: Agents aggregate and analyze data from multiple sources, offering fresh perspectives on problems.

Scenario Simulation: Agents can create hypothetical scenarios and test them in a virtual environment, allowing knowledge workers to experiment and assess risks.

Constructive Feedback: Agents critically evaluate human-proposed solutions, identifying flaws and offering constructive feedback.

Collaboration Orchestration: Agents work with other agents to tackle complex problems, acting as orchestrators of a broader agentic ecosystem.
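As a thought experiment, the four capabilities above could be read as the surface of a single agent-teammate interface. The sketch below is hypothetical: AgentTeammate and its method names are invented for illustration, not an existing API.

```python
from typing import Protocol

class AgentTeammate(Protocol):
    """Hypothetical interface mirroring the four capabilities above."""

    def synthesize(self, sources: list[str]) -> str:
        """Knowledge synthesis: aggregate and analyze data from multiple sources."""
        ...

    def simulate(self, scenario: str) -> dict:
        """Scenario simulation: run a what-if scenario in a virtual environment."""
        ...

    def critique(self, proposal: str) -> list[str]:
        """Constructive feedback: return flaws found in a human-proposed solution."""
        ...

    def orchestrate(self, goal: str, peers: list["AgentTeammate"]) -> str:
        """Collaboration orchestration: coordinate peer agents on a complex goal."""
        ...
```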


Addressing the Challenges: Gaps in Human-Agent Collaboration

All this autonomous help is great, sure – but it's not without its challenges.

Autonomous agents have fundamental gaps that we need to address to ensure successful collaboration. The design principles below are how we approach closing them.




What to Consider:
Five Design Principles for Human-Agent Collaboration

  1. Put Humans in the Driver's Seat

Users should always have the final say, with clear boundaries and intuitive controls to adjust agent behavior. An example of this is Google Photos' Memories feature which allows users to customize their slideshows and turn the feature off completely.
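As an illustrative sketch only (AgentControls and its fields are invented for this post, not any product's API), putting humans in the driver's seat can reduce to a small, user-owned settings object the agent must consult before every action:

```python
from dataclasses import dataclass

@dataclass
class AgentControls:
    """Hypothetical user-owned settings an agent must respect."""
    enabled: bool = True           # users can turn the feature off completely
    autonomy: str = "suggest"      # "suggest" | "act_with_approval" | "act"
    blocked_actions: tuple = ()    # hard boundaries the agent may never cross

def may_act(controls: AgentControls, action: str) -> bool:
    """The agent checks the user's controls before acting on its own."""
    if not controls.enabled or action in controls.blocked_actions:
        return False
    return controls.autonomy != "suggest"  # in suggest mode, only propose
```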

  2. Make the Invisible Visible

The AI's reasoning and decision-making processes should be transparent and easy to understand, with confidence levels or uncertainty displayed to set realistic expectations. North Face's AI shopping assistant exemplifies this by guiding users through a conversational process and providing clear recommendations.
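One hypothetical way to honor this principle is to make confidence part of the output type itself, so a recommendation cannot be rendered without its uncertainty. The names below are illustrative:

```python
from dataclasses import dataclass

@dataclass
class AgentRecommendation:
    """An answer that carries its own uncertainty."""
    answer: str
    confidence: float   # 0.0 to 1.0, always surfaced to the user
    reasoning: str      # plain-language trace of how the agent got here

def render(rec: AgentRecommendation) -> str:
    """Set realistic expectations by labeling low-confidence output."""
    label = "High confidence" if rec.confidence >= 0.8 else "Double-check this"
    return f"{label} ({rec.confidence:.0%}): {rec.answer}\nWhy: {rec.reasoning}"
```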

  3. Ensure Accountability

Anticipate edge cases to provide clear recovery steps, while empowering users to verify and adjust AI outcomes when needed. ServiceNow's Now Assist AI is designed to allow customer support staff to easily verify and adjust AI-generated insights and recommendations.
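A minimal sketch of what accountability could look like in code, assuming an invented AuditEntry record: every agent action is logged with enough context to review it later and a recovery step to undo it.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditEntry:
    """Hypothetical immutable record kept for every agent action."""
    timestamp: datetime
    action: str
    outcome: str
    undo_hint: str   # the clear recovery step if a human overrides this action

audit_log: list[AuditEntry] = []

def record(action: str, outcome: str, undo_hint: str) -> None:
    """Append an entry so decisions and workflows can be reviewed later."""
    audit_log.append(AuditEntry(datetime.now(timezone.utc), action, outcome, undo_hint))
```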

  4. Collaborate, Don't Just Automate

Prioritize workflows that integrate human and AI capabilities, designing intuitive handoffs to ensure smooth collaboration. Aisera HR Agents demonstrate this by assisting with employee inquiries while escalating complex issues to human HR professionals.
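The handoff itself can be made explicit. This sketch, with invented names and thresholds, shows the shape of the logic: the agent resolves routine requests and escalates anything complex or low-confidence to a person with full context.

```python
def handle_inquiry(inquiry: str, complexity: float, confidence: float) -> str:
    """Hypothetical routing: the agent takes simple cases, humans take the rest."""
    if complexity > 0.7 or confidence < 0.6:  # illustrative thresholds
        return f"Escalated to a human specialist with full context: {inquiry!r}"
    return f"Resolved by agent: {inquiry!r}"
```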

  5. Earn Trust Through Consistency

Build trust gradually with reliable results in low-risk use cases, making reasoning and actions transparent. ServiceNow's Case Summarization tool is an example of using AI in a low-risk scenario to gradually build user trust in the system's capabilities.

Designing Tomorrow's Human-Agent Collaboration At Outshift

These principles are the foundation for building effective partnerships between humans and AI at Outshift.


Empowering Users with Control

Establish clear boundaries for AI agents to ensure they operate within a well-defined scope.

Building Confidence Through Clarity

Surface AI reasoning, displaying confidence levels, realistic expectations, and the extent of changes to enable informed decision-making.

Always Try To Amplify Human Potential

Actively collaborate through simulations and come to an effective outcome together.

Let Users Stay In Control When It Matters

Provide easy access to detailed logs and performance metrics for every agent action, enabling review of decisions and workflows and ensuring compliance. Include clear recovery steps for seamless continuity.

Take It One Interaction at a Time

See agent actions in context and observe how agent performance improves across the network.

As we refine our design principles and push the boundaries of innovation, integrating advanced AI capabilities comes with a critical responsibility. For AI to become a trusted collaborator—rather than just a tool—we must design with transparency, clear guardrails, and a focus on building trust. Ensuring AI agents operate with accountability and adaptability will be key to fostering effective human-agent collaboration. By designing with intention, we can shape a future where AI not only enhances workflows and decision-making but also empowers human potential in ways that are ethical, reliable, and transformative.

Because in the end, the success of AI won’t be measured by its autonomy alone—but by how well it works with us to create something greater than either humans or machines could achieve alone.
