Designing for the Unknown:

Generative Thinking: Moving Beyond the Limits of Design Thinking

Generative thinking moves beyond the structured, linear approach of design thinking. It embraces the complexity of emerging technologies, focusing on iterative exploration, dynamic collaboration, and systems-level solutions to uncover transformative possibilities.


Leading a team tasked with designing complex AI infrastructure, tooling, and solutions means navigating a ton of ambiguity, rapid evolution, and unprecedented complexity. And I’ve seen firsthand the promise—and the pitfalls—of traditional innovation frameworks, including design thinking. It's a challenge that demands more than process-driven thinking—it requires a mindset that thrives on exploration and cross-disciplinary collaboration.

This is where generative thinking has really shifted how my team and I approach design and innovation.


When Design Thinking Stopped Working

In the early days of our AI product development, we leaned heavily on design thinking. After all, it was the framework we were taught to trust: empathize, ideate, prototype, test. And for certain projects—especially those focused on user-facing experiences—it worked well. But as we delved deeper into building the infrastructure and tools that power AI systems, we started to hit walls.

Complexity Exposed the Framework’s Limits: AI solutions are rarely linear. Building tools that enable scalable AI pipelines or designing interfaces for engineers managing neural networks isn’t something you can map neatly onto a step-by-step process. Design thinking felt too rigid and formulaic for the chaos of emerging technology.

Misalignment Across Teams: Product and engineering often felt like they were on the sidelines, brought in to validate or implement ideas rather than co-create them. The separation between “designing” and “building” created inefficiencies and friction.

Incrementalism Over Transformation: Our outputs—no matter how polished—often felt like incremental improvements. While they solved immediate user problems, they didn’t challenge the status quo or push the boundaries of what was possible in our field.

These shortcomings led me to ask: How do we design for the unknown? How do we create tools and systems for technologies that are evolving faster than our processes?


Embracing Generative Thinking

Generative thinking became our answer. It wasn’t something we implemented overnight; it was a mindset we developed organically as we struggled to adapt to the demands of AI development. Here’s what it means for us—and why it works.


  1. Exploration Over Definition: Generative thinking freed us from the constraints of predefined outcomes. Instead of narrowing in on a single problem to solve, we started by exploring possibilities—using generative AI tools to augment our creativity and collaborative workshops to surface ideas from every corner of the team. For example, when designing a new AI model management tool, we didn’t start with assumptions about user workflows. Instead, we used systems thinking to map out how data scientists, engineers, and decision-makers interacted with the ecosystem. This allowed us to uncover pain points and opportunities we hadn’t considered.


  2. Deep Collaboration With Product and Engineering: Like design thinking, generative thinking only works when everyone has a seat at the table. In our team, design, product, and engineering work together from day one. Engineers share the technical constraints and possibilities that shape our ideas, while product managers keep us aligned with business goals. This cross-pollination ensures that the solutions we generate are both innovative and feasible.


  3. Leveraging Generative AI: In the past, we might have used placeholder content like lorem ipsum in prototypes, which often left product and engineering teams guessing about the real impact of a design. Today, generative tools allow us to fill those gaps with meaningful, contextual outputs. By using the right prompts, we can quickly generate realistic content, workflows, or scenarios that clarify our intent, enabling better understanding, alignment, and collaboration across teams. (A rough sketch of this idea follows this list.)


  4. Focus on Systems Thinking: AI products don’t exist in isolation; they’re part of complex ecosystems. Generative thinking forces us to look beyond the immediate problem and understand how each component interacts within the broader system. This holistic perspective has been critical in designing infrastructure tools that not only solve current challenges but also scale for future needs.
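
To make point 3 concrete, here’s a minimal, hypothetical sketch of the idea: prompt a generative model for contextual placeholder content instead of dropping in lorem ipsum. The generate_text helper and the prompt wording below are illustrative stand-ins, not the tools we actually use; the point is that the prompt carries the persona and scenario, so the placeholder reflects real use.

```python
# Hypothetical sketch: using a generative model to produce contextual placeholder
# content for a prototype instead of lorem ipsum. generate_text() is a stand-in
# for whichever model or API a team actually uses.

def generate_text(prompt: str) -> str:
    """Stand-in for a call to a generative model; swap in your model of choice."""
    return f"[generated from prompt: {prompt[:60]}...]"

def placeholder_for(component: str, persona: str, scenario: str) -> str:
    """Build a prompt that yields realistic, contextual copy for a specific screen."""
    prompt = (
        f"Write realistic UI copy for the '{component}' component of an AI model "
        f"management tool. The user is a {persona} who is {scenario}. "
        "Keep it under 40 words and use the vocabulary that persona expects."
    )
    return generate_text(prompt)

# Example: fill a prototype's alert banner with content a data scientist would actually see.
alert_copy = placeholder_for(
    component="pipeline failure alert",
    persona="data scientist",
    scenario="debugging a training run that failed overnight",
)
print(alert_copy)
```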


Real Value, Not Buzzwords

The value of generative thinking has transformed how we work and what we deliver. For my team, it has meant faster innovation, stronger alignment, and better outcomes.

By generating and testing ideas in parallel, we’ve dramatically shortened our innovation cycles. On one project, we cut the prototyping timeline in half by leveraging generative tools to explore multiple workflows simultaneously. This approach not only sped up the process but also helped us uncover stronger solutions earlier.

Generative thinking has also improved alignment across teams. Collaboration is no longer just smoother—it’s more meaningful. Everyone feels ownership over the solutions we’re building, creating a stronger sense of buy-in and driving better execution.

Perhaps most importantly, generative thinking has given us the freedom to aim higher. Instead of optimizing for the status quo, we’re now designing tools that challenge how AI teams work. The solutions we create make their processes faster, more intuitive, and more impactful.


A Better Mindset for Emerging Technology

Generative thinking isn’t a rejection of design thinking—it’s an evolution. It recognizes that the challenges of emerging technologies demand more flexibility, creativity, and collaboration than traditional frameworks provide. As a design leader, my role isn’t just to guide my team through the creative process; it’s to create the conditions where generative thinking can thrive, where design, product, and engineering come together to build not just tools, but the future.

So, when I hear skepticism about yet another "buzzword," I get it. But generative thinking isn’t just talk—it’s how we’re delivering real, measurable value in one of the most complex, fast-moving fields out there. And that’s worth paying attention to.

We’re moving toward a world where AI agents can discover and authenticate one another, share complex information securely, and adapt to uncertainty while collaborating across different domains. Users will be working with agents that pursue complex goals with limited direct supervision, acting autonomously on their behalf.

As a design team, we are actively shaping how we navigate this transformation. And one key question keeps emerging: How do we design AI experiences that empower human-machine teams, rather than just automate them?

The Agentic Teammate: Enhancing Knowledge Work

In this new world, AI agents become our teammates, offering powerful capabilities:

Knowledge Synthesis: Agents aggregate and analyze data from multiple sources, offering fresh perspectives on problems.

Scenario Simulation: Agents can create hypothetical scenarios and test them in a virtual environment, allowing knowledge workers to experiment and assess risks.

Constructive Feedback: Agents critically evaluate human-proposed solutions, identifying flaws and offering constructive feedback.

Collaboration Orchestration: Agents work with other agents to tackle complex problems, acting as orchestrators of a broader agentic ecosystem. (A minimal sketch of this pattern follows this list.)
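
As a rough illustration of what orchestration can mean in practice, here is a minimal, hypothetical sketch: a single orchestrator routes the steps of a plan to specialist agents and escalates to a human when no agent fits. The agents here are plain functions for brevity; in a real agentic ecosystem they would be independently discovered and authenticated services.

```python
# Hypothetical sketch of "collaboration orchestration": one orchestrator agent
# breaks a goal into steps and delegates each step to a specialist agent.
from typing import Callable

def research_agent(goal: str) -> str:
    return f"research notes on '{goal}'"

def critique_agent(goal: str) -> str:
    return f"critique of the draft plan for '{goal}'"

SPECIALISTS: dict[str, Callable[[str], str]] = {
    "research": research_agent,
    "critique": critique_agent,
}

def orchestrate(goal: str, plan: list[str]) -> list[str]:
    """Route each step of a plan to the specialist agent that can handle it."""
    results = []
    for step in plan:
        agent = SPECIALISTS.get(step)
        if agent is None:
            # No capable agent: hand the step back to a human instead of guessing.
            results.append(f"[escalated to a human: no agent can handle '{step}']")
        else:
            results.append(agent(goal))
    return results

print(orchestrate("reduce alert noise in the ops dashboard", ["research", "critique", "deploy"]))
```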


Addressing the Challenges: Gaps in Human-Agent Collaboration

All this autonomous help is great, sure – but it's not without its challenges.

Autonomous agents have fundamental gaps that we need to address to ensure successful collaboration.





What to Consider: Five Design Principles for Human-Agent Collaboration

  1. Put Humans in the Driver's Seat

Users should always have the final say, with clear boundaries and intuitive controls to adjust agent behavior. An example of this is Google Photos’ Memories feature, which allows users to customize their slideshows or turn the feature off completely. (A rough sketch of what such boundaries can look like follows this list.)

  2. Make the Invisible Visible

The AI’s reasoning and decision-making processes should be transparent and easy to understand, with confidence levels or uncertainty displayed to set realistic expectations. The North Face’s AI shopping assistant exemplifies this by guiding users through a conversational process and providing clear recommendations.

  3. Ensure Accountability

Anticipate edge cases to provide clear recovery steps, while empowering users to verify and adjust AI outcomes when needed. ServiceNow's Now Assist AI is designed to allow customer support staff to easily verify and adjust AI-generated insights and recommendations.

  4. Collaborate, Don't Just Automate

Prioritize workflows that integrate human and AI capabilities, designing intuitive handoffs to ensure smooth collaboration. Aisera HR Agents demonstrate this by assisting with employee inquiries while escalating complex issues to human HR professionals.

  5. Earn Trust Through Consistency

Build trust gradually with reliable results in low-risk use cases, making reasoning and actions transparent. ServiceNow's Case Summarization tool is an example of using AI in a low-risk scenario to gradually build user trust in the system's capabilities.
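
As a rough illustration of principles 1 and 3 (not a description of any of the products above), here is a hypothetical sketch of what “clear boundaries” can look like in practice: an explicit policy that defines what an agent may do on its own, what always goes to a human, and when it must stop and ask. All names and thresholds are invented for the example.

```python
from dataclasses import dataclass

# Hypothetical sketch of an "agent boundary" policy: names and thresholds are
# illustrative, not taken from any product mentioned above.

@dataclass
class AgentPolicy:
    allowed_actions: set[str]      # actions the agent may take on its own
    needs_approval: set[str]       # actions that always require a human decision
    max_changes_per_run: int = 10  # hard limit before the agent must stop and ask

    def decide(self, action: str, changes_so_far: int) -> str:
        """Return 'auto', 'ask_human', or 'deny' for a proposed action."""
        if action in self.needs_approval or changes_so_far >= self.max_changes_per_run:
            return "ask_human"
        if action in self.allowed_actions:
            return "auto"
        return "deny"

policy = AgentPolicy(
    allowed_actions={"summarize_ticket", "draft_reply"},
    needs_approval={"close_ticket", "issue_refund"},
)

print(policy.decide("draft_reply", changes_so_far=2))     # auto
print(policy.decide("issue_refund", changes_so_far=2))    # ask_human
print(policy.decide("delete_account", changes_so_far=2))  # deny
```

The specifics will differ by product; what matters is that the boundary is explicit, inspectable, and adjustable by the user rather than buried in the agent’s behavior.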


Designing Tomorrow's Human-Agent Collaboration At Outshift

These principles are the foundation for building effective partnerships between humans and AI at Outshift.

Empowering Users with Control

Establish clear boundaries for AI agents to ensure they operate within a well-defined scope.

Building Confidence Through Clarity

Surface the AI’s reasoning by displaying confidence levels, realistic expectations, and the extent of changes, so users can make informed decisions. (A rough sketch of what such an action record might contain follows these points.)

Always Try To Amplify Human Potential

Actively collaborate through simulations and arrive at an effective outcome together.

Let Users Stay In Control When It Matters

Provide easy access to detailed logs and performance metrics for every agent action, so users can review decisions and workflows and ensure compliance. Include clear recovery steps for seamless continuity.

Take It One Interaction At A Time

See agent actions in context and observe how agent performance contributes to network improvement.
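
To ground the “Building Confidence Through Clarity” and “Let Users Stay In Control” points, here is a hypothetical sketch of the kind of record an agent action could carry: a confidence level, a short rationale, the extent of changes, and whether the action is reversible. The field names and example values are illustrative assumptions, not Outshift’s implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of the record an agent action could carry. Field names are
# illustrative assumptions, not a description of any specific system in this post.

@dataclass
class AgentActionRecord:
    agent: str
    action: str
    confidence: float    # 0.0-1.0, surfaced to the user to set expectations
    rationale: str       # short, human-readable reasoning behind the action
    changes: list[str]   # what the agent actually touched (the extent of changes)
    reversible: bool     # whether a one-step rollback / recovery path exists
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

record = AgentActionRecord(
    agent="network-advisor",
    action="propose_config_change",
    confidence=0.72,
    rationale="Two access points report repeated client disconnects on channel 6.",
    changes=["Draft change: move AP-12 and AP-14 to channel 11 (not yet applied)."],
    reversible=True,
)

# The same record can be surfaced in the UI and written to an audit log for review.
print(f"[{record.timestamp:%Y-%m-%d %H:%M}] {record.agent} -> {record.action} "
      f"(confidence {record.confidence:.0%}, reversible={record.reversible})")
```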

As we refine our design principles and push the boundaries of innovation, integrating advanced AI capabilities comes with a critical responsibility. For AI to become a trusted collaborator—rather than just a tool—we must design with transparency, clear guardrails, and a focus on building trust. Ensuring AI agents operate with accountability and adaptability will be key to fostering effective human-agent collaboration. By designing with intention, we can shape a future where AI not only enhances workflows and decision-making but also empowers human potential in ways that are ethical, reliable, and transformative.

Because in the end, the success of AI won’t be measured by its autonomy alone—but by how well it works with us to create something greater than either humans or machines could achieve alone.
