Designing an AI service for a global enterprise with human-centred logic

Nicola Wilson | 5 min read

In 2024, I was the design lead for a conversational AI experience at a global enterprise. The brief? Strengthen, not replace, a core service with an AI agent.

At first, it felt like stepping into the unknown: emerging tech, shifting expectations, layers of automation. But in practice, the process was grounded in the fundamentals of product and service design: understand business and user pain points and opportunities, map possible solutions grounded in deep research, pack away complexity, design against a clear hierarchy of needs, test with real users, and refine.

This Q&A shares the thinking, frameworks, and patterns that helped shape the experience, to help anyone designing AI-led products with people at the centre.

How did you identify the right audience for an AI agent?

That was the first critical step. We weren’t trying to retrofit AI for the sake of innovation. The brief demanded a sharper lens: which audience segment had repeatable needs that could be reliably supported by an AI agent without undermining the value of existing high-value services?

We ran discovery workshops with internal stakeholders to map out different user types and service expectations. From this, it became clear that the opportunity wasn’t in the extremes.

At one end of the scale, you had high-value, high-complexity use cases already served by a lucrative consultative support model. At the other end, you had less commercially viable use cases requiring a level of guidance that risked pulling the beta product into more complex, cost-inefficient territory.

The sweet spot sat in the middle: a large audience with time pressures, limited funds, and smaller requirements that warranted a degree of support at a lower price point. That gave us our focus: design a beta product that empowered a core customer segment without trying to do everything for everyone.

How did you approach replicating a service traditionally delivered by consultants?

We started by listening. We analysed real conversations between service teams and customers, mapping the full end-to-end journey across multiple engagements. From these sessions, we surfaced a recurring, subtly structured rhythm in how our skilled professionals gathered information, built trust, and steered decisions. Patterns emerged in their communication models: starting with broad mapping, reframing questions to reveal underlying priorities, and repeating back key points to confirm mutual understanding.

These patterns formed the foundations of what became our conversation node framework, a structure for shaping intent, flow, branching logic, expected input, and assistant response. Each node was carefully designed to move the user forward while staying flexible enough to adapt to different inputs. I developed frameworks for each key journey, which the AI team then used as the basis for conversation design across the end-to-end product.
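As a rough sketch of what a node in such a framework might look like, the field names and example values below are my own illustration rather than the production schema:

```typescript
// Illustrative conversation node shape. All names are hypothetical.
type NodeIntent = "map_broadly" | "reframe_priorities" | "confirm_understanding";

interface ConversationNode {
  id: string;
  intent: NodeIntent;                 // what this step is trying to achieve
  prompt: string;                     // the assistant's opening line for the node
  expectedInput: "free_text" | "choice" | "confirmation";
  branches: Record<string, string>;   // classified user input -> next node id
  fallbackNodeId: string;             // where to go when input doesn't match
}

const confirmScope: ConversationNode = {
  id: "confirm-scope",
  intent: "confirm_understanding",
  prompt: "So far I've heard X, Y and Z. Have I got that right?",
  expectedInput: "confirmation",
  branches: { yes: "next-task", no: "reframe-scope" },
  fallbackNodeId: "clarify-scope",
};
```

Each node owns a single intent, which keeps the branching logic legible to both the designers and the AI team building on top of it.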

How did you handle interface design when the main interface was a conversation?

Instead of guiding the user through visual cues like layout, spacing, and interaction states, you're designing an experience where language is the primary interface. The conversation becomes the vehicle for action, context, and decision-making. In this environment, visuals shift into a secondary role. That means relying on structured prompts and predictable patterns to create a sense of flow and progression, without overwhelming or derailing the user.

To guide users through tasks without confusion or overload, we developed a Next Best Action framework. This surfaced the most relevant next step at any given time, based on what the AI had already gathered. It helped users stay oriented and maintain momentum, like a thoughtful assistant who remembers where you left off, even after a break in the conversation.
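In code, the selection logic can be as simple as filtering candidate actions against what the session has already captured. A minimal sketch, assuming a rule-based approach (all names here are invented):

```typescript
// Illustrative Next Best Action selector. All names are hypothetical.
interface SessionState {
  completedSteps: Set<string>;        // actions the user has finished
  collected: Record<string, unknown>; // facts the assistant has gathered so far
}

interface CandidateAction {
  id: string;
  label: string;      // what the suggestion card shows the user
  requires: string[]; // steps that must be completed first
  fills: string;      // the information gap this action addresses
}

function nextBestAction(
  state: SessionState,
  actions: CandidateAction[],
): CandidateAction | undefined {
  return actions
    .filter(a => !state.completedSteps.has(a.id))                    // not already done
    .filter(a => a.requires.every(r => state.completedSteps.has(r))) // prerequisites met
    .find(a => state.collected[a.fills] === undefined);              // targets a real gap
}
```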

It was equally important to define clear delivery boundaries for the AI agent. Each conversational flow was built around a single task, with structured start and end points. This helped prevent the agent from attempting to juggle multiple objectives.

Although the experience was delivered through a single interface, the product architecture relied on multiple specialised agents, each responsible for a distinct task.
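A hedged sketch of that architecture: a thin router presents one interface to the user while delegating each task to whichever specialised agent claims it. The `Agent` interface and routing rule are assumptions for illustration, not the actual system:

```typescript
// Illustrative single-interface routing over specialised agents.
interface Agent {
  canHandle(task: string): boolean;
  run(input: string): Promise<string>;
}

class AgentRouter {
  constructor(private readonly agents: Agent[]) {}

  // One entry point for the whole experience; each task is delegated
  // to the first specialised agent that claims it.
  async handle(task: string, input: string): Promise<string> {
    const agent = this.agents.find(a => a.canHandle(task));
    if (!agent) throw new Error(`No agent registered for task: ${task}`);
    return agent.run(input);
  }
}
```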

The experience matched people on both sides of a system. How did you approach that?

The product experience was, at its core, about connecting two audiences, with AI acting as the interpreter, matchmaker, and guide. That meant we had to define clear matching criteria behind the scenes and communicate those decisions in a way that felt transparent and useful.

We mapped and replicated a simple prioritisation method used by our experts to identify what mattered to each user group, distinguishing between essential requirements, strong preferences, and nice-to-haves. We took that logic and worked with the AI team to explore how it could inform a weighted system, one that could score attributes across both sides of the match. This allowed the model to suggest matches with a clear rationale behind them.
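As an illustration of that weighting logic, essentials can act as hard gates while preferences and nice-to-haves add weighted points. The tiers and weights below are invented for the sketch, not the production values:

```typescript
// Illustrative weighted matching. Tiers and weights are invented values.
type Tier = "essential" | "preference" | "nice_to_have";

interface Criterion {
  attribute: string;
  tier: Tier;
  matches: (candidate: Record<string, unknown>) => boolean;
}

const WEIGHTS: Record<Tier, number> = { essential: 0, preference: 3, nice_to_have: 1 };

// Returns null when an essential requirement fails, otherwise a score
// built from the weighted criteria the candidate satisfies.
function scoreMatch(
  criteria: Criterion[],
  candidate: Record<string, unknown>,
): number | null {
  if (criteria.some(c => c.tier === "essential" && !c.matches(candidate))) return null;
  return criteria
    .filter(c => c.tier !== "essential" && c.matches(candidate))
    .reduce((score, c) => score + WEIGHTS[c.tier], 0);
}
```

Because every score traces back to named criteria, the model can present not just a match but the rationale behind it.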

And how did you deal with complexity, like conversation history or archiving?

AI needs memory. But memory also needs structure. We designed an elegant conversation archiving system that summarised key past inputs at the start of each new interaction. This gave users just enough context to re-engage without needing to re-read long transcripts or re-answer earlier questions.

It made the experience feel continuous and coherent, even across multiple sessions, while ensuring the system remained lightweight and focused.
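A minimal sketch of that re-entry pattern, assuming a hypothetical ArchivedSession record that stores only the distilled key inputs:

```typescript
// Illustrative session re-entry from an archived summary.
// The ArchivedSession shape is an assumption.
interface ArchivedSession {
  sessionId: string;
  keyInputs: string[]; // distilled points worth carrying forward
  endedAt: Date;
}

// Builds a short re-entry message instead of replaying the transcript.
function buildReentryContext(archive: ArchivedSession): string {
  const bullets = archive.keyInputs.map(point => `- ${point}`).join("\n");
  return `Last time we covered:\n${bullets}\nShall we pick up from there?`;
}
```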

Designing the voice-text switcher: why was that such a key part of the experience?

The initial brief from the Head of AI was to design a purely conversational interface. But as we tested early prototypes and looked at how the major players were handling this space, we decided to create a more balanced experience that gave users the autonomy to switch between voice and text.

Designing for both meant thinking carefully about a seemingly small component: the voice-text switcher. We had to consider its states and transitions, how it responded when switching modes, how it signalled listening or processing, and how it behaved when a user paused or returned. It was a small feature with big implications for overall usability.
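Modelled as a small state machine, the behaviours described above might look like this. The states and events are my assumptions, not the shipped component:

```typescript
// Illustrative voice-text switcher state machine. States and events
// are assumptions based on the behaviours described above.
type SwitcherState = "text" | "voice_idle" | "listening" | "processing" | "paused";
type SwitcherEvent =
  | "toggle_mode" | "start_speech" | "end_speech"
  | "pause" | "resume" | "response_ready";

const transitions: Record<SwitcherState, Partial<Record<SwitcherEvent, SwitcherState>>> = {
  text:       { toggle_mode: "voice_idle" },
  voice_idle: { toggle_mode: "text", start_speech: "listening", pause: "paused" },
  listening:  { end_speech: "processing", toggle_mode: "text" },
  processing: { response_ready: "voice_idle" },
  paused:     { resume: "voice_idle", toggle_mode: "text" },
};

// Events that don't apply in the current state are ignored.
function next(state: SwitcherState, event: SwitcherEvent): SwitcherState {
  return transitions[state][event] ?? state;
}
```

Making the full transition table explicit is what surfaces the edge cases, such as what happens when a user toggles to text mid-listen, before they appear in testing.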

What about joy? Conversational AI can feel quite dry. How did you keep it engaging?

We wanted the experience to feel encouraging, not transactional. We wove brand elements into key interaction points: the next best action cards, the voice-text switcher, and celebratory micro-moments that marked progress. The tone of voice was embedded into the conversation framework, shaping each prompt to feel clear, human, and supportive. I've written about developing brand in a conversational interface here.

Final reflections: what advice would you give to designers working on AI products?

AI added a layer of ambiguity at the start that felt overwhelming. But the same design principles still applied: know your users, map your flows, create a clear hierarchy of needs, and test your assumptions. Don’t let the hype distract you from the fundamentals. Conversation design is still design, and it relies on many of the same frameworks and learnings we’ve developed through building traditional interfaces.