Designing a responsive brand for a conversation-led AI product
Nicola Wilson | 5 min read
In 2024, I led the brand strategy and identity design for an AI product. If defining a brand means shaping how a company speaks, behaves, and looks, then the rise of AI agents has added a new, dynamic layer to that equation.
In traditional branding, when done well, the brand influences more than just visuals. It’s a form of business design: it can inform what a company sells, how it answers the phone, recruits talent, or handles complaints. Yet in practice, we tend to experience brands most consciously through fixed visual assets: a name, a logo, a colour palette, a website.
AI changes that. It pushes behavioural elements to the forefront. The brand no longer sits on the surface; it plays out in real time, through dialogue and interaction. With generative AI, there’s a kind of semi-autonomous authorship at play: responses adapt one-to-one, in ways that can’t be entirely scripted. This marks a shift from brand as static expression to brand as active participant.
This Q&A captures some of the thinking and design decisions behind building a brand for an AI product.
How do you make representation in AI feel inclusive, not exclusive?
The original pitch featured the company’s founder as the voice and face of the product. Internally, it resonated, offering something familiar and credible to rally behind. But we quickly saw the risk in making one human the embodiment of the system.
There were immediate concerns around bias and representation. A single visual identity, especially one tied to an older white male, risked signalling a narrow audience or reinforcing legacy power structures. It risked excluding users who didn’t see themselves in that figure, and anchoring a future-facing product in a singular, historical identity.
These conversations were not taken lightly. They formed part of a broader ethical review involving company stakeholders, academic partners from Oxford University, and our design team. Together, we considered not just design implications, but broader social consequences.
Biases in AI models and outputs are a well-documented concern, especially when it comes to visual representation. From the overrepresentation of white male professionals to the reinforcement of gender stereotypes in virtual assistants, there is growing awareness that these design choices can undermine diversity, equity, and inclusion efforts.
Representation in AI branding isn’t just a creative decision, it’s a systemic one. It has long-term impact on who feels seen, supported, and included.
How do you choose the face of your AI while managing reputational risk?
There was also the issue of accountability. Representing the AI through a single, recognisable person blurred the line between author and agent. If the AI gave a bad recommendation or behaved unexpectedly, who would users blame?
In branding for AI, you’re not just designing for best-case scenarios, you’re designing for failure modes too. That meant we needed a presence that could carry the brand without overstating human authorship or personal responsibility. Something users could engage with and trust, without mistaking it for a human promise.
Can an AI be approachable without being human?
Once we stepped away from using a realistic human figure, we explored two distinct directions.
The first was anthropomorphic: giving the AI a character-led presence. We studied examples like Duolingo and Headspace, where mascots have helped build trust, drive emotional connection, and deliver a professional experience to adult audiences.
The second route was abstract. Inspired by systems like Siri and Google Assistant, we explored how motion, form, and subtle animation could express presence without implying identity, allowing us to create an inclusive, responsive brand without referencing race, age, or gender.
We ultimately chose the abstract path. The 3D visualisation of the company’s brand mark became the embodiment of the AI itself. It reinforced brand recognition and conveyed adaptability and readiness, while avoiding any risk of attributing the conversation to a specific person.
Just because you can use AI to create realistic depictions of people, should you?
We kept the founder’s story, integrating his full identity into the ‘About Us’ section, where we described his role in shaping the company’s values and legacy.
Recent advances in AI-powered image generation and video synthesis mean it is now possible to create near-photographic avatars of real people with astonishing accuracy. These tools open new creative possibilities, but they come with a catch.
Technically, we could have created hyper-realistic portraits at different life stages, rendered in rich detail. But the more lifelike the imagery became, the more we encountered what is known as the uncanny valley: a psychological effect where something looks almost human, but not quite, and triggers discomfort instead.
User testing confirmed it. Instead of building trust, the hyper-real images introduced doubt. So we made a deliberate choice. A stylised depiction gave us a warm and respectful way to represent the founder, one that acknowledged his role without tying him to the interface. It honoured the past while keeping the AI’s persona open, inclusive, and future-facing.
How do you design a brand that speaks?
When conversation is the interface, the brand shows up in rhythm, tone, and logic. That meant defining how the system would speak alongside how it looked.
We began by analysing real conversations between client and customer. For all their warmth and empathy, these were structured, emotionally aware exchanges that revealed patterns in how information was gathered, how users were guided, and how trust was built.
From these insights, we developed a conversation framework: each interaction was mapped to a specific node, with defined goals, branching logic, and tonal cues. These nodes formed the foundation of the user journeys within the service, ensuring the assistant could listen, respond, and move the user forward without losing clarity or empathy.
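As a rough illustration of what such a node could look like in code, here is a minimal TypeScript sketch; the type names, fields, and example flow are hypothetical, not the actual framework we built.

```typescript
// Illustrative sketch only: type names and fields are assumptions,
// not the production conversation framework.

type ToneCue = "warm" | "reassuring" | "direct" | "celebratory";

interface ConversationNode {
  id: string;                      // unique node identifier
  goal: string;                    // what this exchange should achieve
  tone: ToneCue;                   // tonal cue guiding the copy at this step
  prompts: string[];               // example phrasings the assistant can use
  branches: Array<{
    condition: string;             // e.g. "user states a goal", "user is unsure"
    next: string;                  // id of the next node in the journey
  }>;
}

// Hypothetical example: an onboarding step that gathers the user's goal.
const askGoal: ConversationNode = {
  id: "onboarding.ask-goal",
  goal: "Understand what the user wants to achieve",
  tone: "warm",
  prompts: ["What would you like to work towards first?"],
  branches: [
    { condition: "user states a goal", next: "onboarding.confirm-goal" },
    { condition: "user is unsure", next: "onboarding.suggest-options" },
  ],
};
```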
We then worked with a specialist copywriter to bring example flows to life in natural language, drawing from legacy tone guidelines. This work was fed back into the framework, creating a system that could scale across interactions while maintaining a consistent voice.
Where do you find visual brand in a minimal interface?
In a conversation-led interface, visual branding is stripped back to only what’s essential. In order to maximise visual communication in small components, we developed a flexible illustration suite that could express support, celebration, or guidance depending on context. These visuals were warm and expressive, designed to add humanity without overwhelming the interface. Whether setting expectations at sign-up or acknowledging progress mid-journey, they helped reinforce a sense of presence and care.
Landing, registration, and the About section gave us space and a more traditional interface to introduce brand elements. Inside the product, ‘Next Best Action’ cards helped orient users as they entered the platform. These components provided structured guidance, while offering a consistent surface to introduce brand. We also introduced celebratory interactions at key moments, using subtle animation and visual feedback to create rhythm and acknowledge progress. These interactions were carefully paced to maintain momentum and avoid unnecessary noise.
The voice–text switcher and avatar featured an animated version of the brand mark. As the assistant transitioned between states, the mark shifted accordingly – pulsing while listening, glowing when active, and softening when idle. These responsive cues made the assistant feel attentive without relying on human likeness.
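To make the idea concrete, a simple state-to-motion mapping might look like the TypeScript sketch below; the state names and animation values are illustrative assumptions rather than the shipped implementation.

```typescript
// Hypothetical mapping from assistant state to brand-mark motion.
// Values are placeholders, not the real animation parameters.

type AssistantState = "listening" | "active" | "idle";

interface MarkAnimation {
  effect: "pulse" | "glow" | "soften";
  durationMs: number;   // length of one animation cycle
  intensity: number;    // 0–1, how pronounced the motion is
}

const markAnimations: Record<AssistantState, MarkAnimation> = {
  listening: { effect: "pulse", durationMs: 1200, intensity: 0.6 },
  active: { effect: "glow", durationMs: 800, intensity: 0.9 },
  idle: { effect: "soften", durationMs: 2000, intensity: 0.3 },
};

// When the assistant changes state, the avatar picks up the matching cue.
function animationFor(state: AssistantState): MarkAnimation {
  return markAnimations[state];
}
```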
Each of these elements combined function with expression, allowing the brand to show up through behaviour rather than surface styling.
Should you evolve a legacy brand for a future-facing AI product?
This wasn’t about inventing a brand from scratch. It was about evolving an established identity to meet the needs of a more dynamic, dialogue-led product.
We explored a range of directions in the early design phase. Some stayed close to the legacy system, rooted in flat vector graphics and bold block colours. But once we established that the assistant’s presence would be expressed through the brand mark, we began developing a 3D visual language. This opened up opportunities for fluid animation, depth, and responsiveness – qualities better suited to a product that operates in real time.
The AI product also needed to sit within a broader brand ecosystem. It drew on both the master brand and a prominent digital service line, each with distinct palettes and tones. We selected two core colours that bridged the gap, one aligned with the master’s professionalism, the other with the warmth and energy of the service brand. The introduction of gradients and lighting effects added a sense of motion and life.
The result was a visual language that respected the organisation’s legacy and broader brand architecture, while introducing a distinctive, expressive layer, designed specifically for a product that listens, adapts, and responds.
How important was creating a design system for Beta?
Designing for AI meant accepting that the boundaries were constantly moving. New capabilities, new models, and new ways of interacting were emerging in real time. We needed to deliver structure without rigidity.
To support a fast-moving beta, we established a clear, scalable design system that could anchor the product while allowing for change. It gave developers the tools to build quickly: modular components, reusable patterns, and flexible brand assets.
The design system acted as a foundation, but not a fixed one. It was built with change in mind, ready to adapt as the product and its possibilities evolved.
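As one way to picture that flexibility, here is a small, hypothetical token-and-component sketch in TypeScript; the token names, colour values, and the card shape are placeholders, not the real system.

```typescript
// Hypothetical brand tokens: a single source of truth that components
// consume, so visual changes ripple through the product without rework.

const tokens = {
  colour: {
    primary: "#1F3A5F",   // placeholder for the master-brand colour
    accent: "#FF7A59",    // placeholder for the warmer service colour
  },
  radius: { card: 12, pill: 999 },
  motion: { fast: 150, gentle: 400 },   // durations in ms
} as const;

// A reusable component reads from tokens rather than hard-coding values.
interface NextBestActionCard {
  title: string;
  body: string;
  background: string;
  cornerRadius: number;
}

function makeCard(title: string, body: string): NextBestActionCard {
  return {
    title,
    body,
    background: tokens.colour.primary,
    cornerRadius: tokens.radius.card,
  };
}
```

Because components draw from one token source, the brand can keep evolving during beta without every screen being reworked by hand.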
Reflections and takeaways
Representation carries weight
How you choose to personify your AI, if at all, has real implications. It influences how users relate to the system, who they think it’s for, and what kind of authority or empathy it appears to have.
Reputational risk is real
Tying the system to a real person can blur the lines between author and agent. When the AI makes a mistake, who is held responsible? Designing for AI means designing for edge cases and failures, not just ideal journeys.
Anthropomorphism can take many forms
Whether it’s a friendly character or an abstracted mark with lifelike behaviour, anthropomorphism can offer alternatives to literal human likeness.
Movement should be meaningful
Rather than relying on human features, specific movements and micro-interactions can make the AI feel engaged without it needing to look like a person.
Evaluate new tools with care
AI design moves fast, and the tools available are powerful. But new capability shouldn’t automatically drive creative direction. Always come back to the user's needs and the desired product outcome.
Behaviour defines brand
Observing real conversations surfaced the behavioural insights that mattered most. Use them to define tone, rhythm, and interaction, then build the brand around them.
Small spaces require clear signals
In conversation-led UIs, there’s limited space. Animation and tone need to work harder, reinforcing intent, building trust, and keeping users oriented without visual clutter.
Understand your brand ecosystem
When an AI product sits alongside existing services and brands, evolve shared elements to ensure consistency, while introducing new forms of expression suited to real-time interaction.
Make your design system work for speed
Beta phases require momentum. A modular, lightweight system enables developers to build fast while staying on-brand. Think of flexible components, clear rules, and reusable assets.
Expect everything to shift
AI tools, user expectations, and interaction models evolve constantly. What feels impossible today might be possible tomorrow.