Editor's Note: This article was originally published in April 2024 and has been updated with new insights from Design Director Ryan Davis, stemming from WillowTree's 2 Weeks To Better docuseries on AI in Healthcare, Ep204: “UX Design: Supporting Patients & Providers.”
Artificial intelligence offers immense potential to support digital healthcare experiences for both patients and providers. However, a key challenge remains: building trustworthy AI features that ensure all stakeholders find value in AI and machine learning while feeling comfortable using this technology to support crucial health-related decision-making.
“In healthcare delivery, we think about the clinical care team experience as it connects to the patient experience,” explains WillowTree Design Director Ryan Davis. “We realize the flow of information often starts with the physician or nurse practitioner or case worker, then the question is: how does that translate to the patient?”
We had the opportunity to design a conversational AI assistant (note that AI-enabled assistants differ from legacy “chatbot” technology) for one of our healthcare clients, the project behind the original April 2024 version of this article. More recently, we pulled back the curtain on a similar AI assistant design process in our 2 Weeks To Better (2WTB) docuseries on healthcare delivery, in the episode “UX Design: Supporting Patients & Providers.”
These projects highlighted the obstacles to building trust with AI experiences in healthcare, particularly the need to balance transparency, reliability, and security while navigating complex regulatory frameworks.
First, the sheer novelty of AI can breed skepticism, making it challenging to build trust.
Second, building trustworthy AI requires juggling transparency, reliability, and security. In heavily regulated fields like healthcare, AI systems and their underlying algorithms often rely on complex decision-making processes to keep sensitive datasets secure and compliant, which can throw off the delicate balance among these trust-building criteria. This complexity can make it harder for users to understand the "why" behind the AI's information and recommendations, further eroding trust.
Concerns around AI bias and hallucination mitigation also remain. Understandably, users and other healthcare stakeholders adhering to the edict of "first, do no harm" might hesitate to rely on artificial intelligence or embrace its potential if they fear inaccuracy or bias.
Finally, users tend to distrust healthcare systems by default, regardless of AI integration, often fearing that any engagement with their healthcare coverage will affect their costs or premiums.
Below, we’ll share our top five design techniques for building trustworthy AI experiences, drawn from our work with this healthcare client.
NOTE: To ensure privacy and confidentiality, we’ve replaced the client's name and branding with "WT Wellness" and are referring to their conversational AI assistant as "Willow."
We strategically leveraged design tactics to combat the biggest challenges of building trust in our healthcare client's conversational AI assistant. Clear and concise messaging, interface elements that offer a glimpse into the AI's workings, and a streamlined experience that acknowledges the system's limitations can all overcome trust hurdles and result in a successful, well-received AI experience.
We've identified the following five key design techniques for building trustworthy AI experiences in healthcare, along with insights on the importance of prototyping and user testing.
In the first moments of a new user adopting an AI experience, clearly explaining its benefits and security measures is crucial for patients, caregivers, and healthcare providers across the clinical care team. If the intentions, guidelines, and expectations of AI are unclear, it can create barriers to usage, especially for those skeptical of AI in healthcare settings (though our 2025 research in emergency healthcare delivery shows that more users are now comfortable with, and even expect, AI deployments in healthcare).
For patients, we opted to check for consent before offering the AI experience. We included a consent capture screen before their first use of the AI assistant, anticipating user questions and communicating the AI feature's value and security measures.
For healthcare providers, we recognized the importance of allowing them to consent to integrating AI into their workflows. This approach ensures that care team members are comfortable with how AI is being used to support their practice and patient care.
It may also be worthwhile to add messaging or an automated response that explicitly states whether using the AI assistant will trigger any action from the provider.
By capturing consent from both patients and providers, we build trust around AI from the first interaction and provide the context needed to confidently engage with the conversational AI experience.
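To make the pattern concrete, here's a minimal sketch in TypeScript of how a consent gate might work, assuming the assistant UI is only rendered after an explicit, versioned opt-in is recorded. The `ConsentRecord` type, the storage helpers, and the policy-version check are illustrative names, not our client's actual implementation:

```typescript
// Hypothetical sketch: gate the "Willow" assistant behind an explicit opt-in.
// ConsentRecord, loadConsent, saveConsent, and the policy version are
// illustrative names, not a real implementation.

type ConsentRecord = {
  userId: string;
  role: "patient" | "provider";
  aiAssistantConsent: boolean; // explicit opt-in to the AI experience
  consentedAt: string;         // ISO timestamp, kept for auditability
  policyVersion: string;       // which disclosure text the user agreed to
};

const CURRENT_POLICY_VERSION = "2025-01";

// Stub persistence for the sketch; a real app would call a consent service.
const consentStore = new Map<string, ConsentRecord>();

async function loadConsent(userId: string): Promise<ConsentRecord | undefined> {
  return consentStore.get(userId);
}

async function saveConsent(record: ConsentRecord): Promise<void> {
  consentStore.set(record.userId, record);
}

// Only render the assistant once consent exists for the current disclosure text.
async function canShowAssistant(userId: string): Promise<boolean> {
  const record = await loadConsent(userId);
  if (!record) return false;
  // Re-prompt if the disclosure text has changed since the user consented.
  return record.aiAssistantConsent && record.policyVersion === CURRENT_POLICY_VERSION;
}

async function recordConsent(userId: string, role: ConsentRecord["role"]): Promise<void> {
  await saveConsent({
    userId,
    role,
    aiAssistantConsent: true,
    consentedAt: new Date().toISOString(),
    policyVersion: CURRENT_POLICY_VERSION,
  });
}
```

Versioning the consent record against the disclosure text means users are re-prompted whenever the privacy language changes, which keeps the consent meaningful rather than a one-time checkbox.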
As part of good conversational AI assistant UX/UI design, flagging that AI is part of the experience from the start is essential to set expectations for both patients and healthcare providers. When users engage with a healthcare platform, they might automatically expect to interface with a human agent. Clear messaging and consistent AI iconography throughout the website or app can signal and reinforce the use of AI.
We included clear messaging, using "Powered by AI" in the eyebrow to drive home the fact that users are embarking on an AI experience.
In our recent 2WTB project, design reviewers discussed with Ryan Davis the value of including clear privacy disclosures on how the AI system is seamlessly sharing information with the patient’s clinical care team, ensuring there are no surprises or breaches of privacy. This transparency helps patients understand the flow of information and reassures providers that they're always in the loop.
By broadcasting the use of AI through verbal and visual design elements, we minimize unnecessary barriers to usage and encourage all users to feel supported and comfortable interacting with the AI feature.
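As a rough illustration of this "broadcast the AI" idea, a single shared helper can ensure the "Powered by AI" eyebrow and privacy note appear consistently wherever the assistant surfaces. The markup and function below are a hypothetical sketch, not production code from the project:

```typescript
// Hypothetical sketch: one shared helper so every AI surface carries the same
// "Powered by AI" eyebrow and privacy note. Names and markup are illustrative.

function aiDisclosureHTML(privacyNote?: string): string {
  const note = privacyNote
    ? `<p class="ai-privacy-note">${privacyNote}</p>`
    : "";
  return (
    `<div role="note" aria-label="AI disclosure" class="ai-eyebrow">` +
    `Powered by AI${note}</div>`
  );
}

// Usage: state up front that AI is in play and how information flows
// to the clinical care team, so there are no surprises.
const eyebrow = aiDisclosureHTML(
  "Conversations may be shared with your care team. See our privacy policy."
);
```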
Managing expectations about AI capabilities is crucial for both patients and healthcare providers. We included messaging at the start of the experience to reinforce the core use cases supported for initial release, such as finding a doctor and managing specific health needs.
Again, clear messaging sets expectations for the kinds of support the product can successfully provide, building user trust as the system delivers on its identified use cases.
A key insight from our recent work is the importance of doctor-vetted AI processes. We design systems to balance AI efficiency with human oversight. For patients, this means explaining that while AI is providing information and suggestions, these are always reviewed and approved by healthcare professionals before being released. For providers, we emphasize how AI supports their workflow without replacing their expertise.
Ryan Davis emphasizes this balance in our recent 2WTB episode:
"Here, we're imagining this chat function where someone can ask this chat in a conversational way, ‘Hey, I'm experiencing some pain here. What does this mean? Should I go back to the hospital?' And it will pull from the resources that the doctors already trusted and vetted, as well as from the conversations that they've had in the ER," in terms of whether that pain was expected versus abnormal and deserving of serious concern/action.
“That's where this really starts to take shape and become interesting, because it's not just a de facto AI interface,” noted one of our senior designers. “It's going to learn based on your actual medical history, your ER visits, and your doctor's input.”
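A minimal sketch of that "pull from doctor-vetted resources" idea: retrieval is restricted to a clinician-approved subset of the content corpus before the assistant grounds any answer. The types, the vetting fields, and the naive keyword match (standing in for a real semantic search) are all illustrative assumptions:

```typescript
// Hypothetical sketch: ground answers only in doctor-vetted resources.
// Types, fields, and the keyword match below are illustrative placeholders.

type Resource = {
  id: string;
  title: string;
  url: string;
  vettedBy?: string; // reviewing clinician, if approved
  vettedAt?: string; // ISO timestamp of the review
};

// Only resources a clinician has explicitly approved are eligible.
function vettedOnly(corpus: Resource[]): Resource[] {
  return corpus.filter((r) => r.vettedBy !== undefined && r.vettedAt !== undefined);
}

// The assistant's retrieval step searches the vetted subset, never the raw corpus.
// A naive keyword match stands in for a real semantic search here.
function retrieveGrounding(query: string, corpus: Resource[]): Resource[] {
  const terms = query.toLowerCase().split(/\s+/);
  return vettedOnly(corpus).filter((r) =>
    terms.some((t) => r.title.toLowerCase().includes(t))
  );
}
```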
When requests fall outside the AI tool's capabilities or compliance boundaries, we've designed a seamless handoff to human customer service. This approach signals to users that the system knows its limits, fostering a sense of reliability in the experience.
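One way to sketch that handoff, assuming an intent classifier and a compliance check sit in front of the assistant (both shown here as simple placeholders rather than a real NLU model or policy engine):

```typescript
// Hypothetical sketch: route out-of-scope requests to a human agent.
// SUPPORTED_INTENTS and the classifier are illustrative placeholders.

const SUPPORTED_INTENTS = new Set(["find_doctor", "manage_health_need"]);

type Route =
  | { kind: "ai"; intent: string }
  | { kind: "human"; reason: "out_of_scope" | "compliance" };

function routeRequest(message: string): Route {
  const intent = classifyIntent(message); // e.g., an NLU model or LLM classifier
  if (touchesComplianceBoundary(message)) {
    return { kind: "human", reason: "compliance" };
  }
  if (!SUPPORTED_INTENTS.has(intent)) {
    return { kind: "human", reason: "out_of_scope" };
  }
  return { kind: "ai", intent };
}

// Placeholder implementations so the sketch type-checks; a real system would
// use a trained classifier and a policy engine here.
function classifyIntent(message: string): string {
  return /doctor|physician/i.test(message) ? "find_doctor" : "unknown";
}
function touchesComplianceBoundary(message: string): boolean {
  return /diagnos|prescri/i.test(message); // e.g., requests for diagnoses or prescriptions
}
```

Checking the compliance boundary before the supported-intent check matters: a request can match a supported intent and still need a human because of what it asks for.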
In a successful AI experience, showing how the system addresses user needs is essential for building confidence. We applied UI elements that note observations and tasks adjacent to relevant chat messages. These indicators allow both patients and providers to see how the AI is interpreting conversational information and tracking essential details.
For patients, these indicators might show how the AI is compiling their symptoms or health concerns; for providers, how the AI is organizing patient information or suggesting relevant medical literature.
These visual cues help all users understand what the AI is doing, building trust and confidence throughout the interaction.
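A hypothetical data shape for those indicators: each chat message carries the AI's observations and tasks alongside it, so the UI can render them adjacent to the relevant bubble. The type names and example values are illustrative:

```typescript
// Hypothetical sketch: attach the AI's observations and tasks to the
// chat message they relate to, so the UI can render them beside it.

type Annotation =
  | { type: "observation"; text: string }                // what the AI noted
  | { type: "task"; text: string; done: boolean };       // what the AI is doing

type ChatMessage = {
  id: string;
  sender: "user" | "assistant";
  text: string;
  annotations: Annotation[]; // rendered adjacent to the message bubble
};

// Example: the assistant surfaces what it captured from the conversation,
// so the user can verify the AI's interpretation at a glance.
const message: ChatMessage = {
  id: "msg-42",
  sender: "assistant",
  text: "I can help you find a specialist for your knee pain.",
  annotations: [
    { type: "observation", text: "Symptom noted: knee pain, 3 days" },
    { type: "task", text: "Search for in-network orthopedists", done: false },
  ],
};
```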
In healthcare, where the stakes are high, backing up AI-provided information with credible sources is crucial. We've enhanced this technique by indicating that the AI is referencing doctor-vetted resources, adding an extra layer of authority to the information provided.
Our design shows sources for AI-generated responses, with attributions appearing as snippets of text within the chat. We made sure to include clickable links to source content, allowing users to verify information independently.
For healthcare providers, this feature offers quick access to relevant medical literature or guidelines. For patients, it provides reassurance that the information they're receiving is grounded in reliable, professional medical knowledge, not a random Reddit user's unfounded opinions.
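One possible shape for this, sketched in TypeScript: each assistant response carries its vetted sources, and the UI renders only clinician-approved references as clickable citations. The schema and renderer below are illustrative assumptions, not the production design:

```typescript
// Hypothetical sketch: AI responses carry their vetted sources, rendered
// as clickable citations. Schema and renderer are illustrative.

type Source = {
  title: string;
  url: string;
  snippet: string; // short excerpt shown inline in the chat
  vetted: boolean; // true if a clinician approved this resource
};

type AssistantResponse = {
  text: string;
  sources: Source[];
};

// Render citations as plain HTML links; a component framework would work the same way.
function renderCitations(response: AssistantResponse): string {
  return response.sources
    .filter((s) => s.vetted) // surface only clinician-approved references
    .map(
      (s, i) =>
        `<a href="${s.url}" target="_blank" rel="noopener">` +
        `[${i + 1}] ${s.title}</a>: "${s.snippet}"`
    )
    .join("<br/>");
}
```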
Designing AI experiences for healthcare requires a deep understanding of both patient and provider needs, which can only be truly gauged through rigorous prototyping and user testing. As Ryan Davis explained, "Working in very low fidelity, testing, seeing the results of that, iterating on that, allows us to come up with a more refined product much quicker. We love to perform user testing even at that low stage of fidelity because, even at that point, we can still get an idea of how the work is performing."
Our design process involves creating interactive prototypes that simulate the AI experience for both patients and healthcare providers. We then conduct extensive user testing sessions, gathering feedback on everything from the AI's conversational style to the clarity of information presentation.
Our recent 2WTB exercise demonstrated the value of this approach. According to Partner, VP & Health & Wellness Lead Sydnor Gammon, "We were able to get real feedback from industry experts to think about what would this actually look like in a care setting, what would we need to tweak, what's working well that we can build on, and what would we need to change?"
This iterative approach yields immediate insights that shape the final product. For instance, during 2WTB testing sessions, users provided direct feedback about interface elements like rating systems and emoji usage, allowing us to refine those features early in the process. "I tried to be very thoughtful about pulling in others for critique because that was going to be key to producing a better result even in a compressed timeline," Ryan explains.
By investing time in prototyping and user testing, we can create AI experiences that not only meet regulatory requirements but also genuinely enhance the healthcare journey for both patients and providers.
Despite the challenges of building trustworthy AI experiences in heavily regulated industries like healthcare, product owners can leverage design best practices to smooth out the rough edges created by balancing compliance and transparency in AI systems.
In summary, our experience building solutions for generative AI in healthcare and other heavily regulated industries highlights five key design techniques:

1. Capture consent and explain the AI's benefits and security measures upfront.
2. Flag the use of AI clearly through messaging and consistent iconography.
3. Set expectations for the AI's capabilities and hand off gracefully to humans.
4. Show how the AI is interpreting and acting on user needs with visible indicators.
5. Back up AI-provided information with credible, doctor-vetted sources.
By implementing these techniques, WillowTree designers have created AI experiences that build trust even in the most heavily regulated industries.
Watch our full 2 Weeks to Better series on AI in Healthcare, and reach out to learn how our AI Strategy & Governance leaders help clients deploy artificial intelligence ethically and safely.