Case Study

Secure Conversational AI Assistants for Financial Services

Services Engaged

Digital Strategy
Data & AI
Design

A Safe and Compliant AI Assistant in a Highly Regulated Industry

Financial services firms, like companies in other highly regulated industries, provide specialized products and services that demand accuracy, reliability, and regulatory compliance. Yet customer service expectations keep rising, and the potential productivity benefits of generative AI in financial services are too promising to ignore.

Here’s the problem: legacy finance and banking chatbots based on rigid intent mapping (think: flow charts) struggle to have natural conversations spanning multiple topics.
Meanwhile, free-form conversational AI brings risks of inappropriate, biased, or factually incorrect responses.

To balance the potential of AI capabilities with strict governance demands, a leading North American financial services firm came to WillowTree to create a next-gen chatbot experience in just eight weeks. Our successful delivery paved the way for our GenAI Jumpstart accelerator — a safe, secure, modular architecture we can adapt to any industry.

The Vision

A Next-Gen Financial Chatbot

Our client, a leading innovator in financial services, wanted to leverage generative and conversational AI to create a sophisticated banking chatbot for their users. Balancing open-ended conversational abilities with strict security was paramount. They needed a solution that could:

  • Provide natural language responses to common customer questions
  • Direct users to relevant bank-specific information across their product lines
  • Maintain their brand voice and style
  • Stay within regulatory guardrails (e.g., not providing specific investment advice)

The Challenge

The Problem with Legacy Finance Chatbots

Traditional bank AI chatbots, the kind that tend to annoy users, rely on intent mapping: each message is matched against a limited set of around 250 predefined user “intents,” and the chatbot replies with a hardcoded message mapped to that intent.

However, intent mapping is rigid and restricted. This limitation makes typical intent-based chatbots frustrating for users wanting to ask natural, open-ended finance questions that don't fit neatly into pre-scripted buckets. For instance, when customers inquire about comparing credit card products or ask for personalized account recommendations, legacy chatbots falter.
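
For illustration, here is a minimal sketch of the intent-mapping pattern. The intents, keywords, and canned responses are hypothetical, not our client’s actual configuration; the point is how a natural, cross-product question falls straight through to the fallback:

```python
# Hypothetical sketch of an intent-mapped chatbot (illustrative only).

INTENT_MAP = {
    "check_balance": {
        "keywords": {"balance", "how much do i have"},
        "response": "You can view your balance in the Accounts tab.",
    },
    "reset_password": {
        "keywords": {"password", "locked out"},
        "response": "Use 'Forgot password' on the sign-in screen to reset it.",
    },
}

FALLBACK = "Sorry, I didn't understand that. Please pick an option from the menu."


def respond(user_message: str) -> str:
    """Match the message against predefined intents; anything else gets the fallback."""
    text = user_message.lower()
    for intent in INTENT_MAP.values():
        if any(keyword in text for keyword in intent["keywords"]):
            return intent["response"]
    return FALLBACK


# A natural, open-ended question doesn't match any predefined intent:
print(respond("Which of your credit cards is best for someone who travels a lot?"))
# -> "Sorry, I didn't understand that. Please pick an option from the menu."
```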

“Our product owner was hoping this effort would prove that large language models were superior to their existing intent mapping chatbots in every way, and he told us verbatim that our team blew intent mapping out of the water.”

Conner Brew
WillowTree Project Director, AI
AI Governance Evangelist

The Risks of Conversational AI in Banking

On the other hand, deploying open-ended conversational AI in banking poses serious challenges. Financial institutions handle sensitive customer data and face complex compliance requirements from regulatory bodies like the SEC and FINRA. Two primary risks emerge when integrating sophisticated conversational AI assistants: hallucination (confident but factually incorrect responses) and jailbreaking (adversarial prompts that push the model outside its guardrails).

So, while conversational AI assistants enable personalized banking experiences, their open-ended nature differs enormously from rigid predefined chatbots limited to narrow topics. Safely adapting generative AI for regulated industries is an immense challenge.

The stakes were high: balancing sophistication and security necessitated tradeoffs and a multilayered technical approach spanning data readiness, large language models (LLMs), systems architecture, and UX/UI design.

WillowTree happily stepped up to the challenge.

Our Solution

WillowTree's Dual-LLM Safety System

WillowTree implemented a modular bank chatbot architecture with two key components: Retrieval-Augmented Generation (RAG) and a “Supervisor” LLM. RAG grounds the assistant’s answers in approved, bank-specific content, while the supervisor LLM checks that responses stay within guardrails.

This dual-model approach contained the conversational range and specificity of our client’s products and services while allowing sophisticated discussions beyond the limits of predefined chatbot intent mapping.

Staying Within Guardrails
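
At a high level, the flow looks like the sketch below. This is a simplified illustration of the dual-LLM pattern, not the production system: `retrieve_documents` and `call_llm` are hypothetical stand-ins for the client’s retrieval index and LLM provider, and the prompts are abbreviated.

```python
from typing import Callable


def answer(
    question: str,
    retrieve_documents: Callable[[str], list[str]],  # stand-in for the retrieval index
    call_llm: Callable[[str], str],                  # stand-in for the LLM provider
) -> str:
    """Draft an answer with RAG, then gate it behind a supervisor LLM."""
    # 1) Retrieval-Augmented Generation: ground the draft in approved,
    #    bank-specific content rather than the model's open-ended memory.
    sources = retrieve_documents(question)
    draft = call_llm(
        "Answer the customer's question using ONLY the sources below. "
        "Stay in the bank's brand voice and do not give specific investment advice.\n\n"
        "Sources:\n" + "\n".join(sources) + f"\n\nQuestion: {question}"
    )

    # 2) "Supervisor" LLM: an independent check that the draft stays within
    #    guardrails (grounded in the sources, on-topic, no regulated advice).
    verdict = call_llm(
        "You review a bank chatbot's answers for compliance. Reply APPROVE if the "
        "answer is grounded in the sources and gives no specific investment advice; "
        "otherwise reply REJECT.\n\n"
        "Sources:\n" + "\n".join(sources) + f"\n\nAnswer:\n{draft}"
    )

    if verdict.strip().upper().startswith("APPROVE"):
        return draft
    # Fail safe: return a pre-reviewed fallback rather than an unapproved draft.
    return "I can't help with that here, but I can connect you with a banker."
```

The design choice that matters is that the supervisor sees both the draft answer and the sources it was supposed to draw from, so ungrounded or off-policy responses can be caught before a customer ever sees them.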

Results

A Superior, Safe Conversational Experience

In just eight weeks, WillowTree's prototype blew older intent-mapped chatbot capabilities out of the water with its human-like conversational abilities and more expansive range of responses. Importantly, our financial services client gained confidence that generative AI could be deployed safely and responsibly within regulatory requirements.

Our eight-week effort formed the basis for our GenAI Jumpstart accelerator program, a crucial prototyping step in our client’s journey toward deploying a safe and compliant public-facing virtual AI assistant.

“Following a successful delivery with a major financial services institution in North America, we codified our GenAI Jumpstart offering to provide eight weeks of AI development around a particular use case that a client brings to us, viable across any industry.”

Charley Adams
Director of Business Development, WillowTree

Key Takeaways

  • Conversational AI presents vast opportunities but also serious risks in regulated sectors.
  • A rigorous and innovative architectural framework of AI-enabled and human-in-the-loop processes, guidelines, and oversight is required for safe deployment.
  • WillowTree's expertise in ethical AI implementation unlocks cutting-edge functionality while minimizing and mitigating risks.

“This is amazing, in terms of how fast you have put this together. Kudos to you guys.”

Client Engineering Lead

"I can't sing your praises enough. You got me to drink the kool-aid on WillowTree."

Client Product Owner