The potential global economic impact of generative AI is in the trillions of dollars as organizations seek increased productivity, lower costs, and better decision-making. Such outcomes would be a boon for industries like healthcare, which the World Health Organization estimates will face a shortfall of 10 million workers by 2030.
Fortunately, generative AI in healthcare is making it easier to automate routine tasks. In turn, it’s impacting health professionals’ lives for the better, allowing organizations like major hospitals to focus on more personalized care and better patient outcomes.
But you have to know where to look for these success stories. These three examples of AI in healthcare come from my Wall Street Journal bestseller, The Sound of the Future: The Coming Age of Voice Technology. Each presents a unique use case for AI in healthcare that you can use to inform your organization’s strategy.
Mass General Brigham is the parent organization of two of the nation’s most prestigious medical institutions, Brigham and Women’s Hospital and Massachusetts General Hospital in Boston. Its story shows how deploying generative AI in healthcare can happen swiftly and effectively when the right combination of factors comes together.
With two of the nation’s top hospitals serving one of the nation’s most populous metropolitan areas, Mass General Brigham’s staff faced a surge of patients after the COVID-19 outbreak. In response, leaders set up a hotline staffed by expert nurses to answer callers’ questions (e.g., how to identify symptoms, where to get tested, and how to determine if emergency care is needed).
But the hotline was overwhelmed just hours after it launched. Demand skyrocketed, and thousands of distressed patients faced average wait times of more than 30 minutes.
Searching for ideas, leaders learned that Providence health system in Seattle — which treated some of the first American COVID-19 patients — had collaborated with Microsoft to build an online screening chatbot. That tool successfully served more than 40,000 patients in its first week.
To build its own model capable of answering callers’ COVID-19 questions without live intervention from a doctor or nurse, Mass General Brigham’s team started with the screening questions developed by the Centers for Disease Control and Prevention (CDC).
The resulting AI-powered voice system could handle most callers’ key questions and even generate a preliminary health status, complete with a referral to an urgent care center, primary care physician, or emergency room if necessary.
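At its core, a screening system like this can be modeled as a rules layer over structured answers to CDC-style questions, with the voice front end mapping a caller’s responses onto those fields. The sketch below is purely illustrative — the field names, symptom checks, and referral thresholds are assumptions for demonstration, not Mass General Brigham’s actual logic.

```python
# Illustrative triage sketch: maps structured answers from CDC-style
# screening questions to a preliminary status and a referral.
# All fields and thresholds here are hypothetical.
from dataclasses import dataclass


@dataclass
class ScreeningAnswers:
    trouble_breathing: bool
    persistent_chest_pain: bool
    fever: bool
    cough: bool
    known_exposure: bool


def triage(answers: ScreeningAnswers) -> str:
    # Emergency red flags come first, mirroring how screening tools
    # prioritize severe symptoms over everything else.
    if answers.trouble_breathing or answers.persistent_chest_pain:
        return "emergency_room"
    # Symptomatic but stable: route to urgent care / testing.
    if answers.fever and answers.cough:
        return "urgent_care"
    # Exposure without symptoms: primary care follow-up.
    if answers.known_exposure:
        return "primary_care"
    return "self_monitor"


print(triage(ScreeningAnswers(False, False, True, True, False)))  # urgent_care
```

A real deployment would layer speech recognition and dialogue management on top of logic like this, but the decision structure — ordered checks from most to least severe — is what lets such a system answer most callers without live clinical staff.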
The new chatbot reduced the flood of calls to a manageable level. Patients got the reassurance and expert guidance they needed quickly — no more long hold times.
But perhaps most significant, Mass General Brigham’s leadership recognized that modern health crises carry a growing risk of events that escalate exponentially, as the COVID-19 pandemic did — and that AI helps major healthcare providers respond to them. The team reflected on its experience for Harvard Business Review, writing:
“Our economy and healthcare systems are geared to handle linear, incremental demand, while the virus grows at an exponential rate. [ … ] Moreover, traditional processes deliver decreasing returns as they scale. On the other hand, digital systems can be scaled up without such constraints, at virtually infinite rates. [ … ] We hope and anticipate that after COVID-19 settles, we will have transformed the way we deliver healthcare in the future.”
Training AI voice tools is demanding, but Mass General Brigham’s story shows there are circumstances where generative AI solutions can and should be deployed rapidly. Of course, this deployment should also be handled rigorously and ethically. Mass General Brigham accomplished that by:
WillowTree specializes in helping businesses in highly regulated industries rapidly prototype similar generative AI solutions — for example, we delivered a safe conversational AI assistant for a major financial services provider in just eight weeks. Learn more about our GenAI Jumpstart accelerator.
At Vanderbilt University Medical Center in Nashville, practicing physician and assistant professor of pediatric endocrinology Dr. Yaa Kumah-Crystal leads the Vanderbilt Electronic Health Record (EHR) Voice Assistant initiative. Through her work incorporating voice interfaces into the hospital’s workflows, Kumah-Crystal has become a leading innovator in using voice technology to streamline and improve the delivery of medical care.
With many of her colleagues already using voice technology for specific tasks — like voice dictation for accurately recording clinical notes without the labor of typing — developing AI-powered voice tools seemed like a smart move to make many routine tasks easier.
Kumah-Crystal and her team worked with Epic Systems, a software company specializing in electronic health record management. In addition to handling more than 50% of patient information files in the United States, Epic Systems designs and tests an array of voice-based healthcare applications. Because Epic continually gathers feedback from the organizations it serves (Vanderbilt’s medical center is one of several hundred facilities across the country using Epic’s tools), each iteration is easier to use and more efficient than the last.
Efforts like this have led to voice tools such as V-EVA, the Vanderbilt University Medical Center voice assistant. V-EVA can provide caregivers with a summary of basic information about a specific patient in response to a voice command.
This is a significant example of generative AI leveraging multimodal design because V-EVA responds with text on a screen instead of the audio or voice response one receives with assistants like Alexa or Siri. Glancing at information on a screen is more efficient for busy doctors and nurses than listening to a spoken response, saving critical seconds.
Physicians spend much more time operating heads-up and hands-free thanks to offloading manual tasks on the fly. Voice tools now make it possible to order lab tests, place medication orders, request patient condition updates, and more, all by speaking.
But perhaps most significant is how these efficiency gains will accelerate as the AI behind Vanderbilt’s voice tools becomes smarter. Kumah-Crystal explained it like this:
“Right now, our voice assistants are like medical students — beginning to learn what it takes to be a healthcare professional, but still needing a lot of specific guidance to understand what they are supposed to do. In the future, they’re going to become much more sophisticated. For example, they’ll be empowered to listen to ambient sounds, including ongoing conversations among healthcare providers and patients, and to begin drawing conclusions about the kinds of support services the professionals need. Then, the voice assistant can take proactive steps in response. …
“Over time, I see our voice assistants becoming more proficient and experienced and finally ‘graduating’ from being like medical students to being like resident physicians or expert nurses. Eventually, the voice assistant will be a ridiculously smart and talented colleague in the room with you — a great doctor or nurse who intuits what is needed and moves quickly to provide it, with a minimum of fuss.”
Kumah-Crystal’s attitude toward Vanderbilt’s medical voice assistants is a healthy one. Integrating voice technology into your organization is an ongoing, always-iterating effort. No generative AI model is perfect at launch. Instead, it’s perfected over years of continual improvement.
Like Kumah-Crystal, embrace a builder’s mindset. It will help you treat the inevitable issues that pop up as opportunities to improve your AI applications, and to welcome every piece of feedback on what works and what doesn’t.
Vocable is a free augmentative and alternative communication (AAC) iPhone and iPad app that helps speech-impaired patients and their caregivers communicate. The WillowTree team developed Vocable to help one of our own: former WillowTree designer Matt Kubota’s partner Ana was diagnosed with Guillain-Barré Syndrome.
The autoimmune disorder left her largely unable to communicate except by blinking at individual letters on an alphabet poster. Matt and the WillowTree team knew they could build a better experience for Ana than the expensive, bulky, and rudimentary options available, and this was the origin of Vocable AAC.
The original iOS version of Vocable used head and face tracking via an iPhone or iPad’s front camera to help patients select letters, words, and custom phrases on the screen. This UX was a huge leap forward for people like Ana. Still, the progress showed how far there still was to go.
Speech-impaired patients and their caregivers deserved something better than stilted, call-and-response exchanges. We wanted to continue pushing Vocable’s capabilities toward natural conversation.
By integrating conversational AI into Vocable, users now have broader and more contextually relevant responses when interacting with caregivers. The ChatGPT-powered conversational AI enhances Vocable’s predictive pattern detection and semantic understanding of a caregiver’s speech — the result: less canned and more natural-sounding responses from Vocable’s generative AI.
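One common pattern for this kind of integration is to send the caregiver’s utterance to an LLM and present its short candidate replies as selectable options. The sketch below is an assumption-laden illustration of that pattern, not Vocable’s actual code: the prompt wording is invented, and the model call is injected as a `complete` callable (in production, a wrapper around a chat-completions API) so the example runs without network access.

```python
# Illustrative sketch: LLM-assisted reply suggestions for an AAC app.
# The prompt and the injected `complete` callable are hypothetical.
from typing import Callable, List


def suggest_replies(caregiver_utterance: str,
                    complete: Callable[[str], str],
                    n: int = 3) -> List[str]:
    """Ask an LLM (via the injected `complete` function) for short
    candidate replies the user can pick from on screen."""
    prompt = (
        "A nonspeaking user is in a conversation. The caregiver just said:\n"
        f"{caregiver_utterance!r}\n"
        f"Suggest {n} short, natural replies, one per line."
    )
    raw = complete(prompt)
    # Keep at most n non-empty lines as selectable options.
    return [line.strip() for line in raw.splitlines() if line.strip()][:n]


# Usage with a stubbed model, standing in for a real API call:
stub = lambda prompt: "Yes, please.\nNot right now.\nCan you repeat that?"
print(suggest_replies("Would you like some water?", stub))
```

Injecting the completion function keeps the suggestion logic testable offline and makes it trivial to swap model providers — a useful property for any app whose conversational quality depends on a rapidly evolving third-party API.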
And for those using Vocable on Vision Pro, the experience becomes even more conversational. Vision Pro’s spatial computing gives Vocable far more visual real estate, embedding the user interface within their natural field of vision. Vocable also empowers users to navigate and choose responses faster, thanks to enhanced accessibility features like eye tracking.
For the 17.9 million American adults suffering from speaking difficulties, the free Vocable AAC app on an Apple Vision Pro is a far more affordable option than traditional speech-generating AAC devices, which average around $15,000 and offer fewer capabilities.
Moreover, Vocable’s capabilities extend to a broad range of nonverbal and nonspeaking individuals, including stroke and trauma survivors, people living with MS and ALS, and even individuals with autism or intubated in hospitals.
For physicians like Dr. John M. Costello of Boston Children’s Hospital, the accessibility Vocable and the Vision Pro offer is much-needed in healthcare today:
“When so few hospitals have available technology for the communication-vulnerable, it's exciting that a platform like Vocable exists. Vocable is easy to access and, most importantly, free."
Vocable is a unique example of applying conversational AI in healthcare because, in most cases, only one side can speak. That makes it inherently multimodal because users need a range of options for reviewing and choosing responses. But multimodal design is just as necessary in other applications.
Think back to Vanderbilt University Medical Center, where glancing at generative AI’s responses on a screen made nurses’ and physicians’ lives much easier. If they had to listen to the same information read aloud, the staff would have been slower, not faster.
That’s the power of multimodal design: Generative AI’s responses reach the user in the most efficiency-enhancing and nondisruptive way possible.
WillowTree has a strong track record working with healthcare and life sciences companies to define and develop next-generation solutions. As a leading AI consulting company, our areas of expertise span: