

Conner [00:00:04] Through this exercise that we call "2 Weeks To Better," we challenge ourselves, over the span of two weeks of iterative design sprints, to take a really hairy, complicated problem and try to solve it as best we can.
Ryan [00:00:16] I feel really excited to get back on board with another "2 Weeks To Better" to really embrace that challenge of thinking creatively in a short period of time about how we can really push design within a particular space.
Conner [00:00:27] In this case we began in the healthcare space.
Sydnor [00:00:30] Over the past two weeks, we've had this awesome group thinking about the moment of receiving care and post-care discharge: where could AI be applied to move the needle on the experience of both the care provider and the patient?
Conner [00:00:45] And the designer is the one who really brought it all together into a real product concept. They took those opportunities identified through research, generated requirements, and turned them into these really high-fidelity concept mockups and prototypes of the doctor-facing experience and the patient-facing experience.
Ryan [00:01:03] Thanks all for taking the time to come in and do some critiquing. This is AI in healthcare and specifically thinking about how can we really improve the patient experience.
Ryan [00:01:14] How can we really support the patient after they've been released from the hospital? That was really the crux of the product. And so we're thinking about that in a couple of ways. One, how do we take the burden of recall off of the patient? Another thing is people want continued support, so if they have more questions or they need other things answered, how can we use the technology to support that? This is going to be the physician's experience, so imagine that they're looking at it on a tablet. Here they can see things like some of the previous conversations that have been had, any current medications the patient is on, and then here we've got the dashboard where they can begin the recording process. So after the conversation is over and transcribed, it essentially gets translated by the AI into this dashboard, so you'd have a summary or overview of the conversation, details about medications, any instructions that were given. This is essentially the patient-facing version of what you've already seen, so this is what the doctor has already published and approved.
Karolina [00:02:13] I'm wondering if we need to add a disclosure to the patient that all of their transcripts are being shared with their primary care physician here.
Ryan [00:02:22] Yeah, we want to disclose that. I think that's an excellent point.
Nicole [00:02:24] It's a pain to have to constantly be re-explaining your experience to all the different people in the chain. So having the context and being able to have that kind of assistant with you along the way is really helpful.
Karolina [00:02:37] That's a really good pain point that you called out. I hate having to constantly be like, "I just explained this!" I can see this being applied to caring for older adults in your life, too. Like, if you weren't able to be with them at the ER, being able to review those transcripts with them, to be like, "Well your doctor did say this, so let's follow up!"
Ryan [00:03:00] So working in very low fidelity, testing, seeing the results of that, iterating on that, allows us to really come up with a more refined product much quicker.
Sydnor [00:03:11] I was really impressed by the thoughtfulness that went into these prototype concepts. And what was great about them was that, because they were done so thoughtfully and specifically, we were able to get real feedback on them from industry experts: what would this actually look like in a care setting? What would we need to tweak? What's working well that we can build on, and what would we need to change?
Ryan [00:03:30] We love to perform user testing even at that low level of fidelity, because at that point we can still get an idea of how the work is performing.
Kristen [00:03:38] So let's explore what happens when we give it a rating. Is this what you expected to see when you... no? Okay, tell me how it's different.
Participant 1 [00:03:47] Too basic.
Kristen [00:03:48] Too basic?
Participant 2 [00:03:49] I don't like the emojis.
Participant 1 [00:03:50] Yeah, I don't like the emojis.
Kristen [00:03:50] Okay.
Ryan [00:03:53] I tried to be very thoughtful about pulling in others for critique because that was going to be key to producing a better result even in a compressed timeline.
Paul [00:04:00] One thing we could do to add more authority to these suggested resources is to add the source. We could add the Mayo Clinic link, maybe a date, to surface whether it's relevant and give some authority to the resource.
Ryan [00:04:15] Here, we're imagining this chat function that also functions in the same way, where someone can ask this chat in a very conversational way, "Hey, I'm experiencing some pain here. What does this mean? Should I go back to the hospital?" And it's gonna pull from the resources that the doctor has already trusted and vetted, as well as from the conversation that they had in the ER.
Paul [00:04:34] That's where this really starts to take shape and become interesting, because it's not just a de facto AI interface. It's going to learn based on your actual medical history, your ER visits, your doctor's input. That's brilliant.
Ryan [00:04:49] I'm really excited about how the work has developed in the last two weeks. Excited to speak to the people at UVA. The deck is ready to go, ready to present. I think it'll be great to get the insight from some professionals in this space who really can think about how this would be put into practice in real life. So having the prototype as something to react to is obviously a really great place to start. Really looking forward to seeing where it goes.