Mass General Brigham CMIO on AI: 'exciting, but a little anxiety-provoking'

For the newest in our interview series with IT leaders about artificial intelligence's potential, Dr. Rebecca Mishuris shows where MGB is getting results, from clinician burnout to patient experience – and she stresses the need for responsible AI.
By Bill Siwicki

Dr. Rebecca G. Mishuris, chief medical information officer and vice president at Mass General Brigham

Photo: Dr. Rebecca G. Mishuris

Editor's Note: This is the fifth in a series of features on top voices in health IT discussing the use of artificial intelligence in healthcare. To read the first feature, on Dr. John Halamka at the Mayo Clinic, click here. To read the second interview, with Dr. Aalpen Patel at Geisinger, click here. To read the third, with Helen Waters of Meditech, click here. And to read the fourth, with Sumit Rana of Epic, click here.

Dr. Rebecca G. Mishuris has her hands full when it comes to artificial intelligence as chief medical information officer and vice president at Mass General Brigham, and assistant professor of medicine at Harvard Medical School. 

Among other use cases, the health system currently is using advanced ambient AI technology to document doctor-patient conversations and has deployed large language model AI – akin to that used by the popular ChatGPT – to draft replies to patient portal messages. 

(Watch our recent CIO Spotlight interview with Mishuris' colleague, Mass General Brigham Chief Information Officer Dr. Adam Landman, for more on their work with generative AI.)

Further, Mass General Brigham is working with its EHR vendor, Epic, to use AI to summarize vast patient charts based on different clinical contexts, such as a new primary care physician wanting to get an overview of a new patient.

We spoke recently with Mishuris to get an in-depth look at how MGB is making use of AI – but also to discuss two top areas of concern for her: responsible use of AI and employing AI to support care teams to deal with staffing shortages and defeat clinician burnout.

Q. Broadly speaking, how can provider organizations, C-suite executives and other health IT leaders promote responsible use of AI?

A. All of the work we're doing needs to be founded on responsible use of this technology, just like we responsibly use lots of other technologies, like electronic health records or our phones. So it really starts with understanding your principles and north stars as an organization.

What are we seeing other organizations do in this space? Where are the regulations headed? (They are currently quite a bit in flux.) Then it's coming together with leaders in your own organization, or across organizations, to really define your guiding principles for using artificial intelligence.

Artificial intelligence has been around for quite some time, decades. Generative AI is clearly the newest kid on the block in the AI space, and it brings with it some aspects that didn't previously exist with analytic AI. It is really causing us to rethink how we approach the use of artificial intelligence.

First, remember AI is really just another tool in our toolkit for solving our healthcare challenges, whether that's clinician burnout, patient experience, outcomes, safety, cost and so on.

It's really just another digital tool for us to use, but it has things that are different – particularly the fact that it learns, unlike most systems we've had to date. Artificial intelligence continues to learn, and with generative AI it is taking on much more of what we have traditionally attributed to human capabilities, like generating new content.

And so as we think about those things, we have to start thinking about what principles we're going to use as we deploy these tools. Organizations can think about things like privacy and security, very bread-and-butter technology aspects, and then about transparency and accountability.

Now we start to think about things much more related to these learning systems rather than more traditional technology tools. Because when we think about transparency and accountability, the question is: what is the technology doing? We've typically had technologies you could very clearly explain. What's the algorithm? How is it working?

When you get down to it with generative AI, at some point it becomes really hard to explain what it's doing – to the point where some developers don't understand how their technology is working anymore.

And that gets us to a place that's exciting, but also a little anxiety-provoking. Thinking about things like accountability and transparency becomes really important. Then we start to think about fairness and equity, right? At the root of healthcare, we should always have equity as one of our foundational principles.

Particularly when we start to talk about these technologies and how they might work differently in different populations, we have to think about the equitable nature of deploying them. And then there are other things to think about. Chief among them in a healthcare organization is what's the benefit to the organization?

We really should only be using digital tools that have a benefit, whatever that may be, to our organization, to our patients, to our users, to healthcare development and scientific inquiry in general. That becomes a really important aspect as we look at these tools. So get together as your own organization and think about what your guiding principles are going to be.

And then putting that out there for your users, for your developers, for the people who are going to bring these solutions to you to evaluate – this becomes incredibly important. Then we are all working on the same foundation, we're all working from the same guiding principles, and we're also starting to demand things of vendors and developers, so they are building solutions that meet our needs and our guiding principles as healthcare organizations.

The last piece is the regulatory space. As I mentioned, there is fast change in the regulatory space related to AI. And we have to stay on top of where that is and also influence it. Many organizations are signing on to different pledges, to different commitments to responsibly using AI. 

And what does that mean for each organization and for the collaborations that are developing? How do we ensure we are doing this together in healthcare, so that it's not everyone out for themselves and we're sharing across organizations to get there?

Q. On another front, how can AI support provider organization care teams, especially with regard to clinician burnout and staffing shortages?

A. This is one of the spaces where many healthcare organizations are starting as they look to generative AI in particular. Staffing was a challenge even pre-pandemic, and certainly through the pandemic; now we have incredible shortages of staff, not just our physicians and advanced practice providers, but also our nursing staff.

And so we're thinking about the administrative tasks those folks have traditionally been doing that we could offload from them, so they can spend more time with patients, spend more time thinking about the clinical care that's being delivered, and spend less time on those administrative tasks.

It's not to replace the physician or the nurse practitioner or the nurse, but to give them time back to do the things they were trained to do that they want to be doing. Generative AI has the opportunity to really start to offload many of those administrative tasks and give people time back.

As we think about generative AI, part of the reason we're starting on this administrative side is because it's relatively low risk when it comes to patient care.

If I have generative AI that is writing a prior authorization letter for me, or writing a work note for a patient who has been out sick, that's fairly low risk from a patient outcome perspective. So, as we get more and more comfortable with these technologies, we may start using them in more of the clinical care, therapeutic and diagnostic spaces.

But right now we are in the low-risk area, and that's because this is relatively new technology and we have to get used to it in healthcare and, quite honestly, be sure it is going to improve health outcomes and do so in a reliable and consistent way.

Q. Let's move over to Mass General Brigham. Please talk about your AI-powered ambient documentation technology. What is it, how does it work and what outcomes have you achieved?

A. This is a very exciting space to be in for our clinicians. I'm one of them. I'm a primary care doctor. I am equally excited about this, not just from a technology perspective, but also from my own clinical care delivery perspective. This technology, it's not ours in particular, right? We are using vendors to do this, but it is generative AI that listens to a clinical visit and then generates my clinical note and documentation for me.

So, it's a secure application on my phone. I ask the patient if they'd be willing to let me record the visit for the purposes of helping me document the visit. I turn the application on, and I put the phone down on the desk. I then turn to the patient and talk to them and have my clinical visit with them, do the exam, do whatever needs to be done.

At the end of the visit, I press stop. Thirty to 90 seconds later, depending on how long the visit was, I have a fully formed clinical note that I put into the EHR as my note. I review it, make sure it is accurate, add anything, delete anything that I want to, and then sign it off in the EHR.
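For readers who want the mechanics, the flow can be pictured roughly like the minimal Python sketch below. Every name in it is hypothetical (the vendor's actual pipeline is proprietary), and the key point is simply that the draft is staged for clinician review rather than filed automatically.

# Illustrative sketch of the ambient documentation flow; all functions are
# stand-ins, not the vendor's actual API.

def transcribe(audio_path: str) -> str:
    # Stand-in for a speech-to-text service applied to the visit recording.
    return "Patient reports symptoms improved on the current medication dose..."

def draft_clinical_note(transcript: str) -> str:
    # Stand-in for an LLM call prompted to turn a transcript into a visit note.
    return f"SUBJECTIVE: {transcript}\nASSESSMENT/PLAN: ..."

def document_visit(audio_path: str) -> str:
    transcript = transcribe(audio_path)
    draft = draft_clinical_note(transcript)
    # The draft is only staged for the clinician, who reviews, edits and
    # signs it in the EHR; nothing is filed automatically.
    return draft

print(document_visit("visit_recording.m4a"))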

That is drastically different from how I've been practicing medicine to date, where typically I would sit at a computer and type as the patient was talking, so I didn't forget anything and so I could save myself a little bit of time after the visit. And then after the visit, I'd have to go in and fully complete that note, turning the gibberish notes to myself that I had put into the EHR into real sentences, and sign off.

So, this technology is doing a few things. One, it's saving an incredible amount of time for our providers, who already spend an incredible amount of time in the EHR, documenting after hours, after visits have been completed. And it's giving me that time back with my patient. I look at my patients now instead of at the computer during the visit.

Again, that is a drastically different experience than I have had practicing medicine since the advent of the EHR. And it's so drastically different that patients are actually telling us that. I've had patients tell me, "Gee, Dr. Mishuris, you didn't type anything today." And I said, "Nope, remember, we talked about that app on my phone."

I don't have to type anything anymore. And it really has great promise for reducing the burden of documentation, which we know contributes to clinician burnout. We anticipate, and the future is not far off, that these systems will also be able to queue up orders.

So, if I talk about increasing your medication from one dose to another, it'll be able to queue that up in the EHR for me, and all I'll have to do is sign off on it. So again, it's easing the burden of workflow and administrative tasks, and giving providers back time, whether in the evening with their families, to take on hobbies, or with their patients.

Q. Elsewhere at Mass General Brigham, you are using large language models to draft replies to patient portal messages. So how does this work? And what have the doctors and nurses had to say about the drafts?

A. This is again in coordination with our EHR vendor. It's a secure version of GPT-4, the large language model behind the popular ChatGPT, embedded into our EHR and drafting replies to patient portal messages. So, a patient sends in a message: asks for a refill of a medication, describes symptoms, asks for a change in an appointment, asks for a letter to go on vacation, what have you.

The large language model will draft the reply to the patient in line in the EHR. Then, whoever is looking at that first, for us it's often a pool of nurses who are doing that initial triage, they'll see that draft and they'll either be able to edit it and then send it back to the patient or start anew if they didn't like the draft at all.

A few things with this. One, the technology itself is completely embedded in our existing EHR workflows, so using it is incredibly easy. Two, it essentially forces the user to review the draft. You can't just send the draft off; you have to either edit it and send it, or start anew and send it.
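That forced-review gate can be pictured with a minimal sketch like the one below, assuming hypothetical names rather than Epic's actual implementation: the LLM draft has no direct send path, and only text a staff member has explicitly finalized can go out.

def send_portal_reply(draft: str, reviewed_text: str | None) -> str:
    # reviewed_text is whatever the nurse finalized: the edited draft or a
    # reply written from scratch. The raw draft itself has no send path.
    if reviewed_text is None or not reviewed_text.strip():
        raise ValueError("Reply cannot be sent without human review.")
    return reviewed_text

draft = "We see you have four refills left of X medication at the pharmacy..."
# The nurse reviews the draft, appends a signature and sends only the reviewed text:
outgoing = send_portal_reply(draft, draft + "\n- Triage RN")
print(outgoing)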

That is one of those checks and balances that we talked about related to the responsible use of AI, making sure there's a human in the loop in this process as we get used to this technology, as we understand how the technology evolves and learns over time. And then what it's doing again is giving people time back.

If I don't have to generate that draft myself, or my nurse doesn't have to do that, she's now got time back to call a different patient who maybe phoned in rather than sending a portal message, or to give a patient a vaccine as they're waiting in the clinic.

What we're hearing from our physicians, nurses and advanced practice providers is a few things. One, they're actually very surprised at how good the drafts are, so I think people are starting to learn how good these large language models can be. Two, it is saving them time: they feel they get time savings by using the drafts, and they are largely not having to edit the drafts.

So, a patient writes in asking for a refill of a medication, the large language model reads that message from the patient, but also looks in the chart to see that actually you have four refills left at the pharmacy. And the drafted reply is, "We see you have four refills left of X medication at the pharmacy. Please call the pharmacy to ask for a refill. If you call the pharmacy and they tell you there are no refills, please let us know."

And with that draft, the nurse wouldn't have to do anything. She could read it, put her name at the bottom, and send it off. So again, there's time savings from reducing the administrative burden of having to look through the chart for the medication, see there were refills left, write the note yourself and then send it off.

On the patient side, this is a space where we are really interested in understanding how patients perceive these messages, whether they perceive any difference at all, and what their experience is with the messages.

There was an article out of another organization that showed patients actually felt the messages generated by the large language model were more empathetic than the messages generated by humans. That is something we are looking into, to see whether our data replicates that or not.

But you could understand, right, somebody who's really busy, who has to get through hundreds of these messages, might be less empathetic than a computer that's just generating these things based on us prompting the computer to please be empathetic in your response. And so those are some of the patient-side aspects that we'll be looking into.

Q. You're working with Epic on using AI to summarize clinical charts based on the case scenario; for example, a new PCP, a follow-up after hospital discharge, a specialty care visit, etc. What is AI doing here, and how is this being designed to help caregivers?

A. So, this is a fairly classic use of a large language model: give it a tome of text and say, please summarize this for me. In this case, the tome of text is somebody's electronic health record, with all the notes, all the results and all the medications contained in that record.

And the development we're working on is to be able to say: As a patient's new PCP, please summarize their care over the last five years. As the physician discharging the patient from the hospital, please summarize their inpatient admission over the last week. As the inpatient physician who's admitting someone to the hospital, please summarize their clinical context for me so I can understand their presenting symptoms in the context of their larger medical conditions.

Again, this is an administrative task that can take an immense amount of time, because you're going through tomes and tomes of text, of results, of medications, to try to understand the context of this patient in their medical care.

And if we could automate that using a large language model, again, we're giving providers time back to spend at the bedside with the patient, thinking about their clinical presenting scenario and how to diagnose and treat that, rather than spending time diving into records and having to pick out the one little thing here or there.

Now, the development here is that we want to be able to give it a prompt that says: as a PCP, as an infectious disease specialist, as an ER physician, as a discharging physician, or as a nurse who's coming on shift and has to take care of a patient who is brand new to them but has been in the hospital for three weeks, please summarize what's going on with this patient.

We really want to make sure we get those summaries tailored to the context of care to actually be useful in that context of care. That's the development work we're working on. We are incredibly excited by it.
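One way to picture that context tailoring, purely as an illustrative sketch and not the actual development work with Epic, is a set of role-specific prompt templates wrapped around the same chart text:

# Hypothetical sketch of context-tailored chart summarization; the prompt
# wording and the llm_call interface are illustrative, not Epic's actual work.

PROMPTS = {
    "new_pcp": "As the patient's new primary care physician, summarize their care over the last five years.",
    "discharge": "As the discharging physician, summarize this inpatient admission.",
    "admission": "As the admitting physician, summarize the clinical context behind the presenting symptoms.",
    "oncoming_nurse": "As a nurse coming on shift, summarize what is going on with this patient right now.",
}

def summarize_chart(chart_text: str, context: str, llm_call) -> str:
    prompt = PROMPTS[context]  # unsupported contexts raise KeyError by design
    return llm_call(f"{prompt}\n\nCHART:\n{chart_text}")

# Works with any callable that maps a prompt string to a completion string:
summary = summarize_chart("...full record text...", "new_pcp", lambda p: "(model output)")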

We just talked about a few different use cases for generative AI in healthcare. But there are so many more, particularly as we move from the administrative burden space into clinical care. This space is going to explode. It has exploded over the last year, and it's going to continue to do so in the coming years and drastically change how we deliver clinical care.

To watch a video of this interview with bonus content not found in this story, click here.

Follow Bill's HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.

Want to get more stories like this one? Get daily news updates from Healthcare IT News.