Generative AI: The ‘Wild West’ or a lever for care transformation? Leaders weigh in
February 9, 2024
Most leaders believe generative artificial intelligence is on the brink of transforming healthcare. In a recent Becker’s LinkedIn poll, 64% of hospital and health system leaders said this technology will have a huge impact on the industry in the next five years.
Despite the promise and potential, many leaders also have concerns about the accuracy of AI-generated information, as well as the implications of this technology for patient safety, data security and more.
During an October advisory call sponsored by Elsevier's ClinicalKey, a panel of hospital and health system leaders discussed how their organizations are approaching generative AI and the concerns that have given them reason for caution.
Hospitals and health systems have mixed feelings about generative AI
Although all participants in the advisory call were familiar with generative AI, the majority are still in the initial phases of evaluating this technology, and many are taking a cautious approach.
“One-quarter of our organization thinks we aren’t moving fast enough, another quarter is extremely skeptical and the remaining half has no clue what to do with all the information coming in,” said the senior vice president and chief clinical officer of a large Southern health system. “We serve a very challenging rural population with fairly extreme and adverse social determinants of health, so we aren’t trying to be first movers in this space.”
To be valuable, generative AI must prioritize a patient-centric approach
For many healthcare leaders, the applicability of generative AI tools to patient populations is an open question. Organizations want to ensure the data used to train these tools is representative of their communities. “I work in pediatrics, and a lot of effort is focused on adult patients,” said the chief quality officer from a Southern pediatric health system. “How applicable are these tools to our pediatric population?”
One participant noted that of the 118 AI tools cleared by the FDA in 2021, only one described the geographic and racial breakdown of the patients it was trained on.
Patient data ownership is another concern. If patient information is fed into large language models, generative AI vendors may profit from that data.
“I’m not sure we always think through the complexities of data ownership,” said the chief nursing informatics officer and executive director of clinical professional practice at a Midwest academic medical center. “I have questions about whether third-party AI companies should make millions of dollars from patient information. Where do patients stand in that reimbursement value chain? Will this be the modern-day version of Henrietta Lacks? We need to be more thoughtful this time around about making sure patients who provide data receive benefit for their contribution.”
The vice president and chief information officer at another Southern health system agreed. “How do we even begin to obtain informed consent in this arena from patients who haven’t even begun to understand what generative AI means?” she said.
Generative AI-related mistakes & unintended consequences keep clinicians up at night
At a Great Plains academic medical center, academic and research faculty are eager to use generative AI, while clinicians are anxious about mistakes that could result from the technology.
The chief information security officer at a Midwest health system stated, “We’re trying to establish guardrails, so people can use generative AI safely. We need to ensure the integrity of the information we produce, whether it’s educational material, clinical material or treatment recommendations. People need to adopt a mindset similar to, ‘Just because you read something on the Internet, it isn’t always true.’ Generative AI is a different tool, but it’s the same idea.”
The vice president and chief information officer at a Southern health system raised another scenario that worries clinicians: “What happens when a generative AI tool recommends a diagnosis or a treatment, and the doctor takes a different route which turns out to be wrong?”
Unintended consequences are another worry. “I think generative AI has incredible power to synthesize data, and the potential is there to make things easier,” said the chief quality officer at a Southern pediatric health system. “My concern is that there will be unintended consequences. It will help us, but how will it also hinder us?”
Governance structures & transparency are critically important for generative AI
Many perceive the current generative AI landscape as the Wild West. Myriad opportunities for innovation exist, but sound oversight is essential. Output from generative AI tools can be erroneous yet appear authoritative, and organizations recognize the need for guardrails to keep clinicians and other staff from running into problems. This work often requires a team approach, with clinicians, IT and operations staff all participating.
“We have a multidisciplinary leadership group centered on informatics and technology that is guiding where we need to go with generative AI,” said the vice president of medical affairs and chief patient safety officer at a Southern health system. “Maintaining boundaries and control around it is difficult. We don’t have all the answers at this point in time.”
Visibility into AI algorithms is also needed to validate use cases, prevent biased results and generate trust.
“Clinicians, IT and our CMIO are looking at opportunities to use generative AI to enhance patient safety and reduce biases,” said the vice president of quality and patient safety at an East Coast nonprofit health system. “There will need to be a lot of validation done, however, to ensure tools aren’t exacerbating those problems.”
Data security and patient privacy are additional concerns. Generative AI has the potential to expand the footprint of protected health information.
“The problem is that generative AI is conversational,” said the chief information security officer at an East Coast nonprofit health system. “How do we ensure the tools are intelligent enough not to leak secrets unintentionally? It’s not the underlying data sources that people will attack. It’s the front door of generative AI.”
While generative AI has the power to process massive amounts of data, clinicians need to know that its insights are rooted in evidence-based sources. For its integration into healthcare workflows to succeed, generative AI must uphold trust and ethical standards while meaningfully enhancing the delivery of patient-centered care.
ClinicalKey AI: Hospitals and health systems
Download here