
Addressing bias in AI-fueled knowledge systems

June 11, 2024

By Susan Jenkins

Librarians contribute their unique skills to counter the impact of bias in AI models

As AI technologies become more integrated into applications across the research spectrum, there is increased attention on the quality of the information used to train the models behind them. This reflects a growing concern that these technologies will perpetuate existing biases in society. Librarians are finding opportunities not only to evaluate but also to shape the direction of AI’s integration into areas that touch society directly, such as healthcare.

Partnering for improved outcomes

While these technologies can perpetuate the biases that have persisted in knowledge systems – and society at large – “it doesn’t have to be that way,” according to Dr. Leo Anthony Celi, Clinical Research Director at MIT’s Laboratory for Computational Physiology. For Dr. Celi, it has become critical to evaluate the AI models behind technology used to diagnose and treat health issues, highlighting their potential to deepen existing health disparities (see below).

“Librarians are the wardens of the knowledge systems”

Dr. Leo Anthony Celi, MD, MSc, MPH
Senior Research Scientist, Massachusetts Institute of Technology

Dr. Celi actively recruits librarians to contribute to his lab’s research studying how bias in AI emerges in healthcare knowledge systems and how to mitigate it. Given their background in understanding knowledge systems, their advocacy for information literacy and their experience identifying gaps in research, “librarians are the wardens of the knowledge systems,” Dr. Celi explains. Librarians can apply these unique skills to AI evaluation and be integral to efforts to counter its deficits. They are also at the forefront of the wise adoption of AI tools in the work of research itself.

We recently sat down to speak with Dr. Celi and three librarians working with his team to discuss some of their projects:  

  • Rachel S. Hicklen, MSLS – Research Services Manager, Research Medical Library, University of Texas MD Anderson Cancer Center, Houston, TX, USA

  • Megan McNichol, MLS, AHIP – Manager, Division of Knowledge Services, Department of Information Services, Beth Israel Lahey Health, Cambridge, MA, USA

  • Lynne Simpson, PhD – Library Manager for Information Services, Morehouse School of Medicine, Atlanta, GA, USA

  • Leo Anthony Celi, MD, MSc, MPH – Senior Research Scientist and Clinical Research Director, MIT Laboratory for Computational Physiology; Co-Director, MIT Critical Data

Bias in AI 

The information used to create AI models for healthcare has been built on literature drawn mainly from limited populations in high-resource settings, predominantly males of white European ancestry. Wide acknowledgment of this issue has spurred efforts to diversify research populations to more fairly reflect age, sex, geographic and socio-ethnic backgrounds. However, disparities in the published literature remain potential sources of bias in AI-based diagnostic and predictive tools, and these biases tend to compound inequities that already exist in societies.

Here are just two examples. A 2019 paper from Ziad Obermeyer and colleagues, “Dissecting racial bias in an algorithm used to manage the health of populations,” describes an algorithm used to predict and allocate clinical demand based on an analysis of healthcare cost reimbursements across a broad population. Because “Black patients with similar disease severity to White patients tend to access less care…the prediction model underestimated Black patients’ illness severity, resulting in fewer resources dedicated to [those] patients” compared with their White counterparts.
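A toy simulation can make that mechanism concrete. The sketch below is not the paper’s data or model – every number is invented – but it shows how a model that predicts cost as a proxy for need will systematically under-flag a group that accesses less care at the same illness severity:

```python
# Toy illustration of label-choice bias: predicting healthcare cost as a
# proxy for healthcare need. Every number is invented for this sketch;
# this is not the dataset or model from Obermeyer et al.
import random

random.seed(0)

def simulate(group, n=5000):
    """True severity (0-10) is identically distributed in both groups,
    but group B is assumed to access ~40% less care, so it generates
    lower cost at the same severity."""
    patients = []
    for _ in range(n):
        severity = random.uniform(0, 10)
        access = 1.0 if group == "A" else 0.6  # assumed access gap
        cost = severity * access * 1000        # cost tracks care received
        patients.append((group, severity, cost))
    return patients

patients = simulate("A") + simulate("B")

# A cost-predicting model effectively ranks patients by expected cost.
# Flag the top 10% by cost for extra clinical resources.
patients.sort(key=lambda p: p[2], reverse=True)
flagged = patients[: len(patients) // 10]

share_b = sum(p[0] == "B" for p in flagged) / len(flagged)
print(f"Group B share of flagged patients: {share_b:.0%}")  # far below 50%
```

Both groups are equally sick by construction, yet ranking on cost routes nearly all of the extra resources to group A – exactly the pattern the paper documents.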

In “Bias in artificial intelligence algorithms and recommendations for mitigation” (2023), to which librarian Rachel Hicklen contributed, the authors identified five stages where bias can be introduced into an AI system: the initial research question, data collection, data pre-processing, model development and validation, and model implementation. They also proposed a checklist with recommendations for reducing bias at each of these steps.
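As a rough sketch of how such a checklist might be operationalized – the five stage names follow the paper, but the example questions here are illustrative placeholders, not the authors’ actual checklist items – a review team could walk a structure like this for each project:

```python
# Minimal sketch of a per-stage bias audit, assuming the five stages
# named in Hicklen and colleagues' paper. The example questions are
# illustrative placeholders, not the paper's actual checklist items.
from dataclasses import dataclass, field

@dataclass
class Stage:
    name: str
    questions: list[str]
    answers: dict[str, bool] = field(default_factory=dict)  # question -> resolved?

PIPELINE = [
    Stage("research question", ["Is the question framed around the population it will affect?"]),
    Stage("data collection", ["Are under-served groups represented in the sample?"]),
    Stage("data pre-processing", ["Could cleaning steps drop records unevenly across groups?"]),
    Stage("model development and validation", ["Is performance reported per subgroup?"]),
    Stage("model implementation", ["Is post-deployment monitoring for drift in place?"]),
]

def open_risks(pipeline):
    """List every checklist question not yet marked resolved."""
    return [f"{stage.name}: {q}"
            for stage in pipeline
            for q in stage.questions
            if not stage.answers.get(q, False)]

print("\n".join(open_risks(PIPELINE)))  # all five stages are open until reviewed
```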

Using AI wisely to enable more inclusive research processes

Megan McNichol has worked with Dr. Celi’s team to survey the literature on the sources of bias in electronic health record data, leveraging her role embedded with clinical research teams conducting systematic reviews – including reviews of diagnostic processes using AI. “AI is essentially what we do all the time as information specialists – these tools actually just help us get our jobs done. But we also have to be the voice of wisdom with research colleagues, to say ‘use this with caution.’”

Working on and completing close to a hundred systematic reviews over the years gives Megan the insight needed for training teams on how they should and shouldn’t rely on AI-enhanced tools. “There is a whole process that underlies these tools, and you have to understand why you’re doing that process, and not think of them simply as a shortcut. My role is to say, ‘remember that you are the expert, and you have to check the work of these tools,’ because there is bias in there.” 

“There is a whole process that underlies these tools, and you have to understand why you’re doing that process, and not think of them simply as a shortcut.”

Megan McNichol, MLS, AHIP
Manager, Division of Knowledge Services, Department of Information Services, Beth Israel Lahey Health, Cambridge, MA, USA

Megan uses the PRISMA checklist, a guide indicating what the clinician researcher needs to include in the written manuscript. AI tools can help if there is a good foundation in the protocols and a solid clinical research question. She advises her teams on how to align these and, “with AI and bias, to be careful and pay attention to the inclusions and exclusions” in the protocol. With a systematic or scoping review, identifying the question or a gap is ultimately “why the team is put together in the first place” – to produce more equitable knowledge. When AI tools support reviewer expertise, the result can be better reviews.

Understanding the dimensions of AI bias in knowledge systems 

In her work supporting research teams, Rachel Hicklen is concerned with “propelling the responsible progress” of how technology is applied and with protecting aspects of the healthcare system that support human dignity, such as privacy. “We try to make sure that our researchers understand that no patient data should ever be entered into these tools. Even if it’s just for grammatical checks, it’s sharing it.”

“While I think our database searches are the gold standard, not everything is in the traditional databases. AI allows us to cast a wider net and look at things that we may not have been able to see before.”

Rachel S. Hicklen, MSLS
Research Services Manager, Research Medical Library, University of Texas MD Anderson Cancer Center, Houston, TX, USA

She’s keenly aware of how the existing knowledge system’s flaws can be perpetuated by the new tools. “As librarians, we’re always super devoted to information literacy and providing access to reputable information, but with things changing so quickly in this landscape, I worry about how once something is cited, even if it’s later corrected, it’s impossible to stop the avalanche” of misunderstanding that follows. “These things still live on in the medical literature, which is a big risk.” At the same time, she believes AI research tools “allow us to cast a wider net and look at things that we may not have been able to see before.”

Rachel has co-authored scoping reviews with Dr. Celi and other researchers that study underrepresentation in health data, showing how it leads to disparities in health outcomes and suggesting strategies for overcoming these pitfalls when producing new AI tools based on that data. These studies not only provide a basis for AI development best practices but also give other librarians and researchers important insights into the “explainability” needed to build trust in AI models.

“We as knowledge practitioners need to be able to promote this type of thinking, critical thinking – truly being able to identify the flaws.”

Dr. Leo Anthony Celi, MD, MSc, MPH
Senior Research Scientist, Massachusetts Institute of Technology

Dr. Celi adds, “the idea is to truly understand ‘who are the people behind the papers that are getting published, are they representative of the people whose lives will be impacted by the findings of this paper?’ We as knowledge practitioners need to be able to promote this type of thinking, critical thinking – truly being able to identify the flaws.” 

Rachel’s current project is a collaboration with Dr. Celi’s laboratory and a graduate student, looking at “just the absolutely massive amount of information that’s being published and comparing large language models to human performance” in evaluating this deluge – deciding if a generated result “is really meaningful at all.”
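The project’s methods are the team’s to report; purely as a sketch of what “comparing large language models to human performance” can mean in screening work, agreement between model and human include/exclude decisions is often summarized with a chance-corrected statistic such as Cohen’s kappa:

```python
# Sketch of one way to quantify LLM-vs-human agreement on screening
# decisions: Cohen's kappa over include/exclude calls. The labels below
# are invented; this is not data from the project described above.
def cohens_kappa(a, b):
    """Chance-corrected agreement between two binary raters (1 = include)."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    p_a1, p_b1 = sum(a) / n, sum(b) / n
    # Agreement expected if the two raters labeled independently.
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (observed - expected) / (1 - expected)

human = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]  # hypothetical human screener calls
llm   = [1, 0, 1, 0, 0, 0, 1, 1, 1, 1]  # hypothetical LLM calls
print(f"kappa = {cohens_kappa(human, llm):.2f}")  # ~0.58 for these labels
```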

Educating healthcare providers to use AI wisely 

Practitioners assessing AI limitations need a basic literacy in cultural bias – also known as cultural competency – to apply those “critical thinking” skills. Librarians like Dr. Lynne Simpson have been developing this expertise for years. But teaching cultural competency in medical school requires a novel approach.

Dr. Lynne notes, “as an African American woman working in a medical school who’s devoted to advancing health equity, I see that there isn’t a resource that can effectively teach the next generation of physicians to not be influenced by these biases.”

“At the end of the day, we’re teaching people how to treat patients across all of their communities. We still have to come up with language that helps people learn how.”

Lynne Simpson, PhD
Library Manager for Information Services, Morehouse School of Medicine, Atlanta, GA, USA

The challenge is to integrate intangible knowledge into an education system that relies heavily on the tangible – the physical body and biological processes. “There isn’t one text or curriculum that’s standard for teaching cultural competence” in the way other topics are taught. Dr. Lynne’s current research with Dr. Celi’s laboratory will contribute to changing this by building a core language for teaching physicians and medical students how to think beyond the physical body and understand the broader context for the tools they will be using. “At the end of the day, we’re teaching people how to treat patients across all of their communities. We still have to come up with language that helps people learn how.”

Dr. Celi emphasizes how this awareness has arisen because of AI. “I credit AI for putting a mirror in front of us and showing us all the cracks in all the systems that we have, whether that's in knowledge systems, in education, or in healthcare delivery, and we should be grateful for this opportunity to overhaul them.” 

Cross-disciplinary spaces make a difference 

These librarians are contributing to a bigger strategy to bring attention to biases that affect underserved populations. According to Dr. Celi, this takes advantage of the newness of AI’s ubiquity, where “no one is an expert” yet – enabling people from all kinds of backgrounds to participate in shaping how it will be used. 

“It’s a time of crazy growth in the world of information and while it’s exciting, it’s also scary, so we all need to be really responsible with how we move forward.”

Rachel S. Hicklen, MSLS
Research Services Manager, Research Medical Library, University of Texas MD Anderson Cancer Center, Houston, TX, USA

For example, Rachel’s research proposed that AI model development should start with a team that is not only expert in the relevant disciplines but also representative of the diverse population where the models will be deployed. “AI is growing along with us learning about AI. It’s a time of crazy growth in the world of information and while it’s exciting, it’s also scary, so we need to be really responsible with how we move forward.”

In that vein, MIT Critical Data convenes regular events to “truly build a community that is more critical” – inviting a wide range of people to mingle insights from their different perspectives and lived experiences. “We make sure to have people across generations and backgrounds, professionals, high school students, doctors, pharmacists, and computer scientists” collectively exploring priorities and strategies to make AI-fueled knowledge more equitable and transparent. Dr. Celi emphasizes the benefits of this model: “I’m limited in what I can offer my students, but if I can offer my students twelve other teachers, and each other, I find I learn from them. This should be the way we educate – what we call ‘hive learning and village mentoring.’” Megan agrees that having diverse groups in learning situations is important: “Interdisciplinary education for medicine, for learning in general, is the way to build more critical thinking.”

The opportunity for librarians to influence AI development offers paths to institutional leadership as well: Dr. Lynne Simpson was asked to join Morehouse’s new committee to develop policy for how AI should be integrated into the medical school’s curriculum. Interdisciplinarity is how she sees the future of education, too: “When you really look at how students perform better, it’s when they work together. That makes not only for a better world in general, but definitely for a better physician – and something we need to impart to all disciplines, not just medicine.”

Call for collaborators:  

Are you a librarian based at a healthcare institution in a low- or middle-income region, interested in contributing research into AI and technological bias in healthcare? Dr. Celi would like to hear from you.

Dr. Celi founded and co-directs MIT Critical Data, a cross-disciplinary global consortium whose objectives are to scale clinical research to be more inclusive through open access data and software, particularly for limited-resource settings; to identify biases in the data and prevent them from being encrypted in models and algorithms; and to redesign research using the principles of team science and the hive learning strategy. The consortium is a platform for involving a more expansive community, with diverse lived experience and expertise, in investigating how to tackle bias in all aspects of healthcare and medical education.

Dive deeper into AI topics for librarians:

Key components of developing AI literacy at your institution covers the fundamentals you need to begin learning more about AI and to support your users’ education.

The role of AI in library services looks at making AI more accessible at your institution, including recommendations for supporting researchers and students.

Contributor

Susan Jenkins
Freelance writer and translator