How life sciences researchers regard and use AI
October 24, 2024
By Ann-Marie Roche

DMP/E+ via Getty Images
As Elsevierโs report on AI attitudes sparks conversation in the research and health communities, co-author Adrian Mulligan comments on its significance for corporate R&D
We can safely use an overused word like โgamechangerโ for ChatGPT. When released in November 2022, it triggered a revolutionary shift across many sectors, including R&D. However, two years later, while generative AI continues to evolve rapidly, we are still largely figuring out what these changes entail.
To help chart the state and future of AI and R&D, Elsevier recently released its report Insights 2024: Attitudes Toward AI, based on how nearly 3,000 researchers and clinicians worldwide gauge the attractiveness, impact and benefits of AI for their work and broader society. Participants were also asked what they needed in terms of trust and transparency to feel more comfortable using AI tools.
"While responses vary somewhat by region, most researchers and clinicians regard AI more positively than negatively," says Elsevier Research Director and report co-author Adrian Mulligan. "However, it's also clear we must establish more trust in AI's application through regulations and the use of more quality data."
Recently, we spoke with Adrian to delve into the findings and their implications for life sciences research.
What are scientists really thinking?
Adrian, who manages market research at Elsevier, explains: "Our focus is usually on customer experience tracking: finding out what the scientific community thinks about our services and solutions so we can help guide decision-making around how we can better serve this community – in terms of helping scientists acquire knowledge and creating new and better ways of working."
To this end, Adrian and the team dive deep. "We undertake market research projects that examine the challenges and concerns researchers and clinicians face," he says. "Ideally, these spark longer – and larger – conversations. It's about trying to understand what the community thinks about the dynamics around how new technologies are being used – and how this is shifting and changing. This is exactly what our report is about."
Between December 2023 and February 2024, Elsevier asked about 2,000 researchers, 1,000 clinicians and 300 corporate researchers to complete a 15-minute quantitative survey to gauge their attitudes towards artificial intelligence, including generative AI. The results were split by region, highlighting the US, China and India.
Sparking conversations โ and change
Adrian has managed similar studies over the years. Published a decade ago, Peer Review In A Changing World: An International Study Measuring The Attitudes Of Researchers showed how peer review remained a fundamental and valued process.
"The study re-examined peer review, which is at the core of scholarly communication," Adrian says. "And while peer review is highly regarded, it was clear it is not perfect and can be changed. The report offered concrete steps for improvement, including training reviewers, introducing standardized proformas to ensure consistency of reviews, and providing a reward mechanism for reviewers, such as acknowledgements in the journals."
The future of research

Adrian Mulligan, Research Director for Customer Insights at Elsevier, presents the original Research Futures report at the AAAS Annual Meeting in 2019. (Photo by Alison Bert)
More recently, Elsevier's report Research Futures 2.0 looked at how the research landscape might evolve over the next decade. "Typically in our sector, we might look forward three or five years into the future, so this timeframe was quite different," Adrian says. "We identified three possible different futures: 'Brave Open World,' in which open science is the norm; 'Tech Titans,' in which the influence of tech companies is predominant; and 'Eastern Ascendance,' where the locus of research shifts to China."
However, while the report correctly saw how tech companies would play a key role in the use of AI, it is fair to say it did not account for the dizzying rise of generative AI with the release of ChatGPT. The new Insights 2024 report is meant to fill that gap by collecting the latest views on AI from the life sciences research community.
When ChatGPT crashed the party
"Obviously, there has been a transformative shift over the last two years," Adrian says. "At the time of the Research Futures 2.0 report, AI was already very much part of the landscape but working in the background. Many researchers have been using AI for a long time. ChatGPT created excitement around the capabilities of AI and entered the public consciousness in a way it hadn't before, except perhaps in film. It went well beyond what everyone, including many researchers, thought AI could do.
"Suddenly, AI could create things and, importantly, was accessible. So naturally, we wanted to understand better the response of researchers, physicians and corporate researchers in this new context."
Different regions, different attitudes – to a point
The report insight that struck Adrian the most was how views differ regionally – namely that APAC countries, particularly China, tend to be less conservative than Europe and North America when it comes to embracing AI and the change that comes with it.
"At first, this may seem counterintuitive since North America is where many of the advances in AI originated," Adrian says. "But in fact, this may work to explain the caution – that the main players have had more time to consider the potential impact of their work. Of course, it's less surprising that clinicians are more cautious than researchers when it comes to AI – the nature of their role at the frontlines means any errors can have immediate real-world impact."
Indeed, recent elections have mirrored cultural differences in how people regard AI: While in the US, constituents are often scared off by candidates who apply AI, in India, the use of AI is often celebrated, giving candidates credit as innovators.
AI is primarily seen as a positive – to a point
The report's overall message for Adrian is that, on balance, most people see AI as a positive advancement:
"While some communities are perhaps a bit more cautious, people think there's more to be gained than lost through AI. There's a lot of optimism that it will speed up science, and the majority think it will accelerate knowledge discovery."
"Yes, there is also an obvious drive for guardrails and the use of high-quality data," Adrian adds. "But these are issues Elsevier has been addressing long before the rise of generative AI."
Beyond the report: a future of next steps
As with any worthy research, the report raises more questions than it answers. "For instance, with this stated desire for more regulations and guardrails," Adrian says, "the questions now become: What kind of regulations should there be? Who should do the regulating? Should it be the academic institutions or businesses that developed AI in the first place? Or should the government oversee and enforce these regulations? And how far should they be allowed to go?" So yes, there's still a lot of work to be done.
Where is AIโs most significant potential?
Meanwhile, AI's impact will likely vary, with some areas benefiting more than others. "If you're in a research field managing large datasets – for example, astronomy or pharmaceutical research – then the capabilities of AI will likely provide a significant advantage in managing those datasets, connecting them, and identifying and interpreting any underlying patterns. In these spaces, AI can accelerate discovery opportunities."
Most experts are now convinced AI will ultimately revolutionize healthcare and research. "On the healthcare side, there's great potential. To help with a diagnosis, physicians will have quick access to options based on a substantive amount of data evidence curated and brought together in a way that wasn't possible before. Clinicians, of course, will be the final decision-makers and will still have oversight, but will be able to make faster and better decisions."
The research also shows that AI has a role to play beyond analysis and collation:
"Researchers think there's potential at the front end of the research process, with AI helping to identify new targets and research areas. By finding these knowledge gaps, it is possible to discover a new research opportunity. In other words, AI is also expected to deliver on the creation side."
The tipping point is still ahead
Of course, generative AI tools like ChatGPT remain problem children due to their tendency to hallucinate. "Society at large is still transitioning and figuring out how to use a tool like ChatGPT," Adrian says. "Ultimately, I think it is a supporting mechanism to enable people in any industry to do their jobs more efficiently – not dissimilar to email when it came along 25 years ago. But since it's more in the space of what humans do – creation – we will need to have some controls in place. And I think people are now responding to that need."
Meanwhile, we shouldn't overstate AI's impact, according to Adrian. "Even in the research and clinician community, where awareness of AI is very high, many people still haven't used it or experimented with ChatGPT: Around 25% have used it for work purposes."
A slow-motion revolution – with vetting and regulating
"There has been a surge in new AI tools recently, and we're still trying to establish the best way to use them – and for what," Adrian added. "Moreover, we have to figure out how to use them correctly: Governance and guardrails are essential, along with ensuring these systems are based on high-quality, authoritative and up-to-date data.
"Yes, it will be revolutionary. But we need to do it in a managed way."
Report: Attitudes in corporate R&D
Are corporate researchers ready to embrace GenAI? Learn what the findings from Elsevier's Insights 2024: Attitudes toward AI report tell us about their perspectives on AI.
Highlights of the full report
Elsevier's report Insights 2024: Attitudes toward AI brings together the views of nearly 3,000 researchers and healthcare professionals around the world.
Main survey themes
Awareness of AI tools is high, but usage is low. Expectations are that this will grow. Institutions still need to clearly convey their AI usage restrictions or their preparations for increased use to researchers and clinicians.
Attitudes are mixed. However, researchersโ and cliniciansโ overall sentiment is more positive than negative.
Specific actions can help increase trust. By taking such actions and communicating them, providers of AI tools can improve usersโ comfort.
Interesting report insights
71% expect the results of generative AI-dependent tools to be based on high-quality, trusted sources only
94% believe AI (including GenAI) will be used for misinformation
58% say training the model to be factually accurate, moral and not harmful (safety) would enormously increase their trust in that tool
Knowing that the information the model uses is up to date was ranked highest by respondents for increasing their comfort in using an AI tool
Report recommendations
GenAI technology developers can:
Enhance accuracy and reliability
Increase transparency
Strengthen safety and security
Institutions employing researchers and clinicians can:
Establish policies and plans and communicate them clearly
Build governance and expertise
Provide training and capacity
Ensure access
Contributor

Ann-Marie Roche
Senior Director of Customer Engagement Marketing
Elsevier