
How life sciences researchers regard and use AI

October 24, 2024

By Ann-Marie Roche

Photo depicting two women and a man working in a pharmaceutical laboratory (DMP/E+ via Getty Images)


As Elsevier’s report on AI attitudes sparks conversation in the research and health communities, co-author Adrian Mulligan comments on its significance for corporate R&D

We can safely apply an overused word like “game-changer” to ChatGPT. When released in November 2022, it triggered a revolutionary shift across many sectors, including R&D. However, two years later, while generative AI continues to evolve rapidly, we are still largely figuring out what these changes entail.

To help chart the state and future of AI and R&D, Elsevier recently released its report Insights 2024: Attitudes Toward AI, based on how nearly 3,000 researchers and clinicians worldwide gauge the attractiveness, impact and benefits of AI for their work and broader society. Participants were also asked about what they needed in terms of trust and transparency to feel more comfortable using AI tools.

“While responses vary somewhat by region, most researchers and clinicians regard AI more positively than negatively,” says Elsevier Research Director and report co-author Adrian Mulligan. “However, it’s also clear we must establish more trust in AI’s application through regulations and the use of more quality data.”

Recently, we spoke with Adrian to delve into the findings and their implications for life sciences research.

What are scientists really thinking?

Adrian, who manages market research at Elsevier, explains: “Our focus is usually on customer experience tracking: finding out what the scientific community thinks about our services and solutions so we can help guide decision-making around how we can better serve this community — in terms of helping scientists acquire knowledge and creating new and better ways of working.”

To this end, Adrian and the team dive deep. “We undertake market research projects that examine the challenges and concerns researchers and clinicians face,” he says. “Ideally, these spark longer — and larger — conversations. It’s about trying to understand what the community thinks about the dynamics around how new technologies are being used — and how this is shifting and changing. This is exactly what our report is about.”

Between December 2023 and February 2024, Elsevier asked about 2,000 researchers, 1,000 clinicians and 300 corporate researchers to complete a 15-minute quantitative survey to gauge their attitudes towards artificial intelligence, including generative AI. The results were split by region, highlighting the US, China and India.

Sparking conversations — and change

Adrian has managed similar studies over the years. Published a decade ago, Peer Review In A Changing World: An International Study Measuring The Attitudes Of Researchers showed how peer review remained a fundamental and valued process.

“The study re-examined peer review, which is at the core of scholarly communication,” Adrian says. “And while peer review is highly regarded, it was clear it is not perfect and can be changed. The report offered concrete steps for improvement, including training reviewers, introducing standardized proformas to ensure consistency of reviews, and providing a reward mechanism for reviewers, such as acknowledgements in the journals.”

The future of research

Adrian Mulligan, Research Director for Customer Insights at Elsevier, presents the original Research Futures report at the AAAS Annual Meeting in 2019. (Photo by Alison Bert)


More recently, Elsevier’s report Research Futures 2.0 looked at how the research landscape might evolve over the next decade. “Typically in our sector, we might look forward three or five years into the future, so this timeframe was quite different,” Adrian says. “We identified three possible futures: ‘Brave Open World,’ in which open science is the norm; ‘Tech Titans,’ in which the influence of tech companies is predominant; and ‘Eastern Ascendance,’ where the locus of research shifts to China.”

However, while the report correctly saw how tech companies would play a key role in the use of AI, it is fair to say it did not account for the dizzying rise of generative AI with the release of ChatGPT. The new Insights 2024 report is meant to fill that gap by collecting the latest views on AI from the Life Sciences research community.

When ChatGPT crashed the party

“Obviously, there has been a transformative shift over the last two years,” Adrian says. “At the time of the Research Futures 2.0 report, AI was already very much part of the landscape but working in the background. Many researchers have been using AI for a long time. ChatGPT created excitement around the capabilities of AI and entered the public consciousness in a way it hadn’t before, except perhaps in film. It went well beyond what everyone, including many researchers, thought AI could do. Suddenly, AI could create things and, importantly, was accessible. So naturally, we wanted to understand better the response of researchers, physicians and corporate researchers in this new context.”

Different regions, different attitudes — to a point

The report insight that struck Adrian the most was how views differ regionally — namely that APAC countries, particularly China, tend to be less conservative than Europe and North America regarding embracing AI and the change that comes with it.

“At first, this may seem counterintuitive since North America is where many of the advances in AI originated,” Adrian says. “But in fact, this may help explain the caution — the main players have had more time to consider the potential impact of their work. Of course, it’s less surprising that clinicians are more cautious than researchers when it comes to AI: the nature of their role at the frontlines means any errors can have immediate real-world impact.”

Indeed, recent elections have mirrored cultural differences in how people regard AI: While in the US, constituents are often scared off by candidates who apply AI, in India, the use of AI is often celebrated, with candidates credited as innovators.

AI is primarily seen as a positive — to a point

The report’s overall message for Adrian is that, on balance, most people see AI as a positive advancement:

“While some communities are perhaps a bit more cautious, people think there’s more to be gained than lost through AI. There’s a lot of optimism that it will speed up science, and the majority think it will accelerate knowledge discovery.”


“Yes, there is also an obvious drive for guardrails and the use of high-quality data,” Adrian adds. “But these are issues Elsevier has been addressing long before the rise of generative AI.”

Beyond the report: a future of next steps

As with any worthy research, the report raises more questions than answers. “For instance, with this stated desire for more regulations and guardrails,” Adrian says, “the questions now become: What kind of regulations should there be? Who should do the regulating? Should it be the academic institutions or businesses that developed AI in the first place? Or should the government oversee and enforce these regulations? And how far should they be allowed to go?” So yes, there’s still a lot of work to be done.

Where is AI’s most significant potential?

Meanwhile, AI’s impact will likely vary, with some areas benefiting more than others. “If you’re in a research field managing large datasets, such as astronomy or pharmaceutical research, then the capabilities of AI will likely provide a significant advantage in managing those datasets, connecting them, and identifying and interpreting any underlying patterns. In these spaces, AI can accelerate discovery opportunities.”

Most experts are now convinced AI will ultimately revolutionize healthcare and research. “On the healthcare side, there’s great potential. To help with a diagnosis, physicians will have quick access to options based on a substantial body of evidence, curated and brought together in a way that wasn’t possible before. Clinicians, of course, will be the final decision-makers and will still have oversight, but will be able to make faster and better decisions.”

The research also shows that AI has a role to play beyond the analysis and collating space:

“Researchers think there’s potential at the front end of the research process, with AI helping to identify new targets and research areas. By finding these knowledge gaps, it is possible to discover a new research opportunity. In other words, AI is also expected to deliver on the creation side.”


The tipping point is still ahead

Of course, generative AI tools like ChatGPT remain a problem child due to their tendency to hallucinate. “Society at large is still transitioning and figuring out how to use a tool like ChatGPT,” Adrian says. “Ultimately, I think it is a supporting mechanism to enable people in any industry to do their jobs more efficiently — not dissimilar to email when it came along 25 years ago. But since it’s more in the space of what humans do — creation — we will need to have some controls in place. And I think people are now responding to that need.”


Meanwhile, we shouldn’t overstate AI’s impact, according to Adrian. “Even in the research and clinician community, where awareness of AI is very high, many people still haven’t used it or experimented with ChatGPT: Around 25% have used it for work purposes.”

A slow-motion revolution — with vetting and regulating

“There has been a surge in new AI tools recently, and we’re still trying to establish the best way to use them — and for what,” Adrian adds. “Moreover, we have to figure out how to use them correctly: Governance and guardrails are essential, along with ensuring these systems are based on high-quality, authoritative and up-to-date data.

“Yes, it will be revolutionary. But we need to do it in a managed way.”

Report: Attitudes in corporate R&D

Are corporate researchers ready to embrace GenAI? Learn what the findings from Elsevier’s Insights 2024: Attitudes toward AI report tell us about their perspectives on AI.

Highlights of the full report

Elsevier’s report Insights 2024: Attitudes toward AI brings together the views of nearly 3,000 researchers and healthcare professionals around the world.

Main survey themes

  • Awareness of AI tools is high, but usage is low. Expectations are that this will grow. Institutions still need to clearly convey to researchers and clinicians their restrictions on AI usage and their preparations for increased use.

  • Attitudes are mixed. However, researchers’ and clinicians’ overall sentiment is more positive than negative.

  • Specific actions can help increase trust. By taking such actions and communicating them, providers of AI tools can improve users’ comfort.

Interesting report insights

  • 71% expect the results of generative AI-dependent tools to be based on high-quality, trusted sources only

  • 94% believe AI (including GenAI) will be used for misinformation

  • 58% say training the model to be factually accurate, moral and not harmful (safety) would enormously increase their trust in that tool

  • Knowing that the information the model uses is up to date was ranked highest by respondents for increasing their comfort in using an AI tool

Report recommendations

GenAI technology developers can:

  • Enhance accuracy and reliability

  • Increase transparency

  • Strengthen safety and security

Institutions employing researchers and clinicians can:

  • Establish policies and plans and communicate them clearly

  • Build governance and expertise

  • Provide training and capacity

  • Ensure access

Contributor

Ann-Marie Roche

Senior Director of Customer Engagement Marketing

Elsevier
