

Attitudes toward AI: Chapter 3

Shaping an AI-driven future

While AI has immense potential, significant worries about misinformation, critical errors and over-reliance persist. Ensuring accuracy and transparency is key to building trust in AI tools. Learn more about the concerns and trust factors surrounding AI among researchers and clinicians.

Understanding researchers’ and clinicians’ concerns, as well as the factors that build their trust in AI tools and their comfort using them, can help technology developers create better tools and help institutions maximize their benefits.

  • 94% believe AI could be used for misinformation

  • 86% are concerned AI could cause critical errors or mishaps

  • 81% think AI will, to some extent, erode critical thinking, and 82% of doctors are concerned that physicians will become over-reliant on AI when making clinical decisions

  • 58% say training the model to be factually accurate, moral, and not harmful (safety) would strongly increase their trust in that tool

  • Knowing that the information the model uses is up to date was the factor respondents ranked highest for increasing their comfort in using an AI tool

Almost all respondents are concerned that AI will be used for misinformation (a concern also identified in Elsevier’s Confidence in Research global survey56) and that it will cause critical errors or mishaps.

Factual accuracy and up-to-date models and information would help increase trust among users.

“We’re only at the beginning of what AI can accomplish. Whatever limitations it has today will be gone before we know it.”2

Bill Gates

Exploring users’ concerns

The potential of GenAI is becoming clearer as the technology develops, as are the potential pitfalls. GenAI tools can be powerful, not only for automating structured tasks and accelerating data analysis and visualization but also for developing hypotheses and supporting clinical decisions.

When the stakes are high, as they are in the treatment of patients, it is vital that technology is responsible, ethical and transparent. Concern about the loss of the human element is particularly high around the use of AI in healthcare, and most Americans think it could harm the patient–clinician relationship.18

In a Pew Research survey, 60% of adults said they would feel uncomfortable if their healthcare provider relied on AI for their medical care, and opinion was split about the health outcomes, with 38% expecting them to be better and 33% worse.18

This presents a dilemma for the tech companies developing the technology, as well as for those using it: they need to move fast to keep up with the changing landscape and harness the potential for innovation, but they also need to be cautious about the risks, many of which are still unknown.10

Understanding users’ (and potential users’) concerns around GenAI is an important step in developing tools with minimized risks. Some of the biggest concerns are around misinformation and errors.

Researchers’ and clinicians’ concerns

Overall, 94% of respondents (95% of researchers and 93% of clinicians) believe to some extent that AI will be used for misinformation over the next two to five years.

“These tools are not yet based on scientific evidence, do not provide references, and are not yet reliable.”

Survey respondent, doctor, Brazil

GenAI technology can be used to produce misinformation, and if trained on this data, it can use misinformation as the basis for outputs it presents as true. As Ofcom notes, “generative AI models are not capable of determining the truth or accuracy of information on their own.”47 Users are not always aware of the misinformation they pick up, as in the case of a lawyer cited for including fictitious case law in a legal brief he had written with GenAI.30

This makes the governance and regulation of GenAI even more vital, and institutions have a role to play in mitigating the intentional use of GenAI to produce misinformation. As noted in View from the Top: Academic Leaders’ and Funders’ Insights on the Challenges Ahead, academic leaders are concerned about how to mitigate risks like the falsification of research results.54

Most researchers and clinicians (86%) are also worried about critical errors or mishaps (accidents) occurring, with just 14% not expecting this to happen at all.

Previous research suggests particular concern about mistakes in healthcare resulting from AI use, with over three-quarters of US clinicians considering it important for tech companies and governments to carefully manage AI applications in disease diagnosis.26

“I am very worried about generative AI leading to clinical mistakes that could harm patients. These machines don’t think, they recognize patterns to make confident but nonsensical answers. That is dangerous when making decisions. Lawyers are already in deep legal trouble for trying to pass off generative AI documents as their work.”

Survey respondent, doctor, USA

Fig 17. The expected negative impact of AI in various areas over the next two to five years. Question: Thinking about the impact AI will have on society and your work, to what extent do you think over the next 2 to 5 years it will…? Scale: a great extent, some extent, not at all. n=2,829

When technology meets humanity

Several other concerns relate to the impact GenAI could have on people and the way they think and behave. In the current study, 81% of respondents think AI will erode human critical thinking skills. Indeed, there are suggestions that AI poses a risk to the way students think, which any changes in curriculum should take into account.55

Over four in five (82%) doctors think use of AI may mean physicians become over-reliant on the technology to make clinical decisions. This concern was echoed in the Clinician of the Future Education Edition, in which more than half (56%) of students feared the negative effects AI could have on the medical community.35

Social disruption is a concern for 79% of respondents, for example through AI causing the unemployment of large numbers of people.

Ethical concerns are also important: in the current survey, most respondents (85%) have at least some concerns about the ethical implications of AI in their area of work, while only 11% report no concerns. A further 11% report fundamental concerns, a share that is higher in Europe (17%) and North America (14%) (see detailed findings in the databook).

Fig 18. Level of concern about the ethical implications of AI. Question: To what extent, if at all, do you have concerns about the ethical implications of AI (including generative AI) in your area of work?

Factors impacting trust in AI tools

Taken together, GenAI’s potential for misinformation, hallucinations, social disruption and impact on job security paints a picture, for many, of a technology that is difficult to trust.25 Yet surveys show that most people do trust the technology.

The Capgemini Research Institute found that 73% of consumers trust content created by GenAI.20 Specifically, 67% believed they could benefit from GenAI used for diagnosis and medical advice, and 63% were excited by the prospect of GenAI bolstering drug discovery.

“I’m distrustful of all AI tools at present. It would take a lot of transparency along with concrete examples of the tool in action to convince me it is trustworthy. My career and my scientific integrity are too valuable to hand over to anyone or anything else. I am also not protected by tenure so any slip-ups and I will lose my career.”

Survey respondent, researcher, Canada

What makes researchers and clinicians trust AI?

There is room for improvement when it comes to trust. Respondents to the current survey shared their views on what would build trust in AI tools, and researchers’ and clinicians’ views are similar across all factors.

More than half (58%) of respondents say training the model to be factually accurate, moral and not harmful would strongly increase their trust in that tool.

Some of the other factors respondents say would increase their trust in AI tools relate to quality and reliability. For example, 57% say only using high-quality peer-reviewed content to train the model would strongly increase their trust, while just over half (52%) say training the model for high coherency outputs (quality model output) would strongly increase their trust.

Transparency and security are also important factors. For 56% of respondents, citing references by default (transparency) would strongly increase trust in AI tools. Keeping the information input confidential is a trust-boosting factor for 55%, as is compliance with any laws governing development and implementation (legality) for 53%.

Fig 19. Factors that strongly increase trust in AI tools. Question: To what extent, if at all, would the following factors increase your trust in tools that utilize generative AI? Scale: strongly increase my trust, slightly increase my trust, no impact on my level of trust

The importance of access

Regional differences across many survey questions highlight the importance of access in the implementation of AI globally.

Respondents in lower-middle-income countries are significantly more likely than those in high-income countries to think AI will increase collaboration, at 90% and 65% respectively. They are also more likely to think AI will be transformative, at 32% compared to the global average of 25%.

However, these respondents are less likely to have used AI for work purposes (21% versus the global average of 31%), perhaps owing to access issues. While 26% of respondents globally cite a lack of budget as a restriction on using AI, this rises to 42% in lower-middle-income countries.

Actions for an AI-powered future

Respondents to the current survey clearly share the view that the AI tools they use now and in the future to support research and clinical work should be responsible, ethical and transparent. With this in mind, information, consent and quality are critical factors to consider from different angles.

GenAI technology providers

Enhance accuracy and reliability

As we saw in Chapter 2 (see figure 13 on page 27), researchers and clinicians expect tools powered by GenAI to be based on high-quality, trusted sources only (71%). To support this, developers should work to ensure the datasets used to train GenAI tools are reliable, accurate and unbiased. To help minimize bias, advanced NLP techniques could be applied to understand users’ intent and deliver more relevant outputs.20 Efforts to minimize the risk of hallucination should continue.

Increase transparency

Respondents expect to be informed whether the tools they are using depend on GenAI (81%) and would want the option to turn off the functionality (75%). In line with their expectation that it should be possible to choose whether to activate AI functionality, 42% of respondents would prefer AI to be provided as a separate module, while 37% would want it integrated into a product.

“All emerging technologies, including AI, have both advantages and disadvantages. It is essential to further develop and regulate these technologies, aiming to extract maximum benefits.”

Survey respondent, researcher, Canada

Solution providers should be clear about the datasets used and ensure intellectual property and copyright are protected. GenAI functionality should be clearly labelled or otherwise indicated, ideally with the ability for users to switch it on and off.

Strengthen safety and security

As regulation and policy develop, tech companies have a role to play in ensuring the safety of their GenAI tools, including through robust governance and human oversight.

Given the importance of privacy and data security, developers could go beyond regulation to ensure their tools are safe and secure for users, thereby increasing trust.

Fig 20. Access preferences for AI tools. Question: Would you prefer any generative AI functionality included in a product you use already to be…?


Institutions employing researchers and clinicians

Establish policies and plans and communicate them clearly

As we have seen, numerous organizations are working on policies, guidance and plans to integrate GenAI into their operations. However, as respondents shared in the survey, many are unaware of their institutions’ plans, including restrictions on using GenAI.

In addition to establishing guidelines on GenAI and taking steps towards a strategy for the organization, communicating those actions and plans to researchers and clinicians would help mitigate risk and maximize benefit.

Build governance and expertise

Institutions can help increase the comfort and trust of researchers and clinicians in GenAI by ensuring the tools they choose are overseen in a way that identifies and reduces biases and risks.

Any GenAI strategy should include a robust governance structure, including people with expertise in the technology and its area of application.

Provide training and capacity

Despite its rapid increase in awareness and usage, GenAI remains a relatively young technology.

As the use of GenAI increases, researchers and clinicians will need to spend time learning how to maximize its benefits. Previous research with clinicians has highlighted the potential burden of AI due to the time required to learn it.34

To ensure the technology is part of the solution rather than the problem, institutions could identify ways to give researchers and clinicians the time and a safe space to explore GenAI.

Ensure access

Perceptions of AI are markedly more positive in lower-middle-income countries, yet use among researchers and clinicians there is limited by budgetary restrictions.

Institutions are increasingly aware of the importance of inclusion and the role accessibility plays in it. As use of AI becomes more widespread globally, there will be a growing need to address gaps in access to the technology, especially in international collaboration. Institutions could make AI part of their wider strategy, helping to foster partnerships and ensure greater diversity at the institutional and project level.

References

2. Bill Gates. The Age of AI has begun. Gates Notes. 21 March 2023. https://www.gatesnotes.com/The-Age-of-AI-Has-Begun

10. MIT Technology Review Insights. The great acceleration: CIO perspectives on generative AI. 2023. https://www.databricks.com/sites/default/files/2023-07/ebook_mit-cio-generative-ai-report.pdf

18. Michelle Faverio and Alec Tyson. What the data says about Americans’ views of artificial intelligence. Pew Research Center. 21 November 2023. https://www.pewresearch.org/short-reads/2023/11/21/what-the-data-says-about-americans-views-of-artificial-intelligence/

20. Capgemini Research Institute. Why Consumers Love Generative AI. 7 June 2023. https://prod.ucwe.capgemini.com/wp-content/uploads/2023/06/GENERATIVE-AI_Final_WEB_060723.pdf

25. Portulans Institute. Network Readiness Index 2023. https://download.networkreadinessindex.org/reports/nri_2023.pdf

26. Elsevier. Clinician of the Future 2023. Page 27.

30. Maryam Alavi and George Westerman. How Generative AI Will Transform Knowledge Work. Harvard Business Review. 7 November 2023. https://hbr.org/2023/11/how-generative-ai-will-transform-knowledge-work

34. Elsevier. Clinician of the Future 2023. Page 18.

35. Elsevier. Clinician of the Future 2023 Education Edition. Page 23.

47. Ofcom. Future Technology and Media Literacy: Understanding Generative AI. 22 February 2024. https://www.ofcom.org.uk/__data/assets/pdf_file/0033/278349/future-tech-media-literacy-understanding-genAI.pdf

54. Elsevier. View from the Top: Academic Leaders’ and Funders’ Insights on the Challenges Ahead. March 2024. Pages 37 and 48. https://www.elsevier.com/academic-and-government/academic-leader-challenges-report-2024

55. Elsevier. Clinician of the Future 2023 Education Edition. Page 24.

56. Elsevier. Confidence in Research. 2022. Page 9. https://confidenceinresearch.elsevier.com/