A future lens on AI
AI is the future — but it is not without risk.
Attitudes toward AI: Chapter 2
Respondents are clear: if the benefits of AI tools are to be realized, the tools themselves must be built on high-quality, trusted content. Discover the perceived impact and benefits according to survey participants, and what they feel needs to be in place for AI to transform research and health.
Researchers and clinicians recognize the growing potential of AI tools, and if they’re not already using them, most expect to do so in the coming two to five years. Almost all respondents expect AI (including GenAI) to have an impact by helping accelerate knowledge discovery and rapidly increasing the volume of research. While they identify numerous benefits, they also note that AI will not replace inherently human capabilities like creativity and empathy. Transparency and quality will be important in the future as AI use increases.
Key findings:
95% think AI will help accelerate knowledge discovery
94% think AI will help rapidly increase the volume of scholarly and medical research
92% foresee cost savings for institutions and businesses
67% of those not using AI expect to use it in the next two to five years
42% of those with ethical concerns about AI cite its inability to replace human creativity, judgement and/or empathy as a top disadvantage
71% expect generative AI-dependent tools’ results to be based on high-quality, trusted sources only
Perceived impact and benefits
The sentiment around AI is influenced by the impact people expect the technology to bring in the future, some of it positive and some negative. In the current study, almost all (96%) respondents think AI will change the way education is delivered and 95% believe it will accelerate knowledge discovery at least to some extent in the next two to five years.
Similarly, 94% of respondents think AI will rapidly increase the volume of scholarly and medical research, with clinicians (96%) more likely than researchers (92%) to think this. Although those in North America (and the USA specifically) and Europe generally believe AI will have a positive impact, they are consistently less likely to do so than respondents in other regions; those in North America are also more likely than the global average to think AI will cause mishaps and disruption (see Chapter 3 on page 32 and detailed findings in the databook). Specifically, 95% of respondents see benefit in using AI for research-related activities (see figure 10).
Drilling down into the detail of where AI may deliver benefits across different areas, 95% of respondents believe it will help with using scientific content (e.g., keeping up to date). The benefit of AI is also expected to extend to human interaction, with 79% of respondents (84% of clinicians and 74% of researchers) saying they think AI will increase collaboration (see the databook for detailed findings).
As noted in Elsevier’s 2022 Research Futures 2.0 report, “Artificial intelligence (AI) and machine learning tools are changing the shape of science.”
Researchers today can use GenAI-powered tools across a multitude of tasks (an illustrative sketch follows this list), including to:
Collect, generate, sort and analyze data
Identify errors, inconsistencies and biases in data
Visualize data
Identify plagiarism
Discover relevant published research
Support peer review
Refine written work through translation and editing
Summarize and simplify academic papers
Brainstorm ideas for structuring presentations and articles
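To make one of these tasks concrete, here is a minimal sketch of calling a GenAI chat-completion API to summarize and simplify an abstract. It assumes the openai Python package (v1+) and an API key in the environment; the model name and prompt wording are illustrative choices, not drawn from the report.

```python
# Minimal sketch: summarizing an academic abstract with a GenAI chat API.
# Assumes the `openai` package (v1+) and OPENAI_API_KEY in the environment;
# the model name and prompt are illustrative, not from the report.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_abstract(abstract: str, audience: str = "a non-specialist") -> str:
    """Return a plain-language summary of a paper abstract."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any chat model works
        messages=[
            {"role": "system",
             "content": f"Summarize academic abstracts for {audience} "
                        "in three sentences or fewer."},
            {"role": "user", "content": abstract},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize_abstract("We present a transformer-based model ..."))
```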
Buying time for high-value work
As we have seen, GenAI is likely to have a bigger impact on knowledge work than manual work. GenAI can play a role by automating structured tasks, reducing cognitive load and supporting unstructured tasks like critical thinking and creativity.30
According to the Office for National Statistics, about one-third (32%) of UK adults believe that AI will benefit them, rising to 49% of those with higher education qualifications.22 This perception largely reflects GenAI’s potential to improve work: 41% of professional workers thought AI could make their job easier.
The expansion beyond purely data-related and repetitive tasks is reflected in public surveys.31 For example, consumers are using GenAI for creative purposes, like generating content (52%) and brainstorming (28%).
This is reflected in the current study: 85% of respondents believe AI tools will free their time for higher value work, though 15% don’t expect any impact in this area (see figure 9 on page 20). For researchers and clinicians, it takes a lot of time and effort to keep up to date with the influx of new knowledge being published every day. The resulting ‘digital debt’ builds up a backlog that can hide useful information and even impact mental health. According to Microsoft’s 2023 Work Trend Index: Annual Report, 68% of people “say they don’t have enough uninterrupted focus time during the workday,” and 62% “struggle with too much time spent searching for information in their workday.”32
In the current survey, 95% of respondents see benefit in using AI for scientific content – in other words, keeping up to date with new information and reducing their digital debt. Clinicians (97%) see more benefit than researchers (93%).
In addition, 92% expect AI to increase their work efficiency to some extent, and 92% expect the technology to provide cost savings. This is echoed in research by Capgemini, in which executives predicted operational improvements of 7-9% within three years.23
About nine in ten (87%) respondents expect AI to improve their work quality to some extent, while 13% predict there will be no impact in this area. Similarly, 83% think the technology will increase their work consistency, compared to 17% who expect no benefit.
Powering education
GenAI already plays a role in education, and as such, many universities have set out policies and guidance for students and educators. GenAI tools can support learning by acting as “advisor, tutor, coach, and simulator,” providing instructions, feedback and different perspectives, for example.31
Almost all (96%) respondents to the current survey expect AI to change the way students are taught to some extent, and nearly all (96%) see at least some benefit in AI for teaching and lecturing activities.
This is in line with findings from Clinician of the Future 2023, in which 51% of clinicians considered the use of AI desirable for training medical students and 50% for training nurses.34 And students reported similar sentiments, with 43% of respondents in the Clinician of the Future Education Edition saying their instructors welcome GenAI.35
The way today’s researchers and clinicians perceive and approach using GenAI in teaching will affect not only its impact on education but also the views and behaviors of the next generation of researchers and clinicians.
Supporting clinical activities
The potential applications of GenAI technology in the clinic are growing rapidly. The Research Futures 2.0 report highlights the use of AI in predicting the progress of Alzheimer’s disease, monitoring the progression of Parkinson’s disease, examining CT scans and x-rays, diagnosing and developing personalized medication plans for cancer patients, and improving the effectiveness of mental healthcare.36
In the current study, 95% of those involved in clinical practice see a benefit in AI for clinical activities such as diagnoses and patient summaries. This is in line with the views clinicians shared in 2023.37
Despite clinicians having reservations about the impact of GenAI on the patient–clinician relationship, blinded research reveals a more positive picture. The study, by US researchers, asked the question: “Can an artificial intelligence chatbot assistant provide responses to patient questions that are of comparable quality and empathy to those written by physicians?”
The results were striking, with a panel of licensed healthcare professionals preferring ChatGPT’s responses to physicians’ responses 79% of the time, rating them higher quality and more empathetic.39
AI in publishing and funding
The publishing process — including authoring, reviewing and editing — can be time-consuming for researchers and clinicians, and AI is already being employed in several systems.
Applications mentioned in Research Futures 2.0 include StatReviewer, which has been integrated into Editorial Manager; UNSILO’s AI-supported tools Evaluate and Technical Checks, integrated into ScholarOne; and AIRA, used by Frontiers.40
There have been suggestions that the application of GenAI could go even further, potentially even replacing human review, at least in part, in the future. In Research Futures 2.0, presented with this hypothetical scenario, 21% of researchers said they would be willing to read an article reviewed by AI.41 Respondents shared reasons including lower subjectivity and greater consistency across reviews.
However, the majority in the study — 59% — disagreed or strongly disagreed that they would be willing to read an article reviewed by AI, many saying they “valued human understanding and believed AI incapable of quality peer review.”
In the current study, 93% of respondents believe AI will bring benefit in publication and monitoring the impact of research, for example in authoring and reviewing. When it comes to funding, though, respondents were not as optimistic, with 84% expecting AI to provide some benefit for funding-related activities.
Perceived drawbacks
Respondents were not solely positive about AI — they also identified a number of potential disadvantages. The majority (85%) had at least some concerns about the ethical implications of AI in their area of work. People see its inability to replace human creativity, judgement and/or empathy as the main disadvantage, with 42% of those who have concerns about AI ranking this as a top-three disadvantage of the technology.
Clinicians (45%) are more likely to say this than researchers (39%). And women (46%) are more likely to say this than men (38%).
Regulation and accountability
Two-fifths (40%) of respondents with concerns cite the lack of regulation and governance as a top-three disadvantage of AI. Those in South America (45%) and Europe (45%) are most concerned. Indeed, there is currently a dearth of regulation for GenAI, largely due to the speed at which the technology has developed — faster than policymakers can update laws.42
This concern about a lack of regulation is widespread, even among the corporate leaders driving the GenAI movement, with the CEOs of OpenAI and Google and the President of Microsoft among those taking steps to encourage regulation.43 Senate Judiciary Committee Chairman Richard Durbin said it is “historic” for “people representing large corporations [to] come before us and plead with us to regulate them.”44
One of the benefits of regulation is highlighting the potentially negative effects of GenAI, and as Joshua Gans, co-author of Power and Prediction: The Disruptive Economics of Artificial Intelligence, shared in an interview with the International Monetary Fund (IMF), “it behooves us to monitor for those consequences, identify their causes, and consider experimentation with policy interventions that can mitigate them.”45
The need for better guidance and oversight is reflected in two other top-three disadvantages. About one-third (30%) of respondents with concerns ranked lack of accountability over the use of AI outputs in their top three; this is highest in North America (34%), and researchers (32%) are more likely than clinicians (29%) to cite this as a top concern. Meanwhile, clinicians (23%) are more likely than researchers (15%) to cite ‘lack of relevant expertise within organizations’ as a top disadvantage of AI.
Early days of regulation: The EU’s AI Act46
Agreed in December 2023, the AI Act aims to address the risks certain AI systems can create in order to avoid “undesirable outcomes.” The Regulatory Framework defines four levels of risk for AI systems. An AI system will be banned if it is “considered a clear threat to the safety, livelihoods and rights of people.” This includes, for example, social scoring by governments and toys encouraging dangerous behavior. The European AI Office oversees the enforcement and implementation of the AI Act.
Discrimination and bias
After AI’s inability to replace humans and the lack of regulation and accountability, the next most commonly cited disadvantage is that outputs can be discriminatory or biased, with 24% of respondents with concerns ranking this in their top three. For almost one-fifth (18%) of respondents, the risk of AI homogenizing culture via its use of global models is a top-three disadvantage, and 7% of respondents cite the technology’s discrimination against non-native English speakers.
The concern about bias and discrimination in AI is not new. As noted by the UK’s communications regulator Ofcom, if the voices and perspectives of marginalized groups are underrepresented in training data, GenAI models can underrepresent them in outputs, leading to exclusion and inaccurate information about those groups.47 Tech leaders have acknowledged the problem and recognize the need for improvement.48 To overcome this, Ofcom suggests that the datasets used to train GenAI models should be “diverse and representative,” which will require human quality control; a minimal sketch of such a check follows below.
Capgemini research revealed that 45% of organizations lack confidence that GenAI programs are fair (inclusive of all population groups) and 36% say the potential for bias to lead to embarrassing (i.e. undesirable or socially unacceptable) results is a challenge for implementing the technology.23
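As a concrete illustration of the human quality control Ofcom describes, here is a minimal sketch of a representation audit over a training-data manifest. The records, group labels and the 5% review threshold are hypothetical, for illustration only.

```python
# Minimal sketch of a representation audit on a training dataset, in the
# spirit of Ofcom's "diverse and representative" criterion. The records
# and the 5% threshold are hypothetical; a real audit would use the
# actual training manifest and domain-appropriate group definitions.
from collections import Counter

records = [  # stand-in for rows of a training-data manifest
    {"text": "...", "group": "A"},
    {"text": "...", "group": "A"},
    {"text": "...", "group": "B"},
]

counts = Counter(row["group"] for row in records)
total = sum(counts.values())
shares = {group: n / total for group, n in counts.items()}

# Flag groups whose share falls below the threshold for human review.
underrepresented = {g: s for g, s in shares.items() if s < 0.05}
print(shares, underrepresented)
```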
Conversely, there are some indications that GenAI has the potential to mitigate existing biases and discrimination. According to the Pew Research Center, 51% of US adults who see a problem with racial and ethnic bias in health and medicine think AI would improve the issue, and 53% believe the same for bias in hiring.18
Lack of accuracy
More insight into the datasets used to train GenAI models would not only help mitigate the potential for bias but also give transparency around how an output was generated. Some respondents to the current survey (17%) consider ‘the logic behind an output is not well described’ a top-three disadvantage. Researchers (20%) are more likely than clinicians (14%) to rank this issue in their top three.
Accuracy matters even more to respondents than transparency: for 19% overall, AI being too dependent on outdated data and/or information is a top-three disadvantage. Researchers (21%) are more likely than clinicians (17%) to rank this highly.
Similarly, 18% of respondents with concerns consider hallucinations to be a major disadvantage, with researchers (25%) significantly more likely than clinicians (11%) to rank them in their top three. Hallucinations are incorrect, and sometimes nonsensical, outputs generated based on patterns in training data; they occur in an estimated 3% to 30% of answers.49
Hallucinations are a topic of discussion among tech leaders as well as users. According to Sundar Pichai, CEO of Alphabet, “No one in the field has yet solved the hallucination problems. All models do have this as an issue.”50 Given how common they are, hallucinations are of particular concern for areas like law and medicine, according to some researchers.31
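Rate estimates such as the 3% to 30% range above typically come from human raters labeling a sample of model answers as hallucinated or not. A minimal sketch of that arithmetic, with made-up labels:

```python
# Minimal sketch of estimating a hallucination rate: human raters label a
# sample of model answers (1 = judged hallucinated), and the rate is the
# labeled fraction. The labels below are made up for illustration.
import math

labels = [0, 1, 0, 0, 0, 1, 0, 0, 0, 0]

n = len(labels)
rate = sum(labels) / n
stderr = math.sqrt(rate * (1 - rate) / n)  # normal-approximation error
print(f"estimated rate: {rate:.0%} +/- {1.96 * stderr:.0%} (95% CI)")
```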
Privacy and ethical issues
Some of the less commonly ranked disadvantages are related to privacy and ethical issues. For example, 13% of respondents with concerns consider the lack of confidentiality of AI inputs or prompts as a top-three disadvantage, and 11% rank the lack of confidentiality of outputs as such.
Privacy is one of the main concerns of consumers, with 72% of the UK public surveyed by the Office for National Statistics considering the use of personal data without consent a negative impact, and 60% mentioning the increased chance of experiencing cybercrime.22 Looking at the data ownership issue from the other side, 14% of respondents in the current study say the lack of permission to use the data or information AI tools are trained on is a top-three disadvantage. And almost one in ten (9%) respondents consider AI’s need for substantial computer processing power a top-three disadvantage.
Expectations
As noted in chapter 1, more than half of respondents in the current study have used AI, either for a work or non-work purpose (see page 10). This is likely to change soon: 67% of those who have yet to use AI (including GenAI) tools expect to do so in the next two to five years.
Expectations differ by region among those yet to use AI: in North America (and the USA specifically), only 51% expect to do so in the near future, significantly below the global average, while expectations are highest in APAC and in the Middle East and Africa.
While respondents were optimistic about their future use of AI, they also shared a number of expectations around how they believe AI should develop.
The top expectation overall is that generative AI will always be paired with human expertise, with 83% of respondents globally agreeing with this. Clinicians (86%) are more likely than researchers (81%) to agree.
Information and consent are critical: 81% of respondents expect to be informed whether the tools they use depend on generative AI.
Three-quarters (75%) of respondents expect to be given a choice to turn off generative AI in the tools that they use.
Respondents also expect generative AI to work well with non-text modalities (e.g., chemical or biological compounds, chemical reactions, graphs, plans), with 74% agreeing; agreement is higher among clinicians (77%) than researchers (72%).
Quality is important too: about seven in ten (71%) respondents expect generative AI-dependent tools’ results to be based on high-quality, trusted sources only, with agreement higher among clinicians (73%) than researchers (68%). This aligns with the findings shared earlier in this chapter, with researchers more likely to consider outdated source information a top disadvantage (see page 23).
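One common way a tool can meet this expectation is to restrict generation to a vetted corpus: retrieve trusted passages, then instruct the model to answer only from them. Here is a minimal retrieval-augmented sketch; the corpus, the keyword-overlap scorer and the prompt are hypothetical simplifications, not a description of any product in the report.

```python
# Minimal sketch of grounding GenAI answers in high-quality, trusted
# sources via retrieval-augmented generation. The corpus, the naive
# keyword-overlap scorer and the prompt are hypothetical simplifications.
TRUSTED_CORPUS = {
    "doi:10.1000/xyz1": "Peer-reviewed finding A ...",
    "doi:10.1000/xyz2": "Peer-reviewed finding B ...",
}

def retrieve(question: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank trusted passages by keyword overlap with the question."""
    words = set(question.lower().split())
    scored = sorted(
        TRUSTED_CORPUS.items(),
        key=lambda item: len(words & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str) -> str:
    """Instruct the model to answer only from the retrieved sources."""
    sources = "\n".join(f"[{sid}] {text}" for sid, text in retrieve(question))
    return (
        "Answer using ONLY the sources below; cite their IDs. "
        "If the sources are insufficient, say so.\n"
        f"{sources}\n\nQuestion: {question}"
    )

print(build_prompt("What does finding A show?"))
```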
Researchers believe AI can bring benefit across a range of activities, including developing new ideas, preparing articles and summarizing information. Of those who think AI would benefit research activities or using scientific content, 94% are likely to use a reliable AI assistant to review prior studies, identify gaps in knowledge and generate a new research hypothesis for testing; 91% to proofread their paper; and 89% to generate a synthesis of research articles in an area.
Among clinicians who believe AI could bring benefit across clinical activities such as diagnoses and clinical imaging, 94% are likely to use a reliable and secure AI assistant to assess symptoms and identify a disease or condition.
Institutions are preparing for an AI-powered future
Looking to the future, institutions are also expecting the use of GenAI to increase — and they’re preparing for it. Elsevier’s report View from the Top: Academic Leaders’ and Funders’ Insights on the Challenges Ahead highlights that many universities have GenAI guidelines in place, or are working on them, both for research and education.51 In particular, 64% of academic leaders are prioritizing the challenge of AI governance, though only 23% consider their institutions well prepared to tackle the challenge.52
Businesses more broadly are taking the subject seriously. Capgemini reports that GenAI is on the boardroom agenda for 96% of organizations, with one-fifth of executives expecting the technology to “significantly disrupt their industries.”23 Support is even stronger among pharma and healthcare companies: 98% of executives in this industry say GenAI is on the board’s agenda, and 58% say company leaders are strong advocates of GenAI.
As such, according to Capgemini, 97% of organizations had plans for GenAI: by July 2023, 40% had set up teams and allocated budget for GenAI (42% in the pharma and healthcare sector), and a further 49% planned to do so within a year.23 More than two-thirds (68%) reported establishing guidelines and policies on employees’ use of GenAI, and 10% had banned, or were considering banning, GenAI tools.
In the current study, actions institutions are taking include building a plan or protocol to evaluate the purchase of tools that include AI (reported by 16% of respondents), setting up a community of practice around it (14%) and providing ethics courses (14%). Overall, 12% plan to acquire tools that include AI in 2024 or beyond.
It is less common for institutions to appoint new AI leadership (6%) or operational functions such as a GenAI Librarian (10%).
References
28. Elsevier. Research Futures 2.0. April 2022. Page 4.
29. Elsevier. View from the Top: Academic Leaders’ and Funders’ Insights on the Challenges Ahead. March 2024. Page 72.
30. Maryam Alavi and George Westerman. How Generative AI Will Transform Knowledge Work. Harvard Business Review. 7 November 2023. https://hbr.org/2023/11/how-generative-ai-will-transform-knowledge-work
31. Capgemini Research Institute. Why Consumers Love Generative AI. 7 June 2023. https://prod.ucwe.capgemini.com/wp-content/uploads/2023/06/GENERATIVE-AI_Final_WEB_060723.pdf
32. Microsoft. Will AI Fix Work? 2023 Work Trend Index: Annual Report. 9 May 2023. https://assets.ctfassets.net/y8fb0rhks3b3/5eyZc6gDu1bzftdY6w3ZVV/93190f5a8c7241ecf2d6861bdc7fe3ca/WTI_Will_AI_Fix_Work_060723.pdf
33. Elsevier. Clinician of the Future 2023. Page 26.
34. Elsevier. Clinician of the Future 2023. Page 18.
35. Elsevier. Clinician of the Future 2023 Education Edition. Page 23.
36. Elsevier. Research Futures 2.0. April 2022.
37. Elsevier. Clinician of the Future 2023. Page 22.
38. Elsevier. Clinician of the Future 2023. Page 25.
39. Ayers JW, Poliak A, Dredze M, et al. Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum. JAMA Intern Med. 2023;183(6):589–596. https://doi.org/10.1001/jamainternmed.2023.1838
40. Elsevier. Research Futures 2.0. April 2022. Page 91.
41. Elsevier. Research Futures 2.0. April 2022. Page 92.
42. Andrés García Higuera. What if generative artificial intelligence became conscious? At a Glance, Scientific Foresight: What If? European Parliamentary Research Service. October 2023. https://www.europarl.europa.eu/RegData/etudes/ATAG/2023/753162/EPRS_ATA(2023)753162_EN.pdf
43. Tom Wheeler. The Three Challenges of AI Regulation. Brookings. 15 June 2023. https://www.brookings.edu/articles/the-three-challenges-of-ai-regulation/
44. Cristiano Lima-Strong and David DiMolfetta. OpenAI Embraced Regulation Until Talks Got Serious in Europe. The Washington Post. 26 May 2023. https://www.washingtonpost.com/politics/2023/05/26/openai-embraced-regulation-until-talks-got-serious-europe/
45. Henriquez, M. Embracing Artificial Intelligence. International Monetary Fund. September 2023. https://www.imf.org/en/Publications/fandd/issues/2023/09/Cafe-Econ-embracing-artificialintelligence-joshua-gans
46. European Commission. AI Act. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
47. Ofcom. Future Technology and Media Literacy: Understanding Generative AI. 22 February 2024. https://www.ofcom.org.uk/__data/assets/pdf_file/0033/278349/future-tech-media-literacy-understanding-genAI.pdf
48. Zachary Small. Black Artists Say A.I. Shows Bias, With Algorithms Erasing Their History. The New York Times. 4 July 2023. https://www.nytimes.com/2023/07/04/arts/design/black-artists-bias-ai.html
49. Cade Metz. Chatbots May ‘Hallucinate’ More Often Than Many Realize. The New York Times. 6 November 2023. https://www.nytimes.com/2023/11/06/technology/chatbots-hallucination-rates.html
50. Will Daniel. Google CEO Sundar Pichai says ‘hallucination problems’ still plague A.I. tech and he doesn’t know why. Fortune. 17 April 2023. https://fortune.com/2023/04/17/google-ceo-sundar-pichai-artificial-intelligence-bard-hallucinations-unsolved/
51. Weale, S. UK universities draw up guiding principles on generative AI. The Guardian. 4 July 2023.
52. Elsevier. View from the Top: Academic Leaders’ and Funders’ Insights on the Challenges Ahead. March 2024. Page 34.
53. Elsevier. View from the Top: Academic Leaders’ and Funders’ Insights on the Challenges Ahead. March 2024. Page 72.