Shaping an AI-powered future
Explore how handling misinformation and critical errors while ensuring accuracy can build trust in AI tools, creating a smarter, safer future.

Attitudes toward AI: Chapter 3
While AI holds enormous potential, concerns about misinformation, critical errors and overreliance persist. Ensuring accuracy and transparency is key to building trust in AI tools. Learn more about researchers' and clinicians' concerns and the factors that build their trust in AI.
Understanding researchers' and clinicians' concerns, and the factors that build their trust in and comfort with AI tools, can help technology developers create better tools and help institutions maximize their benefits.
94% think AI could be used to spread misinformation
86% worry AI could cause critical errors or mishaps
81% think AI will erode critical thinking to some extent
82% of doctors worry physicians will become overreliant on AI for clinical decisions
58% say training a model to be factually accurate, moral and not harmful (safety) would strongly increase their trust in that tool
Respondents say that knowing the information a model uses is up to date would do the most to increase their comfort using AI tools. Almost all respondents worry that AI will be used to spread misinformation, a concern also reflected in Elsevier's global Confidence in Research study, and that AI could cause critical errors or mishaps.
Factual accuracy, and up-to-date models and information, will help increase users' trust.
“We are just at the beginning of what AI can accomplish. Whatever limitations it has today will be gone before we know it.”2

Bill Gates
Exploring users' concerns
As the technology develops, the potential of generative AI (GenAI) is becoming clearer, and so are the potential pitfalls. GenAI tools are powerful: they can automate structured tasks, accelerate data analysis and visualization, help generate hypotheses and support clinical decision-making.
When the stakes are high, as in patient care, the technology must be responsible, ethical and transparent. Concerns about losing the human element are especially pronounced where AI is used in healthcare, with most Americans believing it could damage the relationship between patients and clinicians.18
In a Pew Research Center survey, 60% of adults said they would feel uncomfortable if their healthcare provider relied on AI in their medical care. Views on health outcomes were also split: 38% expected better outcomes, while 33% expected worse.18
This creates a dilemma for the tech companies developing the technology and the institutions using it: they need to move fast to keep pace with a shifting landscape and capitalize on the potential for innovation, while staying cautious about the risks, many of which remain unknown.10
Understanding the concerns of GenAI's users (and potential users) is an important step toward developing tools that minimize risk. Some of the biggest concerns relate to disinformation and misinformation.
Researchers' and clinicians' concerns
Overall, 94% of respondents (95% of researchers and 93% of clinicians) think, at least to some extent, that AI will be used to spread misinformation in the next two to five years.
“These tools are not yet built on scientific evidence, they do not provide references, and they are not yet reliable.”

Survey respondent, doctor, Brazil
GenAI technology can be used to create misinformation, and if trained on such data, it may treat that misinformation as true and build its outputs on it. As Ofcom notes, “generative AI models cannot by themselves determine the truthfulness or accuracy of information.”47 Users are not always aware of the misinformation they pick up: one lawyer, for example, was sanctioned for including fabricated case law in a legal brief he had drafted with GenAI.30
This makes governance and regulation of GenAI all the more important, and institutions have a role to play in mitigating the deliberate use of GenAI to create misinformation. As noted in View from the Top: Academic Leaders' and Funders' Insights on the Challenges Ahead, academic leaders worry about how to mitigate risks such as falsified research results.54
Most researchers and clinicians (86%) also worry, at least to some extent, that critical errors or mishaps will occur; only 14% do not expect this to happen at all.
Indeed, previous research points to particular concern about AI-driven errors in healthcare, with more than three quarters of US clinicians believing that tech companies and governments should carefully manage the use of AI in disease diagnosis.26
“I am very concerned that generative AI will lead to clinical errors that harm patients. These machines do not think; they recognize patterns and produce confident but meaningless answers. That is dangerous when decisions are being made. Lawyers are already in deep legal trouble for trying to pass off generative AI documents as their own work.”

Survey respondent, doctor, USA

Fig 17. Question: Thinking about the impact AI will have on society and your work, to what extent do you think over the next 2 to 5 years it will…? A great extent, some extent, not at all. n=2,829
When technology meets humanity
Other concerns relate to GenAI's effect on people and the way they think and behave. In the current research, 81% of respondents believe AI will erode critical thinking to some extent. Indeed, there are signs AI may be influencing how students think, something that should be considered in any curriculum changes.55
More than four in five doctors (82%) think the use of AI could mean physicians become overreliant on the technology for clinical decisions. This concern is echoed in Clinician of the Future Education Edition, in which more than half of students (56%) worried about AI's potential negative impact on the medical profession.35
79% of respondents worry about societal disruption, such as AI putting large numbers of people out of work.
Ethics also matters: most respondents to this survey (85%) have at least some concerns, with only 11% saying they have no concerns about the ethical implications of AI in their area of work. 11% report fundamental concerns, a proportion that is higher in Europe (17%) and North America (14%) (see the databook for detailed findings).

Fig 18. Question: To what extent, if at all, do you have concerns about the ethical implications of AI (including generative AI) in your area of work?
Factors impacting trust in AI tools
When combined, the potential GenAI has for misinformation, hallucinations, disruption to society and impact on job security paints a picture for many of a technology that is difficult to trust.25 Yet surveys show that most people do trust the technology.
The Capgemini Research Institute found that 73% of consumers trust content created by GenAI.20 Specifically, 67% believed they could benefit from GenAI used for diagnosis and medical advice, and 63% were excited by the prospect of GenAI bolstering drug discovery.
“I’m distrustful of all AI tools at present. It would take a lot of transparency along with concrete examples of the tool in action to convince me it is trustworthy. My career and my scientific integrity are too valuable to hand over to anyone or anything else. I am also not protected by tenure so any slip-ups and I will lose my career.”

Survey respondent, researcher, Canada
What makes researchers and clinicians trust AI?
There is room for improvement when it comes to trust. Respondents to the current survey share their views about how to build trust in AI tools, and views are similar for researchers and clinicians across all factors.
More than half (58%) of respondents say training the model to be factually accurate, moral and not harmful would strongly increase their trust in that tool.
Some of the other factors respondents say would increase their trust in AI tools relate to quality and reliability. For example, 57% say only using high-quality peer-reviewed content to train the model would strongly increase their trust, while just over half (52%) say training the model for high coherency outputs (quality model output) would strongly increase their trust.
Transparency and security are also important factors. For 56% of respondents, citing references by default (transparency) would strongly increase trust in AI tools. Keeping the information input confidential is a trust-boosting factor for 55%, as is abidance by any laws governing development and implementation (legality) for 53%.

Fig 19. Question: To what extent, if at all, would the following factors increase your trust in tools that utilize generative AI? Scale: Strongly increase my trust, Slightly increase my trust, No impact on my level of trust
The importance of access
Regional differences across many survey questions highlight the importance of access in the implementation of AI globally.
Respondents in lower-middle-income countries are significantly more likely than those in high-income countries to think AI will increase collaboration, at 90% and 65% respectively. They are also more likely to think AI will be transformative, at 32% compared to the global average of 25%.
However, respondents in these countries are less likely to have used AI for work purposes (21% versus the average of 31%), perhaps owing to access issues. While 26% of respondents globally cite a lack of budget as a restriction on using AI, this rises to 42% in lower-middle-income countries.
Actions for an AI-powered future
Respondents to the current survey clearly share the view that the AI tools they use now and in the future to support research and clinical work should be responsible, ethical and transparent. With this in mind, information, consent and quality are critical factors to consider from different angles.
GenAI technology providers
Enhance accuracy and reliability
As we saw in Chapter 2 (see figure 13 on page 27), researchers and clinicians expect tools powered by GenAI to be based on high-quality, trusted sources only (71%). To support this, developers should work to ensure the datasets used to train GenAI tools are reliable, accurate and unbiased. To minimize bias, advanced NLP techniques could be applied to understand the intent of users for more relevant outputs.20 Efforts to minimize the risk of hallucination should continue.
Increase transparency
Respondents expect to be informed whether the tools they are using depend on GenAI (81%) and would want the option to turn off the functionality (75%). In line with their expectation that it should be possible to choose whether to activate AI functionality, 42% of respondents would prefer AI to be provided as a separate module, while 37% would want it integrated into a product.
“All emerging technologies, including AI, have both advantages and disadvantages. It is essential to further develop and regulate these technologies, aiming to extract maximum benefits.”

Survey respondent, researcher, Canada
Solution providers should be clear about the datasets used and ensure intellectual property and copyright are protected. GenAI functionality should be clearly labelled or otherwise indicated, ideally with the ability for users to switch it off and on.
Strengthen safety and security
As regulation and policy develops, tech companies have a role to play in ensuring the safety of their GenAI tools, including robust governance and human oversight.
Given the importance of privacy and data security, developers could go beyond regulation to ensure their tools are safe and secure for users, thereby increasing trust.

Fig 20. Question: Would you prefer any generative AI functionality included in a product you use already to be…?

Institutions employing researchers and clinicians
Establish policies and plans and communicate them clearly
As we have seen, numerous organizations are working on policies, guidance and plans to integrate GenAI into their operations. However, as respondents shared in the survey, many are unaware of their institutions’ plans, including restrictions on using GenAI.
In addition to establishing guidelines on GenAI and taking steps towards a strategy for the organization, communicating those actions and plans to researchers and clinicians would help mitigate risk and maximize benefit.
Build governance and expertise
Institutions can help increase the comfort and trust of researchers and clinicians in GenAI by ensuring the tools they choose are overseen in a way that identifies and reduces biases and risks.
Any GenAI strategy should include a robust governance structure, including people with expertise in the technology and its area of application.
Provide training and capacity
Despite its rapid increase in awareness and usage, GenAI remains a relatively young technology.
As the use of GenAI increases, researchers and clinicians will need to spend time learning how to maximize its benefit. Previous research with clinicians has highlighted the potential burden of AI due to the required time to learn.34
To ensure the technology is part of the solution rather than the problem, institutions could identify ways to give researchers and clinicians the time and a safe space to explore GenAI.
Ensure access
AI perception is markedly more positive in lower-middle-income countries, yet its use among researchers and clinicians is limited due to budgetary restrictions.
Institutions are increasingly aware of the importance of inclusion, and the role accessibility plays in that. As use of AI becomes increasingly widespread globally, there will be a growing need to address gaps in access to the technology, especially in international collaboration. To help ensure improved access to AI technology globally, institutions could consider AI as part of their wider strategy, to help foster partnership and ensure greater diversity at the institutional and project level.
References
2. Bill Gates. The Age of AI has begun. Gates Notes. 21 March 2023. https://www.gatesnotes.com/The-Age-of-AI-Has-Begun
10. MIT Technology Review Insights. The great acceleration: CIO perspectives on generative AI. 2023. https://www.databricks.com/sites/default/files/2023-07/ebook_mit-cio-generative-ai-report.pdf
18. Michelle Faverio and Alec Tyson. What the data says about Americans’ views of artificial intelligence. Pew Research Center. 21 November 2023. https://www.pewresearch.org/short-reads/2023/11/21/what-the-data-says-about-americans-views-of-artificial-intelligence/
20. Capgemini Research Institute. Why Consumers Love Generative AI. 7 June 2023. https://prod.ucwe.capgemini.com/wp-content/uploads/2023/06/GENERATIVE-AI_Final_WEB_060723.pdf
25. Portulans Institute. Network Readiness Index 2023. https://download.networkreadinessindex.org/reports/nri_2023.pdf
26. Elsevier. Clinician of the Future 2023. Page 27.
30. Maryam Alavi and George Westerman. How Generative AI Will Transform Knowledge Work. Harvard Business Review. 7 November 2023. https://hbr.org/2023/11/how-generative-ai-will-transform-knowledge-work
34. Elsevier. Clinician of the Future 2023. Page 18.
35. Elsevier. Clinician of the Future 2023 Education Edition. Page 23.
47. Ofcom. Future Technology and Media Literacy: Understanding Generative AI. 22 February 2024. https://www.ofcom.org.uk/__data/assets/pdf_file/0033/278349/future-tech-media-literacy-understanding-genAI.pdf
54. Elsevier. View from the Top: Academic Leaders’ and Funders’ Insights on the Challenges Ahead. March 2024. Pages 37 and 48. https://www.elsevier.com/academic-and-government/academic-leader-challenges-report-2024
55. Elsevier. Clinician of the Future 2023 Education Edition. Page 24.
56. Elsevier. Confidence in Research. 2022. Page 9. https://confidenceinresearch.elsevier.com/