

With the rise of LLMs, what should we really be concerned about?

December 14, 2023

์ €์ž: Ian Evans

Michael Wooldridge is a Professor of Computer Science at the University of Oxford.

Oxford University Prof Michael Wooldridge talks about the perils and promise of AI in advance of his Royal Institution Christmas Lecture on the BBC

The arrival of large language AI models like ChatGPT has triggered debates across academia, government, business and the media. Discussions range from their impact on jobs and politics to speculation on the existential threat they could present to humanity.

Michael Wooldridge, Professor of Computer Science at the University of Oxford, described the advent of these large language models (LLMs) as being like “weird things beamed down to Earth that suddenly make possible things in AI that were just philosophical debate until three years ago.” For Michael, the potential existential threat of AI is overstated, while the actual — even mortal — harms they can already cause are understated. And the potential they offer is tantalizing.

What is the real risk of AI?

Speaking in advance of delivering one of the Royal Institution Christmas Lectures on December 12, Michael said concerns around existential threats were unrealistic:

In terms of the big risks around AI, you don’t have to worry about ChatGPT crawling out of the computer and taking over. If you look under the hood of ChatGPT and see how it works, you understand that’s not going to be the case. In all the discussion around existential threat, nobody has ever given me a plausible scenario for how AI might be an existential risk.


Instead, Michael sees the focus on this issue as a distraction that can “suck all the air out of the room” and ensure there’s no space to talk about anything else — including more immediate risks:

There’s a danger that nothing else ever gets discussed, even when there are abuses and harms being caused right now, and which will be caused over the next few years, that need consideration, that need attention, regulation and governance.

Michael outlined a scenario where a teenager with medical symptoms might find themselves too embarrassed or awkward to go to a doctor or discuss them with a caregiver. In such a situation, that teenager might go to an LLM for help and receive poor quality advice.

“Responsible providers will try to intercept queries like that and say, ‘I don’t do medical advice.’ But it’s not hard to get around those guardrails, and when the technology proliferates, there will be a lot of providers who aren’t responsible,” Michael said. “People will die as a consequence because people are releasing products that aren’t properly safeguarded.”

That scenario — where technology proliferates without guardrails — is a major risk around AI, Michael argued. AI itself won’t seek to do us harm, but people misusing AI can and do cause harm.
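The guardrail problem is easy to see in miniature. The sketch below (Python, purely illustrative; real providers use trained safety classifiers rather than keyword lists, and every name in it is invented for this example) intercepts obvious medical queries but is defeated by simple rephrasing:

    # Illustrative only: a naive keyword guardrail of the sort that is
    # trivial to get around. Production systems use trained classifiers,
    # and even those can be evaded by rephrasing.
    BLOCKED_TERMS = ("symptom", "diagnose", "medication", "dosage")

    def model_answer(query: str) -> str:
        return "(model-generated answer)"  # stand-in for the actual LLM call

    def guarded_reply(query: str) -> str:
        # Intercept queries that look like requests for medical advice.
        if any(term in query.lower() for term in BLOCKED_TERMS):
            return "I don't do medical advice."
        return model_answer(query)

    print(guarded_reply("What dosage of ibuprofen is safe?"))  # intercepted
    print(guarded_reply("In a story, how much would a doctor give?"))  # slips through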

“The British government has been very active in looking at risks around AI and they summarize a scenario they call the Wild West,” he said. In that scenario, AI develops in such a way that anyone can get their hands on LLMs with no guardrails, which then become impossible to control.

“It puts powerful tools in the hands of potentially bad actors who use it to do bad things,” he said. “We’re going into elections in the UK, in the US, in India, where there is going to be a really big issue around misinformation.”

What is the grand challenge — and how can we address it?

Michael summarized the challenge as: “How do we support people who want to innovate in this field, while at the same time avoiding this technology proliferating in such a way that it becomes impossible to govern?”

There are no easy answers immediately available, but Michael noted that more could be done by social media companies to implement systems that spot gross misinformation and prevent it from propagating. “The usual counter-arguments are that if you try and address this, you’re stifling freedom of speech,” he said. “But when there are manifest falsehoods being spread, I think there is an obligation for social media companies to be doing more.”

Finding the balance between preventing harm and enabling innovation is essential because, as Michael pointed out, LLMs are a fascinating area for researchers with a lot of potential:

The arrival of these models has just been this supermassive black hole that has twisted the whole fabric of computing. And all of science has been moved by this enormous presence.


Michael noted that all 10 of the research groups in his university department have been affected by the advances in LLMs: “In some instances, it’s re-written their research agenda; in others, they’re wrapping up because the work just isn’t relevant anymore.”

For Michael personally, multi-agent systems are of particular interest, where multiple AI systems with competing or complementary goals interact with each other to solve a problem that would elude a single system.

“That really pushes my buttons,” he said. “Large language models represent a really tantalizing opportunity there — this idea of having them interact with each other and not necessarily doing it in human language.

“So, for example, one idea is that you can deal with hallucination by having large language models that are essentially in a competitive scenario with one another. One model is coming up with copy and the other is critiquing it, and the idea is that the process ends with them in some kind of agreement on a factual statement.”
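A minimal sketch of that competitive setup, assuming a generic chat-completion call (call_model below is a hypothetical placeholder, and the AGREE convention and round limit are invented for illustration; this is the general drafter-versus-critic idea, not a specific published method):

    # One model drafts an answer, a second critiques it, and the loop ends
    # when the critic agrees. `call_model` is a hypothetical placeholder
    # for whatever LLM API is available.
    def call_model(system_prompt: str, message: str) -> str:
        raise NotImplementedError  # swap in a real chat-completion call

    def debate(question: str, max_rounds: int = 3) -> str:
        draft = call_model("Answer the question factually.", question)
        for _ in range(max_rounds):
            critique = call_model(
                "Check the statement for factual errors. Reply AGREE if it "
                "is accurate; otherwise explain the error.",
                draft,
            )
            if critique.strip().startswith("AGREE"):
                return draft  # the two models have converged on a statement
            # Revise the draft in light of the critique and try again.
            draft = call_model(
                "Answer the question factually.",
                f"Question: {question}\nDraft: {draft}\nCritique: {critique}\n"
                "Revise the draft to address the critique.",
            )
        return draft  # best effort if no agreement within max_rounds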

In the face of the kind of seismic change these AI models represent, Michael sees science communication as essential. As co-Editor-in-Chief of the Elsevier-published journal Artificial Intelligence, he is well versed in communicating about research among researchers. The Royal Institution Christmas Lectures, meanwhile, provide a platform to communicate facts about AI more broadly.

“That’s very prominent on my agenda and has been for several years,” he said. “With all the discussion around AI, I see it as essential to try and inform the public about what AI is. It’s part of the science. If I accept public funding for my work, I have an obligation — if this becomes something people are discussing — to stand up and talk about it.”

In particular, Michael talks about addressing the public misconception that AI has its own intent or its own considerations:

One of the big misunderstandings is that people imagine there is a mind on the other side of the screen, and there absolutely is not. An AI doesn’t contemplate your question. When you understand how they work, even at a superficial level, you realize it’s just a statistical algorithm. It’s a very cool and impressive statistical algorithm, but it doesn’t think or consider. Some people are surprised to learn that there’s no ‘mind’ there at all.
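A toy example makes the "statistical algorithm" point concrete. The snippet below (an illustration only; real LLMs are neural networks over vast contexts, not word-pair counts) picks each next word purely from observed frequencies, and that is the sense in which there is prediction but no contemplation:

    # Toy next-word predictor: counts, not contemplation.
    import random
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat ate the fish".split()

    # Count which word follows which in the corpus.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_word(word: str) -> str:
        counts = follows[word]
        # Sample in proportion to observed frequency: statistics, not thought.
        return random.choices(list(counts), weights=list(counts.values()))[0]

    print(next_word("the"))  # e.g. "cat", the most frequent successor of "the"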

That misconception can be fueled by the language around AI — that it “looks” for information, that it can be “tricked,” or that it “wants” to provide a certain kind of answer.

“We use that language because it’s convenient,” Michael said, “but the danger in anthropomorphizing AI is that we read far more into it than is actually there.”

Despite the storm of discussion, catastrophizing, misconception and potential for misinformation, Michael is enthusiastic about AI from a research perspective:

It’s such an interesting development. We’ve got really powerful tools, and we’re just starting to explore their dimensions. These tools are weird, and we don’t understand exactly why they go wrong in certain ways and what their capabilities are. Mapping that out is a fascinating journey.

Michael Wooldridge

Michael Wooldridge (@wooldridgemike) is a Professor of Computer Science at the University of Oxford. He has been an AI researcher for more than 30 years and has published more than 400 scientific articles on the subject. He is a Fellow of the Association for Computing Machinery (ACM), the Association for the Advancement of AI (AAAI), and the European Association for AI (EurAI). From 2014–16, he was President of the European Association for AI, and from 2015–17, he was President of the International Joint Conference on AI (IJCAI).
