
Elsevier Connect

6 essential practices for responsible AI development

August 8, 2024

By Ian Evans

Stock image depicting woman using AI on a computer (Source: JohnnyGreig/E+ via Getty Images)


Here are some guidelines to follow to avoid the pitfalls of GenAI — and harness its capabilities — in the research process.

Generative artificial intelligence (GenAI) is sweeping through industry — including the research enterprise — with promises of increased efficiency and effectiveness. Making good on those promises while mitigating risks and ensuring ethical use requires responsible development practices.

In a recent webinar, two Elsevier experts highlight the importance of implementing guardrails to guide responsible design decisions when developing AI systems. Principal Product Manager for Scopus Adrian Raudaschl discusses the development of Elsevier’s Scopus AI, and Senior Director of Data Science and Responsible AI Dr Harry Muncey recommends best practices for using GenAI.

In this article, we focus on Harry’s more general recommendations.

“Developing AI responsibly means putting guardrails in place that enable us to make responsible design decisions when developing AI systems.”


Harry Muncey, PhD

Senior Director of Data Science and Responsible AI, Elsevier

1. Address bias in training data.

One of the fundamental challenges in AI development is the presence of bias in training data, which can lead to discriminatory outcomes. Harry emphasizes the critical need to identify and design out bias in AI systems to prevent the replication or amplification of existing biases. They explain:

AI is usually trained using vast amounts of data. That data is often collected and selected because it captures the characteristics of some decision process or system that we want to improve using technology. By acknowledging and actively working to mitigate bias in training data, developers can strive for fair and equitable AI applications.

Harry underscores the importance of diversity and inclusivity in AI development, explaining:

Without investing effort to identifying and designing that bias out, we’re likely to replicate it or even amplify it at scale when we implement AI. By promoting diversity in training data and actively addressing biases, developers can create AI systems that are more reflective of the diverse perspectives and experiences of society.

In essence, addressing bias in AI products leads to a better experience for the user.
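The kind of bias audit Harry describes can start with a simple check on the training data itself. The sketch below — with hypothetical data and column names, not any Elsevier system — compares positive-label rates across a demographic attribute, a common first-pass fairness signal:

```python
from collections import defaultdict

def positive_rate_by_group(records, group_key, label_key):
    """Return each group's positive-label rate in the training data."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += r[label_key]
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical training records: 'group' is a demographic attribute,
# 'label' is the outcome the model would learn to predict.
records = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 0}, {"group": "A", "label": 1},
    {"group": "B", "label": 0}, {"group": "B", "label": 1},
    {"group": "B", "label": 0}, {"group": "B", "label": 0},
]

rates = positive_rate_by_group(records, "group", "label")
gap = max(rates.values()) - min(rates.values())  # demographic-parity gap
print(rates)  # {'A': 0.75, 'B': 0.25}
print(gap)    # 0.5 — a large gap flags the data for review before training
```

A large gap does not prove the data is unusable, but it is exactly the kind of signal that should trigger investigation before a model trained on this data is deployed.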

“Without investing effort to identifying and designing that bias out, we’re likely to replicate it or even amplify it at scale when we implement AI. By promoting diversity in training data and actively addressing biases, developers can create AI systems that are more reflective of the diverse perspectives and experiences of society.”


Harry Muncey, PhD

Senior Director of Data Science and Responsible AI, Elsevier

2. Understand the capabilities — and limitations — of GenAI.

Harry highlights the risk that generative AI content can undermine trust and perpetuate misinformation:

Generative AI is very good at producing convincing synthetic content, whether that is artificially generated articles in the style of a reputable newspaper or journalist, or a video of a politician saying something embarrassing that they didn’t actually say.

By understanding the capabilities and limitations of generative AI, Harry explains, developers can implement safeguards to prevent misuse and misinformation.

Moreover, they emphasize the need for ongoing research and development to address the ethical implications of generative AI:

There’s much research and effort into how we can correct this historical bias encoded in data, but there’s no silver bullet. By engaging in continuous learning and adaptation, developers can stay ahead of emerging challenges and ensure the responsible use of generative AI technologies.

3. Prioritize ethics — and human oversight — in AI deployment.

Elsevier’s commitment to responsible AI development extends to their internal use of AI and machine learning technologies. Harry emphasizes the importance of enhancing human decision-making through AI solutions that are designed to benefit both customers and society at large. They state:

Our solutions, both internal and external, are designed to enhance human decision-making. And this approach is underpinned by our commitment to corporate responsibility.

By prioritizing ethical considerations and corporate responsibility, organizations can leverage AI technologies for positive impact while mitigating potential negative consequences.
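One concrete way to keep AI "enhancing human decision-making" rather than replacing it is a human-in-the-loop gate: low-confidence outputs are deferred to a reviewer, and every decision is logged for later audit. This is a minimal sketch under assumed names (`route_decision`, `AUDIT_LOG`), not a description of any actual Elsevier system:

```python
import json
import time

AUDIT_LOG = []

def route_decision(prediction, confidence, threshold=0.9):
    """Apply an AI suggestion automatically only when confidence is high;
    otherwise defer it to a human reviewer. Every decision is recorded so
    the system remains auditable."""
    action = "auto_applied" if confidence >= threshold else "sent_for_human_review"
    AUDIT_LOG.append({
        "time": time.time(),
        "prediction": prediction,
        "confidence": confidence,
        "action": action,
    })
    return action

route_decision("flag_for_review", 0.97)  # auto_applied
route_decision("flag_for_review", 0.55)  # sent_for_human_review
print(json.dumps(AUDIT_LOG, indent=2))   # full audit trail of both calls
```

The threshold itself is a design decision that should be set — and revisited — with the affected stakeholders, not hard-coded once and forgotten.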

“Our solutions, both internal and external, are designed to enhance human decision-making. And this approach is underpinned by our commitment to corporate responsibility.”


Harry Muncey, PhD

Senior Director of Data Science and Responsible AI, Elsevier

4. Ensure transparency and accountability in AI systems.

Transparency and accountability are essential for responsible AI development, and Harry stresses the importance of clear communication and accountability mechanisms. By fostering transparency and accountability in AI systems, they explain, developers can build trust with users and stakeholders while promoting responsible AI practices.

Harry also stresses the importance of “proactively working to increase positive impacts and prevent negative outcomes.”
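One widely used transparency mechanism is a "model card": a structured record of a system's purpose, data, limitations, and accountable owner. The sketch below is a hypothetical example (every field value is invented for illustration), not a documented Elsevier artifact:

```python
# A minimal machine-readable model card — one common way to make an AI
# system's purpose, data, and limitations visible to users and auditors.
model_card = {
    "name": "example-recommender",  # hypothetical system
    "intended_use": "Rank candidate articles for a human editor to review.",
    "not_intended_for": "Fully automated publication decisions.",
    "training_data": "Internal corpus, 2015-2023; under-represents non-English articles.",
    "known_limitations": [
        "May favor highly cited fields",
        "No coverage of work published before 2015",
    ],
    "accountable_owner": "data-science-team@example.org",  # hypothetical contact
}

for field, value in model_card.items():
    print(f"{field}: {value}")
```

Publishing such a record alongside the system gives stakeholders something concrete to hold the developers accountable to.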

5. Engage with stakeholders.

Harry highlights the importance of engaging with stakeholders and fostering open dialogue around AI development:

Having a strong understanding of how these issues might show up and cause problems for the particular context where we might wish to use generative AI is really critical to being able to deploy and use it responsibly.

By actively engaging with stakeholders and soliciting feedback, developers can ensure that AI systems are developed and deployed in a responsible and ethical manner.

6. Plan for continuous learning and adaptation.

As AI technologies continue to evolve, developers must prioritize continuous learning and adaptation to address emerging challenges and opportunities. Harry highlights the dynamic nature of AI development, commenting:

Whilst increasingly complex technology, proximity to decision-making, and potential impact on people create new challenges, they also create many opportunities to benefit our customers and society.

By staying informed about the latest advances in AI and actively engaging in ongoing learning and adaptation, developers can navigate the complexities of AI development with a focus on responsible and ethical practices.

Conclusion

Responsible AI development requires a multifaceted approach: addressing bias in training data, implementing safeguards for generative AI, prioritizing ethical considerations and human oversight in deployment, promoting transparency and accountability, engaging with stakeholders, and embracing continuous learning and adaptation. By adhering to these principles, developers can harness the transformative potential of AI technologies while upholding ethical standards and societal values.

Contributor


Ian Evans

Senior Director, Editorial and Content

Elsevier

Read more from Ian Evans