6 essential practices for responsible AI development

August 8, 2024

By Ian Evans


Here are some guidelines you can follow to avoid the pitfalls of GenAI in research while harnessing its capabilities

Generative artificial intelligence (GenAI) is sweeping through industry — including the research enterprise — with promises of increased efficiency and effectiveness. Making good on those promises while mitigating risks and ensuring ethical use requires responsible development practices.

In a recent webinar, two Elsevier experts highlight the importance of implementing guardrails to guide responsible design decisions when developing AI systems. Principal Product Manager for Scopus Adrian Raudaschl discusses the development of Elsevier’s Scopus AI, and Senior Director of Data Science and Responsible AI Dr Harry Muncey recommends best practices for using GenAI.

In this article, we focus on Harry’s more general recommendations.

“Developing AI responsibly means putting guardrails in place that enable us to make responsible design decisions when developing AI systems.”


Harry Muncey, PhD

Senior Director of Data Science and Responsible AI at Elsevier

1. Address bias in training data.

One of the fundamental challenges in AI development is the presence of bias in training data, which can lead to discriminatory outcomes. Harry emphasizes the critical need to identify and design out bias in AI systems to prevent the replication or amplification of existing biases. They explain:

AI is usually trained using vast amounts of data. That data is often collected and selected because it captures the characteristics of some decision process or system that we want to improve using technology. By acknowledging and actively working to mitigate bias in training data, developers can strive for fair and equitable AI applications.

Harry underscores the importance of diversity and inclusivity in AI development, explaining:

Without investing effort in identifying and designing that bias out, we’re likely to replicate it or even amplify it at scale when we implement AI. By promoting diversity in training data and actively addressing biases, developers can create AI systems that are more reflective of the diverse perspectives and experiences of society.

In essence, addressing bias in AI products leads to a better experience for the user.
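One way to put this practice into action is to audit training data for representation gaps before training begins. The sketch below is a hypothetical illustration, not Elsevier's actual tooling; the group labels and reference shares are invented for the example:

```python
# Minimal sketch of a pre-training bias check: compare each group's
# share of the training data against a reference population share.
from collections import Counter

def representation_gaps(samples, reference_shares, tolerance=0.05):
    """Return groups whose observed share of `samples` deviates from
    the reference share by more than `tolerance` (observed - expected)."""
    counts = Counter(samples)
    total = len(samples)
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = round(observed - expected, 3)
    return gaps

# Example: group "B" is underrepresented relative to a 50/50 reference.
labels = ["A"] * 80 + ["B"] * 20
print(representation_gaps(labels, {"A": 0.5, "B": 0.5}))
# {'A': 0.3, 'B': -0.3}
```

A check like this is only a starting point — representation is one of many forms of bias — but it makes a skew visible before it is baked into a model.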


2. Understand the capabilities — and limitations — of GenAI.

Harry highlights the risk that generative AI content can undermine trust and perpetuate misinformation:

Generative AI is very good at producing convincing synthetic content, whether that is artificially generated articles in the style of a reputable newspaper or journalist, or a video of a politician saying something embarrassing that they didn’t actually say.

By understanding the capabilities and limitations of generative AI, Harry explains, developers can implement safeguards to prevent misuse and misinformation.

Moreover, they emphasize the need for ongoing research and development to address the ethical implications of generative AI:

There’s much research and effort into how we can correct this historical bias encoded in data, but there’s no silver bullet. By engaging in continuous learning and adaptation, developers can stay ahead of emerging challenges and ensure the responsible use of generative AI technologies.

3. Prioritize ethics — and human oversight — in AI deployment.

Elsevier’s commitment to responsible AI development extends to its internal use of AI and machine learning technologies. Harry emphasizes the importance of enhancing human decision-making through AI solutions that are designed to benefit both customers and society at large. They state:

Our solutions, both internal and external, are designed to enhance human decision-making. And this approach is underpinned by our commitment to corporate responsibility.

By prioritizing ethical considerations and corporate responsibility, organizations can leverage AI technologies for positive impact while mitigating potential negative consequences.
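One common pattern for keeping humans in the decision loop — sketched here as a hypothetical example, not a description of Elsevier's systems — is to act automatically only on high-confidence model outputs and escalate the rest to a human reviewer:

```python
# Minimal human-in-the-loop triage sketch: confident predictions are
# applied automatically; uncertain ones are routed to a human reviewer.
def triage(prediction, confidence, threshold=0.9):
    """Return ('auto', prediction) when confidence meets the threshold,
    otherwise ('human_review', prediction)."""
    if confidence >= threshold:
        return ("auto", prediction)
    return ("human_review", prediction)

print(triage("approve", 0.97))  # ('auto', 'approve')
print(triage("approve", 0.62))  # ('human_review', 'approve')
```

The threshold value is a design decision in its own right: setting it reflects how much risk an organization is willing to delegate to the model versus the cost of human review.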


4. Ensure transparency and accountability in AI systems.

Transparency and accountability are essential for responsible AI development. Harry stresses the importance of clear communication and accountability mechanisms. By fostering transparency and accountability in AI systems, they explain, developers can build trust with users and stakeholders while promoting responsible AI practices.

Harry also stresses the importance of “proactively working to increase positive impacts and prevent negative outcomes.”
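An accountability mechanism of the kind described above often starts with an audit trail: recording each model decision with its inputs and model version so outcomes can be traced and reviewed later. The sketch below is a hypothetical illustration; the field names are invented:

```python
# Minimal audit-trail sketch: each decision is serialized with a
# timestamp, the model version, and its inputs, into an append-only log.
import datetime
import json

def log_decision(log, model_version, inputs, output):
    """Append a serialized decision record to `log` and return it."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    log.append(json.dumps(entry))
    return entry

audit_log = []
log_decision(audit_log, "v1.2", {"query": "example"}, "result")
print(len(audit_log))  # 1
```

Versioning the model in each record matters: when a problematic outcome surfaces months later, the log shows exactly which model produced it and from what inputs.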

5. Engage with stakeholders.

Harry highlights the importance of engaging with stakeholders and fostering open dialogue around AI development:

Having a strong understanding of how these issues might show up and cause problems for the particular context where we might wish to use generative AI is really critical to being able to deploy and use it responsibly.

By actively engaging with stakeholders and soliciting feedback, developers can ensure that AI systems are developed and deployed in a responsible and ethical manner.

6. Plan for continuous learning and adaptation.

As AI technologies continue to evolve, developers must prioritize continuous learning and adaptation to address emerging challenges and opportunities. Harry highlights the dynamic nature of AI development, commenting:

Whilst increasingly complex technology, proximity to decision-making, and potential impact on people create new challenges, they also create many opportunities to benefit our customers and society.

By staying informed about the latest advances in AI and actively engaging in ongoing learning and adaptation, developers can navigate the complexities of AI development with a focus on responsible and ethical practices.

Conclusion

Responsible AI development requires a multifaceted approach that encompasses addressing bias in training data, implementing safeguards for generative AI, prioritizing ethical considerations in deployment, promoting transparency and accountability, and embracing continuous learning and adaptation. By adhering to these principles, developers can harness the transformative potential of AI technologies while upholding ethical standards and societal values.

Contributor


Ian Evans

Senior Director, Editorial and Content

Elsevier
