“My whole career has been about finding alternatives to animal testing”
November 11, 2022
Ann-Marie Roche
AI can learn from past animal studies and generate animal data, reducing the need to conduct new animal studies, says FDA toxicology researcher
Caption: Dr Weida Tong, a lead researcher at the FDA’s National Center for Toxicological Research and Director of its Bioinformatics and Biostatistics division, has created a virtual lab rat as part of his quest to find data-driven methods to test human toxicity. (Image © istock.com)
Artificial Intelligence can learn from past animal studies to generate new animal data, reducing the need to conduct new animal studies, explains Dr Weida Tong.
That reality is at the heart of his research.
“The revolution has begun, and over the next few years, we’ll see tremendous progress,” Weida says.
In fact, he already has a virtual lab rat to prove it. But what does America’s Got Talent have to do with it?
Big data + powerful machines = unprecedented results
“My field is, by default, trying to replace animal testing,” Weida explains. “Alternative methodologies such as in silico testing are simply a new way of talking about the existing paradigm. How do we make sure a chemical is safe for humans? This is what we’ve been continually working on.
“But now the machines are powerful enough to deal with all these tremendous resources of data, such as PharmaPendium. We are now getting unprecedented results and these results are being evaluated and considered by regulatory agencies.”
As Director of the Division of Bioinformatics and Biostatistics at the National Center for Toxicological Research (NCTR) — the only dedicated research facility for the US Food & Drug Administration (FDA) — Weida obviously has a very serious job. However, with twinkling eyes and an email sign-off that says, “Be nice, play fair, and hire someone smarter than you!” it’s safe to say he doesn’t take himself overly seriously.
With over 300 peer-reviewed papers and book chapters, Weida has become a champion of data-driven alternatives to animal testing — making him the perfect guest for Elsevier’s Successful Alternatives to Animal Testing webinar series.
Webinar series: Successful alternatives to animal testing
Elsevier Life Science has created a free webinar series on alternatives to animal testing:
Watch Dr Weida Tong’s webinar now: Successful AI Alternatives to Animal Testing
Watch Dr Gail Van Norman’s webinar now: Successful Alternatives to Animal Testing
Register for the November 16 webinar: Successful Alternatives to Animal Testing: Using Virtual Control Groups
A short history of ascertaining toxicity
“This whole field is evolving very rapidly and along the way, the name has changed many times while still basically covering the same things,” Weida explains. “So this causes a lot of confusion.”
Born and raised in Shanghai, Weida earned his PhD there in polymer chemistry. “This subject really has nothing to do with what I do now,” he says. “But when I moved to the US to pursue my career in 1991, I found a job working with so-called computational chemistry, which at that time theorized that if we know a chemical’s structure and how its atoms are joined, we’ll be able to figure out its toxicological profile and perhaps what its biological activity might look like.
“This was a very big deal at the time because pharmaceutical companies thought it was going to be the magic bullet and we’d be able to foresee new drug candidates. But of course, it was not true. And that happens a lot in this field.”
For instance, the concept of machine learning was soon brought in as a stand-in for a big chunk of computational chemistry. “But that also didn’t last because the methods were not yet sophisticated enough. So that’s why we are talking about AI today,” Weida says, chuckling.
“Around 1994, a new director at NCTR had come in and wondered: ‘We do all these toxicology studies, we publish the papers, and then it all disappears into a mass of literature. Why can’t we just go back and use all this data to improve our knowledge and develop new hypotheses?’ So in 1996, I was brought in to create a toxicological knowledge base so we would be able to capture all this generated knowledge and then use it to guide future research.”
Weida’s first project was the Endocrine Disruptor Knowledge Base, which was very much ahead of its time. The book Our Stolen Future, by Theo Colborn, had just been released; it collected the accumulated evidence and confirmed what had been written in Silent Spring 30 years earlier — namely, that environmental pollutants were impacting human health and fertility. The book triggered the drafting of a new US law that would ensure any compounds put into food or drinking water did not cause toxicity or endocrine disruption.
It also sparked the Environmental Protection Agency (EPA) to start looking into how it could streamline the testing of over 50,000 known chemicals, which at the time cost $25,000 per chemical. “So I began developing these models, which we would now call AI,” Weida recalls. “At that time — and still today — these are called Quantitative Structure–Activity Relationships (QSARs): models that predict, based on structure, which chemicals are most likely to disrupt the endocrine system.”
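For readers curious what such a model looks like in practice, here is a minimal QSAR sketch in Python. Everything in it is an illustrative assumption — the four descriptors, the toy SMILES strings and their activity labels, and the choice of a random-forest classifier — and none of it is the actual NCTR or EPA modeling:

# A minimal QSAR sketch: predict endocrine activity from chemical structure.
# The descriptors, SMILES strings, labels and model are illustrative only.
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.ensemble import RandomForestClassifier

def featurize(smiles):
    """Turn a chemical structure into a few simple numeric descriptors."""
    mol = Chem.MolFromSmiles(smiles)
    return [
        Descriptors.MolWt(mol),       # molecular weight
        Descriptors.MolLogP(mol),     # lipophilicity
        Descriptors.TPSA(mol),        # topological polar surface area
        Descriptors.NumHDonors(mol),  # hydrogen-bond donors
    ]

# Toy training set of (SMILES, endocrine-activity label); labels are invented.
train = [
    ("CC(C)(c1ccc(O)cc1)c1ccc(O)cc1", 1),     # bisphenol A
    ("CCO", 0),                               # ethanol
    ("c1ccc(O)cc1", 0),                       # phenol
    ("Oc1ccc(cc1)C2(CCCCC2)c1ccc(O)cc1", 1),  # bisphenol Z
]
X = [featurize(smiles) for smiles, _ in train]
y = [label for _, label in train]
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Rank an untested chemical by predicted probability of activity, so that
# expensive wet-lab assays can be prioritized rather than run blindly.
candidate = "CC(C)(c1ccc(O)cc1)c1ccc(OC)cc1"
print(model.predict_proba([featurize(candidate)])[0][1])

A production QSAR would use thousands of descriptors or molecular fingerprints and large curated training sets, but the workflow — featurize the structure, fit a model, rank untested chemicals — is the same.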
It proved a big job — and remains a work in progress.
“We’re still pretty much in the same boat and the EPA is still trying to develop what they call a data-driven scientific approach to address this,” Weida says. “At least now, the techniques we are using are more sophisticated.”
Meanwhile, the proposed law remains unratified.
The toxicogenomic revolution
In parallel, NCTR had moved into another area: toxicogenomics, a field made possible by the completion of the Human Genome Project in 2003. “This revolutionized our entire field, which was now being called bioinformatics — an existing term that became a more overarching concept, covering all the computational techniques for dealing with biological information from this new data tsunami,” Weida explains.
“One of the ongoing projects resulting from that time is the MicroArray/Sequencing Quality Control (MAQC/SEQC) project, whereby we used so-called microarrays to measure the expression of tens of thousands of genes in a single experiment,” says Weida.
As a result, the FDA took a close look at the use of genomic technologies in assessing drug and food safety and clinical applications.
“I’m also very proud of the Liver Toxicity Knowledge Base (LTKB),” Weida adds. “Basically, liver damage is a top reason — number 1 or number 2, depending on who you talk to — why a drug is withdrawn from the market. And with the cost of bringing a drug to market at about $2.6 billion, you really want to flag these drugs and withdraw them at an early stage.
“Our model aims at asking the right questions during the review process to help determine if a drug is safe or not.”
“And now we’ve arrived at today, when everyone is talking about AI. The only difference is that before, we were working with rather small datasets, so the results did not make a big wave; now, with much bigger datasets, we are able to make much more sense of the data.”
What do deepfakes and animal testing have in common?
Weida is a fan of America’s Got Talent, and one episode caught his attention: the AI company Metaphysic became a finalist by resurrecting Elvis using deepfake technology.
Dr Weida Tong was inspired by Metaphysic co-founders Tom Graham and Chris Ume, who demonstrated their AI technology on the popular TV show America's Got Talent. As singer Daniel Emmet performed, the technology made it appear as though the judges — and Elvis — were the ones singing.
“We usually associate this technology with synthetic media, but there’s also a positive side,” Weida says with a smile. “Like with America’s Got Talent, we would be able to use the same deep generative adversarial network (GAN)-based framework to do synthetic animal testing.”
In short, Weida and his team proved they could derive new animal results from existing animal studies. This framework, which they call Tox-GAN, achieved over 87% agreement between the generated results and the real data. These promising results fit nicely into the FDA’s larger effort to spearhead innovation.
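The Tox-GAN paper documents the team’s actual architecture; purely to illustrate the underlying idea, here is a bare-bones conditional GAN sketch in PyTorch. The layer sizes, the gene count, the (dose, time) conditioning and the random stand-in for archived study data are all invented for this example:

# A bare-bones conditional GAN sketch for synthetic toxicogenomic profiles.
# Layer sizes, the gene count, and the random "real" data are invented for
# illustration; this is not the published Tox-GAN architecture.
import torch
import torch.nn as nn

N_GENES, NOISE_DIM, COND_DIM = 128, 32, 2  # condition = (dose, time), toy scale

# Generator: (noise, condition) -> synthetic expression profile.
G = nn.Sequential(
    nn.Linear(NOISE_DIM + COND_DIM, 256), nn.ReLU(),
    nn.Linear(256, N_GENES),
)
# Discriminator: (profile, condition) -> probability it came from a real study.
D = nn.Sequential(
    nn.Linear(N_GENES + COND_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    # Stand-in for a batch of archived animal-study profiles and conditions.
    cond = torch.rand(64, COND_DIM)   # normalized (dose, time)
    real = torch.randn(64, N_GENES)   # placeholder expression data
    noise = torch.randn(64, NOISE_DIM)
    fake = G(torch.cat([noise, cond], dim=1))

    # Discriminator step: tell archived profiles from generated ones.
    d_loss = bce(D(torch.cat([real, cond], 1)), torch.ones(64, 1)) + \
             bce(D(torch.cat([fake.detach(), cond], 1)), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to fool the discriminator.
    g_loss = bce(D(torch.cat([fake, cond], 1)), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# After training on real archives, G(noise, new_condition) would yield a
# synthetic profile for a dose/time combination never run in the lab.

The adversarial setup is the same one behind deepfakes: the generator improves until the discriminator can no longer distinguish generated profiles from archived ones.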
With the FDA Modernization Act, Congress asked the FDA to study and establish alternative methodologies as part of the regulatory process. As a result, the agency is following the 3Rs principle of “replace, reduce and refine animal use” and using a predictive toxicology roadmap. Its Innovative Science and Technology Approaches for New Drugs (ISTAND) pilot program has set out to discover new digital tools that will help streamline drug development — including AI-driven methods.
“All these efforts just make sense,” Weida says. “Conventional animal studies are expensive, time-consuming and labor-intensive, and there are also ethical concerns. Above all, these tests are not a fail-safe guarantee of human safety. But with the FDA Modernization Act, the train has now left the station, and over the next few years, we will see tremendous progress.”
Free data for future research
What surprises Weida most is that we’re not yet seeing more virtual animal models:
We’ve been doing animal studies for so long, and all this data is stored somewhere. We need to get the machines to find and learn from this data. At NCTR, we already have a virtual rat that has learned from past rat experiments. So we know that if you treat the rat with a certain drug at a certain dose for a certain time, it will produce certain results.
It’s not yet a complete replacement, but it can supplement some real experiments:
I think this is a real missed opportunity — using past investments, which would otherwise be lost, to leverage future research. I’m surprised more people aren’t doing it!
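To make the virtual-rat idea concrete, here is a toy sketch of what “treating” such a model looks like. The endpoint (a liver-injury marker), the two input features and the fabricated training data are assumptions for illustration; NCTR’s actual virtual rat is a far richer model trained on real archived studies:

# Toy sketch of "querying a virtual rat": a model trained on archived
# studies predicts an endpoint for an untested dose/time combination.
# The endpoint, features and data below are invented for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Stand-in for archived studies: columns are (dose in mg/kg, exposure days).
X = rng.uniform([0, 1], [100, 28], size=(500, 2))
# Fabricated liver-injury signal rising with dose and duration, plus noise.
y = 40 + 0.8 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(0, 5, 500)

virtual_rat = GradientBoostingRegressor().fit(X, y)

# "Treat" the virtual rat: 75 mg/kg for 14 days, with no live animal needed.
print(virtual_rat.predict([[75, 14]]))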