- Animal testing helps ensure chemicals are safe, but it is costly and raises ethical concerns
- Scientists have developed a computational algorithm that can screen chemicals for toxicity
- This won’t fully replace animal testing, but it could improve experimental efficiency
Animal testing is a controversial topic. On the one hand, animal research has saved countless lives and continues to teach scientists about the complicated nature of biology. On the other, there are serious concerns about the pain these studies may inflict.
There is also a third consideration that many people overlook: testing new materials on animals is expensive and inefficient. Dr. Hao Zhu, an associate professor of chemistry at Rutgers University, explains just how much time scientists have to invest in these projects.
“Animal testing varies from weeks to years,” says Zhu. “Some really complicated phenomena, such as testing the chemical toxicity to the second generation of an animal, cost millions of dollars per compound and take years.”
This high cost and inefficiency were the drivers for a study Zhu set up with his student and lead author Daniel Russo. Together, they created an algorithm to test a compound’s oral toxicity using only public data and computational resources—no animals involved.
Brick by brick
Creating a new algorithm from scratch is complicated. To explain, Zhu draws parallels to constructing a house.
“Let's say that you want to build a house. First, you need to collect the bricks. This is similar to the first part of our algorithm: it retrieves all the available data for the target compounds from the public sources,” says Zhu.
“And then you need to build the building,” he continues. “But a building is not just a random pile of bricks. You need to move the bricks in a rational way, and maybe you only use some of them. That’s the second part of the algorithm. First we identify useful data and then organize them in a rational way to explain the toxicity.”
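Zhu’s two-step “bricks, then building” workflow can be sketched in a few lines of Python. This is a hypothetical illustration of the pattern, not the team’s actual code; the record structure, field names, and threshold are assumptions.

```python
# Hypothetical sketch of the two-step workflow Zhu describes.
# Record structure and the min_assays threshold are illustrative assumptions.

def collect_bricks(raw_records):
    """Step 1: gather every available record for the target compounds."""
    return [r for r in raw_records if r.get("compound") is not None]

def build_house(records, min_assays=2):
    """Step 2: keep only the useful data and organize it by compound."""
    by_compound = {}
    for r in records:
        by_compound.setdefault(r["compound"], []).append(r["assay_result"])
    # Discard compounds with too little data to be informative.
    return {c: results for c, results in by_compound.items()
            if len(results) >= min_assays}

raw = [
    {"compound": "aspirin", "assay_result": "inactive"},
    {"compound": "aspirin", "assay_result": "active"},
    {"compound": "compound_x", "assay_result": "active"},
    {"compound": None, "assay_result": "active"},  # unusable record
]
organized = build_house(collect_bricks(raw))
```

The key design point is the same one Zhu makes with the house analogy: collecting the data and organizing it are separate stages, and not every “brick” that gets collected ends up in the final structure.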
What Zhu and Russo built was an algorithm that could detect oral toxicity by comparing public data from tested compounds to data on untested compounds. The algorithm checks how tested chemicals interact with different biomolecules (like proteins or other components of cells) and identifies patterns showing which chemical fragments within those chemicals interact with which biomolecules.
“For example, a particular chemical fragment may be responsible for a compound disrupting a protein necessary for proper cellular function, which then results in an observed toxicity,” says Russo. “Using this information, we can prioritize the tens of thousands of compounds that lack toxicity information, by the presence or absence of these chemical fragments, and the likelihood of their interactions with certain proteins.”
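The prioritization idea Russo describes can be illustrated with a small sketch: flag fragments that appear only in compounds known to be toxic, then rank untested compounds by how many of those fragments they carry. The fragment names and toxicity labels below are invented for illustration.

```python
# Hypothetical sketch of fragment-based prioritization.
# Tested compounds: the fragments they contain and whether they proved toxic.
tested = {
    "cmpd_A": ({"nitro", "phenol"}, True),
    "cmpd_B": ({"phenol", "ester"}, False),
    "cmpd_C": ({"nitro", "amine"}, True),
}

# Flag fragments seen in toxic compounds but never in safe ones.
toxic_frags = set().union(*(f for f, tox in tested.values() if tox))
safe_frags = set().union(*(f for f, tox in tested.values() if not tox))
suspect_frags = toxic_frags - safe_frags

def priority(fragments):
    """Rank an untested compound by how many suspect fragments it carries."""
    return len(fragments & suspect_frags)

untested = {
    "cmpd_X": {"nitro", "ester"},
    "cmpd_Y": {"phenol"},
}
ranked = sorted(untested, key=lambda c: priority(untested[c]), reverse=True)
```

In this toy example, the compound carrying a suspect fragment rises to the top of the list, mirroring how the real algorithm would flag it as a candidate for further testing.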
Unlike a DIY project, Zhu and Russo couldn’t just make a stop at the hardware store for their raw materials. They had to rely on PubChem, a digital database of chemicals, for the data they needed to feed their algorithm.
Though they only examined around 10,000 chemicals within the database of millions, they still needed high-performance computing (HPC) resources to sort through all the information.
“I am part of the Rutgers Center for Computational and Integrative Biology, which has a cluster of high-performance computers they call Kestrel,” says Russo. “Some toxicity datasets can be pretty big and we did a lot of machine learning and data science. There were a few parts where we had to train and optimize dozens to hundreds of machine learning models, and having access to the multiple nodes with multiple cores sure took the stress off my local machine.”
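The pattern Russo describes—training and scoring many models in parallel—can be shown on a single machine with Python’s standard library. This is a toy stand-in, not the team’s pipeline: the “model” is just a fixed-slope fit, where on a cluster like Kestrel each worker would be a node or core running a real training job.

```python
# Hypothetical single-machine version of parallel model training:
# one worker per hyperparameter setting, cheapest error wins.
from concurrent.futures import ThreadPoolExecutor

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 1.9, 4.2, 5.8, 8.1]  # roughly y = 2x, invented data

def train_and_score(slope):
    """'Train' a fixed-slope model and return its squared error."""
    err = sum((y - slope * x) ** 2 for x, y in zip(xs, ys))
    return slope, err

candidates = [s / 10 for s in range(0, 41)]  # slopes 0.0 .. 4.0

# Each candidate model is trained independently, so the work parallelizes
# cleanly across workers, just as it would across cluster nodes.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(train_and_score, candidates))

best_slope, best_err = min(results, key=lambda r: r[1])
```

Because each model trains independently of the others, this kind of search is “embarrassingly parallel,” which is exactly why having multiple nodes with multiple cores takes the load off a single laptop.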
Russo also explains that many parts of this project, such as adjusting the algorithm’s code, required only an ordinary laptop. However, HPC made sorting through the data a lot easier.
Not a perfect solution
The researchers have no illusions that their algorithm will completely replace animal testing anytime soon. Regardless, this work could alter how we research toxicity. For instance, scientists could check a large group of compounds for toxicity in a single batch and then determine which of those require additional animal testing.
What’s more, Zhu wants to expand this work beyond oral toxicity.
“There is another ongoing project using a similar algorithm in my lab focusing on the liver,” says Zhu. “The liver is a target organ for lots of compounds and is very important. And we expect to expand similar studies to even more complicated toxicity, like reproductive and developmental toxicity.”
Russo is also excited to see where this field of study goes. For him, this was a passion project.
“This idea that we can make inferences about problems in biology and chemistry just using preexisting data and a computer is pretty neat,” says Russo. “I get embarrassingly excited when I get new data to work with, and this project had tons of it.”
Which is a good thing, since modern science is all about the data, and we’re making more of it every day. The world needs more enthusiastic scientists like Zhu and Russo to extract meaning from all that abundant information and put it to good use.