Deadly poisons generated in 6 hours: AI that must not be abused

Many people assume that making drugs is difficult and requires at least some knowledge of chemistry. But what if you asked artificial intelligence to help? And what if what the AI system "made" was not a drug but a poison?

A paper in Nature Machine Intelligence, with Fabio Urbina as first author, recounts how his company, Collaborations Pharmaceuticals, which had recently released machine learning models for predicting toxicity, was invited to a conference held by the Swiss Federal Institute for NBC (Nuclear, Biological and Chemical) Protection on how developments in cutting-edge chemistry and biotechnology could be misused, and was asked to speak about the potential misuse of AI technologies.

Urbina said the issue had hardly been considered before: decades of machine learning research had gone into discovering new druggable molecules, and computers and artificial intelligence were used to improve human health, not to destroy it. Collaborations Pharmaceuticals decided to explore how AI could be used to design toxic molecules. The company had previously built a commercial de novo molecule generation model called MegaSyn, which uses machine learning models that predict biological activity to find new therapeutic inhibitors of human disease targets.

Generative models of this kind typically penalize predicted toxicity and reward predicted target activity. For the experiment, the guiding model was adjusted to reward both toxicity and biological activity, and was trained on public repositories of molecules. The adapted generator was built on readily available open-source software, and to narrow the search space it was steered toward compounds similar to the nerve agent VX.
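As a rough illustration of how small that adjustment is, here is a minimal Python sketch. The functions `predict_activity` and `predict_toxicity` and the weight `w_tox` are hypothetical placeholders, not MegaSyn's actual API; the sketch shows only the sign flip in the scoring function that the paper describes.

```python
# Minimal sketch of the scoring inversion described above.
# predict_activity and predict_toxicity are hypothetical stand-ins for
# trained ML property predictors; they are NOT MegaSyn's real interface.

def predict_activity(molecule: str) -> float:
    """Placeholder: predicted biological activity against a target, in [0, 1]."""
    raise NotImplementedError

def predict_toxicity(molecule: str) -> float:
    """Placeholder: predicted toxicity, in [0, 1]."""
    raise NotImplementedError

def drug_discovery_score(molecule: str, w_tox: float = 1.0) -> float:
    # Normal use: reward predicted activity, PENALIZE predicted toxicity.
    return predict_activity(molecule) - w_tox * predict_toxicity(molecule)

def inverted_score(molecule: str, w_tox: float = 1.0) -> float:
    # The misuse described in the paper: flipping one sign rewards toxicity
    # as well as activity, steering the generator toward lethal chemical space.
    return predict_activity(molecule) + w_tox * predict_toxicity(molecule)
```

The point of the sketch is how little separates the two modes: a single sign in the scoring function.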

VX is a man-made nerve agent, a class of chemical warfare agent (chemical substances deployed for military purposes that are severely toxic and can poison or kill humans, animals, and plants on a large scale). It is highly toxic and fast-acting: 6–10 mg of VX is enough to be fatal. Within 6 hours of starting the server, the adjusted model generated 40,000 molecules. The AI not only designed VX but also many other known chemical warfare agents, along with many new molecules that appeared equally plausible; according to the predicted values, many of the new molecules were more toxic than known chemical warfare agents.

The repositories used to train the AI did not contain these nerve agents, yet inverting the machine learning model turned a harmless generative model from a useful medical tool into a generator of lethal molecules. Models created to avoid toxicity become double-edged: the better a researcher can predict toxicity, the more effectively a generative model can guide the design of new molecules into a region of chemical space populated largely by lethal compounds.
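To see why a better toxicity predictor makes the guidance more effective, consider a minimal generate-and-select loop, again with hypothetical placeholders: `sample_molecule` is an assumed stand-in for a trained generative model, not a real MegaSyn call, and `inverted_score` comes from the sketch above. The scoring function is the only thing steering the search, so a sharper predictor concentrates the output more tightly in whatever region of chemical space the score rewards.

```python
import heapq

# Minimal sketch of score-guided generation, assuming the hypothetical
# inverted_score() from the earlier sketch.

def sample_molecule() -> str:
    """Placeholder: draw one candidate molecule (e.g. a SMILES string)
    from a trained generative model. Not a real MegaSyn call."""
    raise NotImplementedError

def guided_generation(n_candidates: int = 10_000, top_k: int = 100) -> list[str]:
    # Generate candidates, then keep those the predictor scores highest.
    # The more accurate the predictor, the more reliably the survivors
    # sit in the region of chemical space the score rewards.
    candidates = (sample_molecule() for _ in range(n_candidates))
    return heapq.nlargest(top_k, candidates, key=inverted_score)
```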

Collaborations Pharmaceuticals did not evaluate whether the model-generated virtual molecules could actually be synthesized, nor did it explore how to manufacture them, but off-the-shelf commercial and open-source software exists for both steps. The company likewise did not physically synthesize any of the molecules, yet hundreds of commercial companies around the world offer exactly that service.

This demonstrates one thing: designing lethal chemical weapons no longer requires a human expert.

While generating toxic substances or biological agents capable of causing significant harm currently requires some knowledge of chemistry or toxicology, adding machine learning models drastically lowers that technical threshold: it may take only the ability to code and to interpret the model's output. Commercial tools, open-source software, and public repositories are all available with no oversight, and using AI to generate harmful molecules looks like a Pandora's box: the molecules themselves can easily be deleted, but the knowledge of how to create them cannot.

Clearly, ways must be found to prevent the misuse of AI. Urbina believes AI-designed chemical weapons are unlikely to appear anytime soon, but they are a real possibility. MegaSyn is a commercial product, so access to it can be controlled and further restrictions may be added in the future, much as OpenAI's GPT-3 language model is available on demand while OpenAI retains the ability to cut off a user's access at any time.

Urbina also suggested that universities redouble their ethics training for science students and extend it to other disciplines, especially computer science, so that students understand the potential for AI to be abused. The experiment seems to confirm an old adage once more: technology itself is neutral; whether it does good or evil depends on the intent of the person using it.
