AI Could Trigger the Next Pandemic: Here’s How

Here’s an underappreciated piece of the glue that holds society together: Google makes it a little hard to find out how to commit an act of terrorism.

If you ask Google how to build a bomb, kill someone, or use a biological or chemical weapon, the first few pages of results won’t tell you much about how to do any of those things.

That’s not because the information doesn’t exist online. Working bombs have been built from publicly available knowledge, and scientists have debated withholding the blueprints for deadly viruses for exactly this reason. But even though the information is out there, it isn’t easy to surface, because Google and other search engines have deliberately worked to make it hard to find.

How many lives does that save? It’s hard to say: we can’t exactly run a controlled experiment in which instructions for atrocities are sometimes easy to find and sometimes not.

But it turns out that, because of how quickly large language models (LLMs) are getting better, we might be running an unplanned experiment on just that.

Security Through Obscurity

When they first launched, AI systems like ChatGPT were often willing to give detailed, accurate instructions for building a bomb or carrying out a biological attack. OpenAI has largely fixed that over time. But a class exercise at MIT, written up in a preprint paper earlier this month and covered last week in Science, found that groups of undergraduates with no background in biology could easily get detailed suggestions for biological weapons out of AI systems.

“In one hour, the chatbots suggested four potential pandemic pathogens, explained how they can be generated from synthetic DNA using reverse genetics, supplied the names of DNA synthesis companies unlikely to screen orders, identified detailed protocols and how to troubleshoot them, and recommended that anyone lacking the skills to perform reverse genetics engage a core facility or contract research organization,” reports the paper, whose lead authors include MIT biorisk expert Kevin Esvelt.

To be clear, building a bioweapon still requires substantial work and expertise, and ChatGPT’s instructions are probably not yet complete enough for non-virologists to follow. But it raises the question: will “security through obscurity” keep working as a defense against mass atrocities once this information becomes easier to get?

Language models should give us better access to information, more thorough and helpful coaching, personalized advice, and plenty of other good things. What’s not so good is when that cheery personal coach walks someone through an act of terrorism.

But it seems to me that there are two ways to solve the problem.

Controlling Information in an AI World

“We need better controls at all of these chokepoints,” Jaime Yassif of the Nuclear Threat Initiative told Science. It should be harder to get AI systems to spell out detailed instructions for building bioweapons. And many of the security gaps the AI systems inadvertently exposed can be closed directly: for instance, the chatbots pointed users toward DNA synthesis companies that don’t screen orders, and which are therefore more likely to fill a request to synthesize a dangerous virus.
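On the AI side, to make the idea of a chokepoint control concrete, here is a minimal sketch of one approach: gating a chatbot’s draft answer behind a safety check before it is returned to the user. Everything in it is hypothetical; real systems use trained moderation classifiers, not a keyword list like this one, which would be trivially easy to evade.

```python
# Illustrative sketch only: gate a chatbot's draft answer behind a safety check.
# `looks_like_bioweapon_help` is a hypothetical stand-in for a trained
# moderation classifier; a hard-coded phrase list is not a real defense.

BLOCKED_TOPICS = ("reverse genetics protocol", "pandemic pathogen synthesis")

def looks_like_bioweapon_help(text: str) -> bool:
    """Crude placeholder check for dangerous content in a draft answer."""
    lowered = text.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)

def respond(draft_answer: str) -> str:
    """Return the model's draft only if it passes the safety check."""
    if looks_like_bioweapon_help(draft_answer):
        return "I can't help with that."
    return draft_answer
```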

We could mandate screening for all DNA synthesis companies. We could also, as Esvelt prefers, remove papers describing how to make dangerous viruses from the training data of powerful AI systems. And in the future, we should be more cautious about publishing papers that give step-by-step recipes for building deadly viruses.
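To make the screening idea concrete, here is a minimal sketch of the kind of check a synthesis company could run: compare each incoming order against a database of sequences of concern before filling it. The hazard entry, window size, and threshold below are made-up placeholders, not a real biosecurity protocol, which would use far more robust matching.

```python
# Illustrative sketch only: a toy k-mer screen for DNA synthesis orders.
# The hazard entry is a made-up placeholder; a real system would load a
# curated database of "sequences of concern".

HAZARD_KMERS: set[str] = {
    "ATGCGTACGTTAGCCTAGGA",  # hypothetical 20-base hazard fragment
}

def kmers(seq: str, k: int = 20) -> set[str]:
    """Return all length-k substrings of a DNA sequence."""
    seq = seq.upper()
    return {seq[i : i + k] for i in range(len(seq) - k + 1)}

def screen_order(order_seq: str, threshold: int = 1) -> bool:
    """Flag an order sharing at least `threshold` k-mers with the hazard set."""
    return len(kmers(order_seq) & HAZARD_KMERS) >= threshold

def process_order(order_seq: str) -> str:
    if screen_order(order_seq):
        return "HOLD: escalate for human biosecurity review"
    return "OK: proceed to synthesis"

if __name__ == "__main__":
    print(process_order("ATGCGTACGTTAGCCTAGGA" + "ATGC" * 5))  # HOLD
    print(process_order("ATGC" * 15))                          # OK
```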

The good news is that the biotech world is starting to take this issue seriously. Ginkgo Bioworks, a leading synthetic biology company, has partnered with US intelligence agencies to develop software that can rapidly identify manufactured DNA, giving investigators the ability to fingerprint an artificially created germ. That collaboration is an example of how cutting-edge technology can protect the world from the downsides of… cutting-edge technology.
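As a rough illustration of the fingerprinting idea, a detector might scan a genome for signatures associated with common lab constructs. The motifs below are made-up placeholders, and tools of this kind presumably rely on far more sophisticated statistical analysis of sequence composition than simple substring matching.

```python
# Illustrative sketch only: flag a genome as possibly engineered if it
# contains signatures associated with common lab constructs. Both motifs
# are hypothetical placeholders, not real vector or cloning-scar sequences.

LAB_SIGNATURES = {
    "hypothetical_vector_fragment": "TTGACAGCTAGCTCAGTCCT",
    "hypothetical_cloning_scar": "GGTCTCAGAATT",
}

def engineered_signatures(genome: str) -> list[str]:
    """Return the names of any known lab signatures found in the genome."""
    genome = genome.upper()
    return [name for name, motif in LAB_SIGNATURES.items() if motif in genome]

if __name__ == "__main__":
    sample = "ATGC" * 10 + "GGTCTCAGAATT" + "ATGC" * 10
    print(engineered_signatures(sample))  # ['hypothetical_cloning_scar']
```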

Both artificial intelligence (AI) and biotechnology have enormous potential to improve the world. And managing the risks from one also helps manage the risks from the other: making it harder to synthesize lethal plagues, for example, guards against some AI catastrophe scenarios just as it guards against human-driven ones. The crucial thing is that, ChatGPT or no ChatGPT, we stay proactive and make sure printing a biological weapon remains too difficult for anyone to do on the cheap, rather than letting complete instructions for bioterror become available online as a natural experiment.
