Image: A globe representing the use of artificial intelligence (Yuichiro Chino/Getty Images)

As artificial intelligence continues to revolutionize how we interact with technology, there is no denying that it will have a profound impact on our future. There’s also no denying that AI poses some serious risks if left unchecked.

Enter a new team of experts assembled by OpenAI.

Also: Google is expanding its bug bounty program to include rewards for AI attack scenarios

Designed to help combat so-called “catastrophic” risks, OpenAI’s team of experts, called “Preparedness,” will assess current and projected future AI models for several risk factors. These include individualized persuasion (tailoring the content of a message to what the recipient wants to hear), cybersecurity, autonomous replication and adaptation (that is, an AI changing itself on its own), and even extinction-level threats such as chemical, biological, radiological, and nuclear attacks.

If the idea of AI starting a nuclear war seems a bit far-fetched, remember that earlier this year a group of top AI researchers, engineers, and executives, including Google DeepMind CEO Demis Hassabis, warned that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

How could artificial intelligence cause a nuclear war? Computers are ever-present in determining when, where and how military strikes occur these days, and artificial intelligence will certainly be involved. But artificial intelligence is susceptible to hallucinations and does not necessarily have the same philosophies that a human might. In short, an AI may decide that it is time to launch a nuclear strike when it is not.

Also: Organizations struggle with ethical adoption of AI. Here’s how you can help

“We believe that frontier AI models, which will exceed the capabilities of today’s most advanced models, have the potential to benefit all of humanity. But they also pose increasingly severe risks,” a statement from OpenAI said.

To help keep AI in check, OpenAI says the team will focus on three key questions:

  • When deliberately misused, how dangerous are the leading AI systems we have today and those that will come in the future?
  • If the weights of frontier AI models were stolen, what exactly could a malicious actor do?
  • How can we build a framework that monitors, assesses, predicts, and protects against the dangerous capabilities of frontier AI systems?

The team is led by Aleksander Madry, director of the MIT Center for Deployable Machine Learning and co-leader of the MIT AI Policy Forum.

Also: The ethics of generative AI: How we can harness this powerful technology

To support its research, OpenAI has also launched what it calls the “AI Preparedness Challenge” for catastrophic misuse prevention. The company is offering up to $25,000 in API credits to as many as 10 of the top submissions that surface potential, but likely catastrophic, misuse of OpenAI’s models.
