Leading AI researchers issue renewed warnings about advanced autonomous AI

In a recent paper, a group of leading AI researchers and experts emphasizes the need to focus on risk management, safety, and the ethical use of AI.

In their paper, “Managing AI Risks in an Era of Rapid Progress,” the researchers warn of societal risks such as social injustice, instability, global inequality, and large-scale criminal activity.

They call for breakthroughs in AI safety and ethics research and effective government oversight to address these risks. They also call on major technology companies and public funders to invest at least one-third of their AI research and development budgets in safety and ethical use.

Caution with autonomous AI systems

The research team specifically warns against AI in the hands of “a few powerful actors” that could entrench or exacerbate global inequalities, or facilitate automated warfare, tailored mass manipulation, and pervasive surveillance.

In particular, they warn that autonomous AI systems could amplify existing risks and create entirely new ones.

These systems would pose “unprecedented control challenges,” and humanity may not be able to keep them in check if they pursue undesirable goals. According to the paper, it is not yet clear how AI behavior can be aligned with human values.

“Even well-meaning developers may inadvertently build AI systems that pursue unintended goals—especially if, in a bid to win the AI race, they neglect expensive safety testing and human oversight,” the paper says.

Critical technical challenges that need to be addressed include oversight and honesty, robustness, interpretability, risk assessment, and emerging capabilities. The authors argue that solving these problems must become a central focus of AI research and development.

In addition to technical developments, they say governance measures are urgently needed to prevent recklessness and misuse of AI technologies. The authors call for national institutions and international governance structures to enforce standards and ensure responsible development and implementation of AI systems.

