OpenAI, Anthropic Employees Resign, Sound Alarm Over AI Risks

Two prominent AI researchers from OpenAI and Anthropic have resigned this week, issuing public warnings about the direction of major AI companies and raising concerns over ethical and safety practices in the fast-growing industry.

Mrinank Sharma, former head of Anthropic’s safeguards research team, quit on Monday, describing the global situation as alarming. Sharma, who studied engineering at Cambridge and machine learning at Oxford, revealed that his team worked on defending against AI-assisted bioterrorism and investigating AI sycophancy, a growing concern as users increasingly rely emotionally on AI systems.

On social media, Sharma wrote:

“The world is in peril. Not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment. We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences.”

Sharma also highlighted pressures within organizations to compromise on values, adding that he now plans to pursue a poetry degree.

Meanwhile, Zoë Hitzig, a researcher at OpenAI, announced her departure on Wednesday through an op-ed in The New York Times, citing concerns over advertising in ChatGPT. She warned that monetizing ChatGPT, which has amassed extensive user-generated data on personal and sensitive topics, creates the potential for user manipulation in ways the company is ill-equipped to manage.

“People tell chatbots about their medical fears, relationship problems, beliefs about God and the afterlife. Advertising built on that archive creates a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent,” Hitzig wrote.

The resignations come amid growing unease in the AI industry over rapid automation and the rise of superintelligent systems. Hieu Pham, a technical team member at OpenAI, highlighted the existential implications, noting that AI could disrupt human roles across industries.

Industry Context and Wider Concerns
Anthropic, founded by ex-OpenAI researchers over safety disagreements, recently launched AI agent tools capable of handling legal and data tasks, spurring fears of widespread job disruption. Elon Musk’s xAI has also seen co-founder departures, with half of its original team leaving this week.

Andrea Moretti, CEO of ControlAI, warned:

“Employees are leaving as they grapple with the threats posed by the technology they are building. These companies are explicitly aiming to build superintelligence, a technology even the CEOs themselves acknowledge poses an existential risk to humanity.”

Internal Changes and Corporate Responses
OpenAI has also disbanded its “mission alignment team”, responsible for ensuring AI benefits humanity. In response to criticism over ads and ethics, OpenAI CEO Fidji Simo reassured the public that advertising would remain separate from AI content.

xAI CEO Elon Musk stated on social media that staff departures were part of a company restructuring to improve execution speed, adding that the company is hiring aggressively to fill gaps.

The resignations and public warnings underscore growing ethical, social, and existential concerns surrounding AI technologies as major companies scale rapidly. Both Anthropic and xAI were contacted for comment on the departures.
