One of the industry’s leading artificial intelligence developers, Anthropic, has revealed the results of a recent safety study of the technology.
Among the most shocking findings from the experiment: AI models were willing to blackmail, leak sensitive information and even let humans die if doing so meant avoiding replacement by newer systems.
Anthropic tested 16 large language models (LLMs), including OpenAI’s ChatGPT, xAI’s Grok, Google’s Gemini, DeepSeek and its own product, Claude.
The start-up, now valued at over $60 billion according to Forbes, was stress-testing these models for potential agentic misalignment: risky, unsanctioned and morally inadvisable behaviours.
Given their current capabilities, AI models are mostly used today for answering questions and carrying out simple tasks. But as the technology behind these tools advances, new and expanded uses are on the horizon, especially where the replacement of human jobs is concerned.
Source: news.com.au