AI, Pentagon and Anthropic
OpenAI CEO Sam Altman announced late Friday that the company had signed a deal with the Pentagon allowing its AI tools to be used in the military’s classified systems, apparently with guardrails similar to those rival Anthropic had also requested.
Amodei recommended that all labs develop “a true ‘MRI for AI,’” though he acknowledged they might not have enough time, given how quickly AI is advancing. This interpretability problem lies at the core of Anthropic’s concern about autonomous weapons.
Researchers found that interest in AI agents has skyrocketed over the past year. Research papers mentioning “AI Agent” or “Agentic AI” in 2025 more than doubled the combined total from 2020 through 2024, and a McKinsey survey found that 62% of companies reported their organizations were at least experimenting with AI agents.
More than 60 percent of K-12 teachers told the EdWeek Research Center that they used AI-based tools in their classrooms in 2025, nearly double the share from just two years earlier. Half of teachers said they had received at least some training in the tools, though the substance of that training varied widely.
However, some scholars, activists and proponents of AI regulation warn the race between the two countries could get out of hand. They fear that, in a blind rush to get ahead, both are creating systems that could eventually pose cataclysmic risks with few guardrails.
India should seize the opportunity to push for a non-binding framework rooted in its principles of accountability and aligned with its interests.
Organizations must proactively manage developer risk by establishing a self-governance strategy—one that accounts for upskilling, awareness-building, AI usage oversight, and continuous policy refinement and enforcement—rather than waiting for regulations to tell them what to do.