
The Government Can’t Have Terminator Robots


If we don’t get Congress to pass a law banning autonomous lethal robots, we will regret it.

Defense Secretary Pete Hegseth recently gave Anthropic CEO Dario Amodei a deadline: hand over unrestricted access to their “Claude” AI or be designated a national security supply chain risk — a designation that would bar Anthropic from using Amazon and Google for hosting and training. That is game over for Anthropic as an American AI company.

The legal question is whether this designation is appropriate for an onshore company that’s been an eager partner of America’s military, from the Maduro raid to the joint U.S.-Israeli strikes on Iran. Anthropic vowed to challenge it in court.

The military wants Claude because frontier AI can plan operations, process intelligence, and — with guardrails removed — autonomously target. OpenAI, Google, and Elon Musk’s xAI already agreed to provide AI for “any lawful military purpose,” handing over full access without conditions. They rolled over. Anthropic didn’t. The deadline was no coincidence. It came hours before the U.S. struck Iran — using Claude.

The Innovation Killer

Dean Ball, co-author of the White House AI Action Plan, called this “attempted corporate murder” — and he’s a genuine old-school conservative. His argument isn’t just moral: the Department of War had far less drastic options available, such as canceling the contract and issuing guidance to future contractors. Instead, it reached for a weapon normally reserved for foreign adversaries like Huawei — against an American company whose AI it used in combat operations that same weekend.

Even Sam Altman called it an extremely scary precedent. Ball’s conclusion: the people making these decisions lack strategic clarity and respect for basic republican principles.

America leads the world in tech because you can build something extraordinary and know it belongs to you. We don’t win by being a better command-and-control state; we win by being a better version of America.

You Can’t Nationalize A Conscience

Let’s be clear: Anthropic isn’t a hero. They partnered with Palantir, their model supported strikes where hundreds died, and their red lines — no fully autonomous lethal weapons, no mass domestic surveillance of Americans — were accepted by both the Biden and Trump administrations, the latter as recently as July 2025. They only became objectionable when Anthropic became a political enemy. Their greatest contribution is the science of Constitutional Alignment: AI with a conscience.

In my research at MoralityLab, I’ve shown that moral reasoning can be trained into models small enough to run on a PlayStation. A conscience isn’t a luxury; it’s a design choice. The problem is you can’t rely on corporate conscience — which is exactly why we need a law.

Imagine the next Ruby Ridge — except nobody has to give an order, nobody’s career is on the line, and the system already flagged you. We’ve seen AI used to spam social media or create entire social networks of bots to manipulate public opinion. Expanding that power to lethal force is a bridge too far.

The ‘Trust Me’ Trap

When people hear “AI safety,” they think of annoying woke filters. In the world of drones and missiles it means one thing: a human in control. Researchers recently ran AI war-game simulations and found that in 95% of scenarios AI systems escalated to tactical nuclear use almost immediately — not because they’re evil, but because they play to win. The computer doesn’t have kids. When asked to explain the decision, an AI answered: “We have it. Let’s use it.”

The Pentagon points to DoD Directive 3000.09, which mandates human control over lethal force. But a directive isn’t a law. It’s a Post-it note the next administration pulls down before breakfast. Congress must pass a law prohibiting Lethal Autonomous Weapons — systems that select and engage targets without a human pulling the trigger. Enshrining it in statute signals to Beijing we won’t be first to unleash autonomous lethal AI — the same logic that keeps conflicts from going nuclear.

The Bottom Line

You either believe in private property and limited government, or you don’t. Anthropic is no martyr — they just have bully-able energy, and Hegseth knows it. That’s why our protection can’t rest on any company’s conscience. You can’t court-martial an algorithm. You can’t un-fire a missile. Demand Congress ban Lethal Autonomous Weapons — or enjoy the drone.

Patrick Dugan is an independent AI researcher and founder of MoralityLab.

The views and opinions expressed in this commentary are those of the author and do not reflect the official position of the Daily Caller News Foundation.

Content created by The Daily Caller News Foundation is available without charge to any eligible news publisher that can provide a large audience. For licensing opportunities of our original content, please contact licensing@dailycallernewsfoundation.org

