Washington’s attitude toward AI has shifted in a subtle, almost reluctant way, but anyone paying close attention can sense it. Just a year ago, the White House’s message was clear: step aside and let the technology run its course.
The New York Times reports that President Trump is considering an executive order that would establish a federal working group to examine oversight of new A.I. models. Given the stark reversal, it is difficult not to read between the lines.
| Field | Detail |
|---|---|
| Topic | Federal A.I. Oversight & Policy Reversal |
| Reported By | The New York Times, citing officials briefed on deliberations |
| Trigger Event | Concerns over Anthropic’s new model, Mythos |
| Key Figure | President Donald Trump (second term, 2025–present) |
| Defense Secretary | Pete Hegseth, Senate Armed Services testimony, April 30, 2026 |
| Foundational Law | National Artificial Intelligence Initiative Act of 2020 |
| Executive Order Revoked | Biden’s 2023 A.I. Safety Order (revoked Jan 2025) |
| Current Framework | “Winning the Race: America’s A.I. Action Plan” |
| Pentagon Partners | SpaceX, OpenAI, Google, NVIDIA, Reflection, Microsoft, AWS |
| Concern Areas | Autonomous weapons, surveillance, cyberattack capabilities |
Anthropic’s most recent model, Mythos, is the purported trigger. According to cybersecurity researchers, the system’s coding skills could accelerate cyberattacks and uncover vulnerabilities faster than any human team. Conversations with members of the security community suggest this is no longer theoretical. Over coffee in Arlington, a former federal analyst bluntly described it as an open dialogue that used to take place in hushed tones. The tools exist. The question is what comes next.
Trump’s previous A.I. blueprint, made public in July, leaned strongly the other way: it expanded export routes to allies, relaxed environmental regulations, and cast regulation as a hindrance to American competitiveness against China. On his first day back in office in 2025, he revoked Biden’s 2023 executive order, which had required developers of high-risk A.I. systems to share safety test results with the government. That order had teeth, and removing it sent a signal. Reintroducing oversight, even in working-group form, sends a different one.

The Pentagon, meanwhile, is moving in only one direction. The U.S. military must “stay ahead,” Defense Secretary Pete Hegseth told the Senate Armed Services Committee last week, citing domain awareness and targeting cycles as key applications. Last Friday, the Department of Defense awarded contracts to seven major companies: SpaceX, OpenAI, Google, NVIDIA, Reflection, Microsoft, and Amazon Web Services. The language was bureaucratic, about improving warfighter decision-making and simplifying data synthesis, but the implications go further. The military is no longer experimenting. It is deploying.
Gregory Allen, who helped develop the current military A.I. policy, put it simply. Drones used to count the people in a photo. Now a model can tell you who wasn’t there yesterday, what vehicles they are standing near, the range of their weapons, and which artillery unit can hit them fastest. That is the present, not a forecast.
In April, Trump told Time magazine that a human would always make the final decision. It’s a comforting line. Whether it holds up under operational pressure, when seconds count and the volume of data is overwhelming, is another matter entirely. Lawmakers, especially Democrats on the Armed Services Committee, have been advocating for safeguards since the last Pentagon A.I. contract ended in public disputes over autonomous weapons and domestic surveillance. Their concerns haven’t gone away. They have simply been outpaced.
The deeper issue is structural. While federal policy continues to rely on the administration’s own “innovation-first” framework, state legislatures are enacting legally binding A.I. regulations that take effect in 2026. Washington, it seems, wants both the safety of oversight and the speed of deregulation, without deciding which matters more. Years ago, the industry largely ignored Tesla’s autonomy claims until someone was harmed. A.I. policy may be heading the same way, only faster and with far greater stakes than anyone is currently acknowledging.
