If you’re willing to look for it, there is a pattern here. The federal government discovers a new technology, declares it crucial to the country’s competitiveness, lowers the barriers to adoption, and then quietly dismantles the oversight initiatives designed to prevent the whole thing from blowing up. It happened with cloud computing about fifteen years ago. It appears to be happening again, this time with artificial intelligence.
For the past two years, I have covered cybersecurity and federal IT contracting, which has allowed me to observe firsthand how Washington responds to these moments of technological fervor. And the real lesson isn’t comforting. There is genuine optimism. There is a true sense of urgency. But the infrastructure to deal with the fallout is largely missing.
| Key Fact | Detail |
| --- | --- |
| Topic | Federal government’s rapid adoption of artificial intelligence |
| Primary agency involved | General Services Administration (GSA) |
| Oversight program | FedRAMP (Federal Risk and Authorization Management Program), established 2011 |
| AI pricing deals (2024–25) | OpenAI ChatGPT at $1, Google Gemini at $0.47, Grok by xAI at $0.42 |
| Key warning issued by | GSA — usage costs “can grow quickly without proper monitoring” |
| Historical parallel | Federal cloud computing transition under the Obama administration (early 2010s) |
| Current FedRAMP status | Operating with “absolute minimum of support staff” and “limited customer service” |
| Third-party assessment issue | Assessors are paid by the companies they evaluate, creating a conflict of interest |
| Cybersecurity concern | Sensitive government data increasingly processed through AI tools with minimal vetting |
| Related framework | OECD Digital Government Policy Framework (6 dimensions of digital maturity) |
The Trump administration announced a series of agreements with large tech firms last year, portraying them as a practical way to give federal employees access to AI at bargain prices. ChatGPT for $1. Google’s Gemini for 47 cents. Grok for 42. At first glance, it looks like a great deal: Washington is finally keeping up with the private sector.
But you become wary of the word “bargain” when you spend time with people who made similar agreements during the cloud era. Ten years ago, as part of a post-cyberattack goodwill initiative, Microsoft offered free security upgrades to federal customers, a strikingly similar gesture.

The result, according to a former Microsoft salesperson, was “successful beyond what any of us could have imagined.” Agencies that accepted the free offer were locked in and would have to pay hefty fees if they attempted to leave. The GSA now specifically cautions that the costs associated with using AI “can grow quickly without proper monitoring and management controls.” It’s easy to overlook that warning in the rush of government guidance documents.
And there’s the question of who is watching. The gatekeeper was supposed to be FedRAMP, a program created in 2011 to assess whether cloud products meet federal security standards. Centralizing the screening process was a good idea; it meant individual agencies wouldn’t have to start from scratch. But examining the program’s handling of Microsoft’s GCC High cloud offering turned up some unsettling information. FedRAMP ultimately approved the product despite significant internal concerns about its cybersecurity.
Speaking about the process, former employees described a team that was simply worn out, outmatched in resources and outmaneuvered by a business that had a better grasp of the bureaucratic landscape than the regulators did. The same program now runs with what it publicly calls an “absolute minimum of support staff.” It was an early target of the Department of Government Efficiency. The net effect is a watchdog that has been hobbled just when it is most needed.
It’s difficult to ignore the timing. The program intended to verify AI tools is running on fumes as federal agencies start feeding them mountains of sensitive data. Speaking openly, former GSA officials characterize FedRAMP as a “paper pusher”—an office that handles more paperwork than it actually assesses risk. In response to that description, a GSA representative stated that the program “operates with strengthened oversight and accountability mechanisms.” It sounds more like a press release than a defense.
The third problem may be the most quietly concerning. For a long time, the federal government has relied on outside companies, or “third-party assessors,” to independently determine whether cloud products are secure enough for government use.
The issue, as reporting has made clear, is that these assessors are paid by the very businesses they are assessing. Two assessors reportedly recommended Microsoft’s GCC High even though they were unable to thoroughly evaluate the product. One firm did not reply to inquiries; the other disputed the account. FedRAMP, aware of how this financial arrangement could skew findings, allegedly established an unofficial back channel so assessors could voice concerns they wouldn’t include in official reports. That’s a telling workaround: an admission that the official process has structural problems, resolved by adding a quiet side door rather than fixing the structure.
It’s worth stepping back to consider what all of this means, particularly in the context of AI. Cloud computing was disruptive, but the main questions were where data was stored and who managed the servers. AI is different. These systems don’t just store data; they evaluate it, act on it, and increasingly make decisions based on it.
These tools’ algorithmic decisions have actual repercussions, such as which cases are reported, which benefits are authorized, and which security risks are revealed. The effects spread in ways that are genuinely difficult to undo if those systems are poorly vetted, biased in their design, or shaped by vendors with profit motives that don’t align with the public interest.
There is a broader pattern worth noting. Digital government maturity, according to the OECD, depends on factors like accountability, transparency, and user-driven design. Notably, data for the United States is absent from the 2023 Digital Government Index. Whether that’s bureaucratic timing or something more significant is unclear. What is evident is that the government’s rush to adopt AI is outpacing the mechanisms meant to ensure the adoption succeeds.
That was true of the cloud. It appears to be true of AI. The question is whether the lesson from fifteen years ago simply didn’t stick, or whether anyone in a position to slow things down is genuinely paying attention.
