OpenAI, the maker of the widely used AI chatbot ChatGPT, is facing significant backlash over its decision to fill a void at the U.S. Department of Defense (DoD). That void emerged after a competitor, Anthropic, refused to lift its restrictions on using its AI for surveillance or for the development of autonomous weapons systems. OpenAI’s move to engage with the DoD has drawn a strong reaction from both its user base and its own employees, many of whom did not anticipate that their contributions to AI development would be leveraged for government mass surveillance initiatives.
Early indicators suggest a dramatic public response to the announcement. Reports indicate that ChatGPT uninstalls surged nearly 300% in the immediate aftermath of the company disclosing its deal with the Department of Defense. The user dissent prompted an acknowledgment from OpenAI CEO Sam Altman, who conceded that the initial agreement had been "opportunistic and sloppy." In an attempt to assuage concerns and clarify the company’s position, he subsequently re-published an internal memo on social media. The memo stipulated that modifications to the agreement now explicitly state: "Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, [and] FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals."
However, the crux of the controversy lies in the interpretation of these stipulations, particularly the phrase "consistent with applicable laws." The U.S. government’s historical approach to surveillance suggests an understanding of "applicable law" considerably more permissive than strict privacy protections would allow. For years, governmental bodies have embraced interpretations that permit mass surveillance and significant infringements on civil liberties, while actively resisting judicial oversight in these matters. The very notion of what constitutes "domestic surveillance" under U.S. law remains a subject of ongoing debate and legal contention, with successive administrations pushing the boundaries of existing legislation.
The emphasis on the word "intentionally" within OpenAI’s revised agreement is also a point of significant contention. For an extended period, government agencies have maintained that the mass surveillance of U.S. citizens occurs only "incidentally." This argument posits that communications of U.S. persons, when interspersed with communications from individuals located outside the United States, are swept up in surveillance programs ostensibly designed to monitor foreign communications exclusively. This "incidental collection" argument has been a cornerstone of the government’s defense against claims of illegal domestic surveillance, allowing for the broad acquisition of data without direct intent to monitor U.S. nationals.
OpenAI’s own amendment to the contract, published on its website, elaborates on this point. It states: "For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information." The word "deliberate" is a red flag for privacy advocates and civil liberties organizations, because intelligence and law enforcement agencies frequently rely on data acquired incidentally or purchased through commercial channels, practices that often bypass the stricter privacy protections governing more direct surveillance methods. The ease with which such data can be acquired and folded into surveillance frameworks raises doubts about how effective this clause would be at preventing intrusive monitoring.
Another clause that has drawn scrutiny reads: "The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law." The ambiguity of terms like "unconstrained" is particularly problematic: without a precise definition, and with no clarity about who supplies one, there is ample room for interpretations that favor governmental interests over individual privacy. Legal experts often call this kind of phrasing "weasel words": terms deliberately crafted to create ambiguity so that one party can avoid genuine accountability for contract violations.
This mirrors patterns observed in previous negotiations, such as those between Anthropic and the Pentagon. In that instance, the Pentagon reportedly agreed to adhere to Anthropic’s red lines only "as appropriate." This suggests a potential strategy by governmental bodies to publicly commit to limitations in principle while strategically preserving broad flexibility in practice. Such an approach allows for public relations wins while maintaining operational latitude, a common tactic when navigating sensitive technological and ethical boundaries.
OpenAI has also highlighted assurances from the Pentagon that the National Security Agency (NSA) would be prohibited from utilizing OpenAI’s tools without a new, explicit agreement. Furthermore, the company asserts that its deployment architecture will facilitate verification processes to ensure that no stipulated red lines are crossed. However, historical precedent suggests that secret agreements and technical assurances alone have consistently proven insufficient to constrain surveillance agencies. They are not a substitute for robust, enforceable legal frameworks and genuine transparency. The history of intelligence gathering is replete with examples where technological safeguards or contractual limitations were circumvented or reinterpreted to achieve broader surveillance objectives.
While OpenAI executives may genuinely believe that their contractual relationship with the Pentagon can serve as a mechanism to ensure the responsible use of AI tools by the government, in line with democratic processes, current evidence suggests this hope may be overly optimistic. The company’s charter promises the public that it will "avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power." However, the enabling of mass surveillance demonstrably does both of these things, raising questions about the practical application of these stated principles.
This perceived naivete on the part of OpenAI is not merely a philosophical concern; it carries potentially dangerous implications. In an era where governments are increasingly willing to adopt extreme and often unfounded interpretations of "applicable laws" to expand surveillance powers, companies must demonstrate a commitment to their principles with concrete actions and robust safeguards. The historical record is a stark reminder that many of the world’s most egregious human rights violations were, at the time, considered "legal" under the prevailing legal frameworks.
OpenAI is not alone in this regard. Several consumer-facing technology companies are walking a similar tightrope: reassuring the public about their ethical stances while pursuing lucrative contracts in government mass surveillance. The result is a perception of marketing double-speak, because profiting from mass surveillance and upholding human rights are inherently incompatible: companies cannot effectively pursue both objectives at once.
Furthermore, the fundamental issue remains that the power to define the boundaries of our privacy should not be concentrated in the hands of a select few. Relying on the discretion of either corporate CEOs or government officials to protect fundamental civil liberties is an inherently precarious position for the public. A more robust and democratic approach would involve clear, transparent, and legally binding frameworks that explicitly protect individual privacy from unwarranted governmental intrusion, regardless of the technological means employed. The current situation underscores the urgent need for public discourse and legislative action to establish unequivocal safeguards against AI-powered mass surveillance.