Interpreting the Anthropic vs. War Department Conflict: What Does Trump Intend to Do?
Original Title: Clawed
Original Author: Dean W. Ball
Translation: Peggy, BlockBeats
Editor's Note: When an individual's experience of life and death is intertwined with the rise and fall of a national system, political narrative ceases to be an abstract institutional discussion and becomes a matter of deep emotional recognition. Taking the death of a father and the birth of a child as its starting point, this article extends the private realization that "death is a process" into a reflection on the current state of the American republic. In the author's view, the present conflict between an artificial intelligence company and the government is not an isolated event, but a symptom of long-term institutional erosion and power imbalance.
The article centers on the controversy between Anthropic and the U.S. defense establishment, discussing not only contractual terms, policy boundaries, and the threat of a "supply chain risk" designation, but also a more fundamental question: in the era of frontier artificial intelligence, who should hold control? Private companies, executive power, or some as-yet-unformed public mechanism? As national security becomes a justification for expanding power, and as policy increasingly relies on ad hoc and coercive arrangements, are the rule-boundedness and predictability of the republican system weakening?
Technological leaps and institutional changes may occur simultaneously, and their intersection often influences the direction of an era. The author questions the government's practices, holds hope for the reconstruction of future institutions, and reminds readers not to equate "democratic control" simply with "government control." Against the backdrop of rapid AI evolution and the ongoing reshaping of governance models, this debate may just be the beginning. How to achieve a new balance between security, efficiency, and freedom will be an important long-term issue facing the future.
The following is the original text:
Over a decade ago, I sat by my father's side as he passed away. Just six months prior, he was a vibrant man, stronger than I am today, cycling faster and with more resilience than most twenty-year-olds. Then one day, he had heart surgery, and he was never the same again. It was as if his soul had been sucked out, the light in his eyes dimmed. Occasionally, he would regain some spirit, the familiar father briefly returning to his aging body, but those moments became increasingly scarce. His thoughts became disjointed, his voice growing fainter.
During those six months, he was in and out of hospitals. On his last day, he was moved into hospice care. He hardly said a word that day. In the final hours of his life, he was almost gone from this world. Lying on the hospital bed, his breathing slowed and his voice faded to nearly nothing; only a disconcerting "death rattle" remained, the sound of a body no longer able to swallow. A body that cannot swallow cannot eat or drink, and in a sense has given up the fight.
My mother and I locked eyes, understanding each other without stating the obvious, without voicing the questions in our hearts. We knew time was running out. Anything said or asked at this moment would not bring useful information; probing would only add to the pain.
I had spoken with him privately more than once. I held his hand, trying to bid him farewell. My mother returned to the room, the three of us hand in hand. In the end, a machine let out a long beep, signaling that he had crossed a certain threshold — an invisible line to those in the room. In the late afternoon of December 26, 2014, my father passed away.
Eleven years and a few days later, on December 30, 2025, my son was born. I had witnessed death, and now I witnessed life. What I learned is this: neither is a momentary event but an unfolding process. Birth is a series of awakenings, death a series of slumbers. It will take my son years to be fully "born," while it took my father six months to truly leave. Some people take decades to slowly fade away.
At some point in my life, I can't pinpoint exactly when, the America we knew began its decline. Like most natural deaths, its causes were complex and intertwined. No single event, crisis, attack, president, political party, law, idea, individual, corporation, technology, mistake, betrayal, failure, misjudgment, or foreign adversary "solely" ushered in the beginning of the end, although all played a part. I don't know at which stage of this process we find ourselves, but I know we are already in the "hospice room." I have long known this, but sometimes I, like all mourners, would engage in self-denial. I refrained from discussing it further, as talking about it often only brings pain.
However, if I didn't acknowledge that we are sitting bedside, I wouldn't be able to complete this writing with the analytical rigor you expect from me today. To honestly discuss the advancement of cutting-edge artificial intelligence and what kind of future we should build, we cannot ignore the fact that the Republic we knew is on its deathbed. Yet, no machine here will sound the final beep for us. We can only watch in silence.
In American history, our Republic has "died" and "been reborn" multiple times. The United States has been through more than one "founding." Perhaps we are now standing at the threshold of another rebirth, opening a new chapter in the country's continuous self-reinvention. I hope so. But it is also possible that we no longer possess enough virtue and wisdom to support a new founding, and the more realistic understanding is that we are slowly transitioning into a post-Republic era of American governance. I do not claim to know the answer.
What follows concerns a confrontation between an artificial intelligence company and the U.S. government. I do not wish to exaggerate: the kind of "death" I describe has been underway for most of my life, while the events in question took place last week and may even be largely resolved within a matter of days.
I'm not saying that this event "caused" the death of the republic, nor am I saying that it "ushered in a new era." If it meant anything, it was simply to make that ongoing decline more evident and harder to deny from my personal perspective. I see last week's event as the Republic's final "death rattle," a sound emitted by a body that has already given up the struggle.
As far as I know, this is what happened: during the Biden administration, the artificial intelligence company Anthropic reached an agreement with the Department of Defense (now called the Department of War, hereinafter DoW) allowing its AI system Claude to be used in classified environments. This agreement was expanded in July 2025 by the Trump administration (full disclosure: I was serving in the Trump administration at the time but was not involved in this deal). While other language models could be used in unclassified settings, until recently classified work involving intelligence analysis, live combat operations, and the like could only use Claude.
The original agreement the Biden team negotiated with Anthropic (noteworthy because several key architects of the Biden administration's AI policy joined Anthropic immediately after leaving office) contained two usage restrictions. First, Claude was not to be used for mass surveillance of Americans. Second, it was not to be used to control lethal autonomous weapons, that is, weapons that operate entirely without human intervention throughout the process of identification, tracking, and engagement. When expanding the agreement, the Trump administration had the opportunity to review these clauses and ultimately accepted them.
Trump officials stated that their change of heart was not driven by any rush to conduct mass surveillance or deploy lethal autonomous weapons, but by a rejection of the very idea of private companies imposing restrictions on military use of technology. This shift in attitude led the government to take policy measures aimed at undermining, or even destroying, Anthropic, a company that may be among the fastest-growing in the history of capitalism and is widely regarded as a global leader in AI, even as the government repeatedly declared AI crucial to the nation's future. But we will come back to this later.
The viewpoint put forth by the Trump administration was not entirely without merit: private companies imposing restrictions on military technology use does sound somewhat off. However, in reality, thousands of private companies are doing just that. Every technology transaction between the military and private enterprise exists in the form of a contract (hence the term "defense contractor"), with contracts typically containing operational restrictions (e.g., "system X shall not be used in country Y," similar to common clauses in Musk's Starlink), technical limitations (e.g., "a certain fighter jet is certified for use under specific conditions"), and intellectual property constraints ("contractors own and can reuse relevant technology IP").
In some respects, Anthropic's terms resembled these traditional restrictions. For example, the company was not opposed to lethal autonomous weapons per se but believed that current cutting-edge AI systems were not yet capable of autonomously determining life and death. This is akin to "fighter jet certification restrictions."
However, the key difference is that the restrictions Anthropic imposed by contract are policy restrictions rather than technical ones. Consider the difference between "this fighter jet is not certified to fly above a certain altitude" and "you are not permitted to fly above a certain altitude." Arguably the military should not accept such terms, and arguably private companies should not propose them. Yet the Biden administration accepted them, and the Trump administration initially did as well, before later changing its mind.
This in itself indicates that such terms are not some absurd transgression. No law says contracts may contain only technical restrictions and never policy restrictions. The contracts were not illegal; at most they were unwise in hindsight. Even if you oppose mass surveillance and lethal autonomous weapons, you may still believe that defense contracts are not the best instrument for achieving those policy goals. Under the Republic's ordinary rules, the way to set new policy is through legislation.
However, "through legislation" is increasingly becoming a joke in contemporary America. If you genuinely wish to achieve a certain outcome, legislation is no longer the preferred path. Governance is becoming more informal and temporary, executive power is expanding, and policy tools are increasingly mismatched with their goals.
The Trump administration cited two concerns for its change of heart: first, that Anthropic might withdraw its services at a crucial moment; second, that as a subcontractor, Anthropic's terms might constrain other military contractors. Add to this that the government views Anthropic as a political adversary (about which they may well be correct), and the military suddenly realized it was relying on a company it did not trust.
The rational approach would have been to cancel the contract and publicly state the reasons, while also avoiding similar situations in the future through regulatory terms. However, the Department of War insisted that the contract must allow for "all lawful purposes" and threatened to designate Anthropic as a "supply chain risk." This designation typically only applies to enterprises controlled by foreign adversaries, such as Huawei. The Secretary of War went further, vowing to prevent all military contractors from having "any business dealings" with Anthropic.
This is almost equivalent to declaring "corporate murder" on a company. Even if the bullets are not necessarily lethal, it is enough to send a signal: do business on our terms, or your business will end.
This touches on a core principle of the American Republic: private property. If the military told Google, "Sell us your global personalized search data, or be designated a supply chain risk," it would be no different in principle from what is happening now. So-called private property becomes merely a resource that can be requisitioned in the name of national security.
This move will raise the capital costs of the entire AI industry, weaken the international credibility of American AI, and may even damage the profitability prospects of the AI industry itself.
With each presidential turnover, American policy-making becomes more unpredictable, rougher, and more arbitrary. It is hard to say at what point an order of liberty simply evaporates.
Even though the Secretary of War has withdrawn the threat, the damage has already been done. The government has made it clear: as long as you refuse to comply, you may be treated as an enemy. This poses a deeper erosion to the American political culture.
More importantly, this is the first genuine public debate about where control over AI should lie. Our public institutions have displayed disorder, malice, and a lack of strategic clarity. The failure of political elites is not a new phenomenon but a theme that has been intensifying over the past two decades: the same as before, but notably worse.
Perhaps the next phase of reconstruction will be closely tied to advanced AI. In shaping the institutions of the future, please do not equate "democratic control" with "government control." The gap between the two has never been as stark as it is today.
Regardless of the future, we must ensure that mass surveillance and autonomous weapons do not erode freedom. I appreciate the AI lab holding the line. In the coming decades, our freedom may be more fragile than we think.
Everyone must choose the future they are willing to fight for or defend. When making that choice, please disregard the noise of the "deathbed rattle" and maintain independent thinking. You are entering a new era of institution-building.
But before that, take a moment to mourn for that once great republic.