Anthropic Open Letter: The Hypocritical Sam Altman, Master Manipulator

By: BlockBeats | 2026/03/05 18:00:01
Original Article Title: Read Anthropic CEO's Memo Attacking OpenAI's 'Mendacious' Pentagon Announcement
Original Article Author: The Information
Translation: Peggy, BlockBeats

Editor's Note: Just hours before OpenAI announced an AI partnership with the Pentagon, the Pentagon had abruptly terminated its collaboration with Anthropic, citing Anthropic's insistence on safety terms. Anthropic CEO Dario Amodei subsequently sent an unusually strongly worded internal memo to employees, dismissing OpenAI's claimed "security mechanisms" as largely "security theater" and questioning its stance on autonomous weapons and mass surveillance.

In this roughly 1,600-word email, Amodei not only revealed details of the negotiations with the U.S. defense establishment but also took direct aim at OpenAI CEO Sam Altman, accusing him of using PR narratives to obscure the true structure of the deal. The controversy, spanning AI military applications, safety red lines, and political relationships, is pushing the disagreements between the two Silicon Valley AI giants into the open.

Below is the original text:

I want to be very clear about the information OpenAI is currently putting out and the hypocrisy within it. This is their modus operandi, and I hope everyone sees it for what it is.

While much remains unknown about the contract they signed with the Department of War (DoW) (perhaps even to them, since the contract terms are likely quite vague), a few things are clear. Based on the public descriptions from Sam Altman and the DoW (the contract text would be needed to confirm definitively), their arrangement is roughly this: the model itself carries no use restrictions beyond legality, i.e., "all legal purposes," and alongside that sits a so-called "security layer." This "security layer," as far as I can tell, is essentially a model refusal mechanism designed to stop the model from completing certain tasks or being used for certain applications.

The so-called "security layer" may also refer to the schemes that partners (such as Palantir, Anthropic's commercial partner for serving U.S. government clients) have tried to sell us during negotiations: a classifier or machine learning system that purports to allow some applications while blocking others. There are also indications that OpenAI would have staff (FDEs, i.e., Forward Deployed Engineers) oversee how the model is used to prevent improper applications.

Our overall assessment: these measures are not completely ineffective, but in a military context they are roughly 20% real protection and 80% security theater.

The root of the problem is that whether a model is being used for mass surveillance or for fully autonomous weapons often depends on broader context. The model does not know what kind of system it sits inside; it does not know whether humans are "in the loop" (the key question for autonomous weapons); and it does not know the provenance of the data it is analyzing: for example, whether it is U.S. domestic data or foreign data, data provided by companies with user consent, or data bought through gray-market channels, and so on.

Anyone who works on safety has long known this: model refusal mechanisms are not reliable. Jailbreaks are common, and often all it takes to bypass these restrictions is to lie about the nature of the data.
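
To make the failure mode concrete, here is a minimal, purely illustrative sketch (hypothetical names and checks, not our system or anyone else's): a gate that can only act on caller-declared context is defeated by nothing more than relabeling that context.

```python
# Purely illustrative sketch: a toy "security layer" that can only act on what the
# caller declares about a request. It cannot see the surrounding system, whether a
# human is in the loop, or where the data actually came from.

BLOCKED_PURPOSES = {"domestic surveillance", "autonomous weapons targeting"}
BLOCKED_SOURCES = {"bulk-purchased US person data"}

def security_layer(request_text: str, declared_purpose: str, declared_source: str) -> bool:
    """Return True if the request is allowed through to the model."""
    # The check only sees caller-supplied labels, never ground truth.
    if declared_purpose in BLOCKED_PURPOSES:
        return False
    if declared_source in BLOCKED_SOURCES:
        return False
    return True

task = "Build movement profiles and political-leaning scores from this location dataset."

# Honest labeling gets blocked...
print(security_layer(task, "domestic surveillance", "bulk-purchased US person data"))  # False
# ...but the identical task passes once the caller simply lies about the context.
print(security_layer(task, "logistics research", "licensed commercial dataset"))       # True
```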

There is a key difference here that makes this harder than ordinary safety protection: whether a model is helping conduct a cyberattack can often be inferred from its inputs and outputs, but determining the nature of the attack and the surrounding context is an entirely different matter, and that is precisely the judgment required here. In many cases it is extremely difficult, if not impossible.

The "security layer" that Palantir pitched to us (I think they also pitched similar solutions to OpenAI) is even worse. Our assessment is that this is almost entirely a form of security theater.

Palantir's basic pitch seems to be: "There may be some dissatisfied employees at your company; you need to give them something to appease them, or to make what is happening invisible to them. That is exactly the service we provide."

As for having Anthropic or OpenAI employees directly oversee deployments: we discussed this internally a few months ago in the context of applying our acceptable use policy (AUP) in expanded classified environments. The conclusion was clear: this approach works in only a small number of cases. We will do our best, but it is by no means a reliable core safeguard, and it is especially hard to implement in classified settings. For what it's worth, we are indeed doing our best here, and in this respect we are no different from OpenAI.

So my point is this: the measures OpenAI has adopted fundamentally cannot solve the problem.


The reason they accept these solutions and we do not comes down to this: they are focused on appeasing employees, while what we actually care about is preventing misuse.

These measures are not worthless; we use some of them ourselves. But they fall far short of the required safety bar. At the same time, the Department of War has treated OpenAI and us in plainly inconsistent ways.

In fact, we tried to include safety clauses similar to OpenAI's in the contract (as a supplement to the AUP, which we consider the more important part), but the Department of War refused. The evidence is in the email chain from that time; as I am swamped at the moment, I may ask a colleague to dig up the exact wording later. So the claim that "OpenAI's terms were offered to us and we rejected them" is false; so is the claim that "OpenAI's terms would effectively prevent mass domestic surveillance or fully autonomous weapons."

Furthermore, Sam's and OpenAI's statements imply that our proposed red lines, namely fully autonomous weapons and mass domestic surveillance, are already illegal, which would make the corresponding usage policies redundant. That claim tracks the Department of War's position almost word for word, which makes it look pre-coordinated.

But that does not match the facts.

As we explained in yesterday's statement, the Department of War does indeed have the authority to conduct domestic surveillance. In the past, in the absence of AI, the impact of these authorities was relatively limited, but in the AI era, their significance is entirely different.

For example, the Department of War can lawfully purchase large volumes of Americans' private data from vendors (who typically obtain resale rights through obscure consent language buried in user agreements), then use AI to analyze that data at scale: building profiles of citizens, assessing political leanings, and tracking real-world movements. The data they can access even includes GPS location information.

One more point to note: as negotiations neared conclusion, the Department of War proposed that if we removed a specific mention of "analysis of bulk acquired data" from the contract, they would be willing to accept all our other terms. And this happens to be the only clause in the contract that precisely addresses the scenario we are most concerned about. We find this very suspicious.

On autonomous weapons, the Department of War claims that keeping "humans in the loop" is a legal requirement. It is not. It is a Biden-era Pentagon policy mandating human involvement in weapon-launch decisions, and it can be unilaterally changed by the current Secretary of Defense, Pete Hegseth, which is exactly what worries us. In practical terms, it is not a real constraint.

OpenAI and the Department of War have put out extensive PR spin on these issues, either lying or deliberately obfuscating. These facts reveal a pattern of behavior, one I have seen from Sam Altman many times. I hope everyone recognizes it.

This morning he first signaled alignment with Anthropic's red lines. The point was to appear supportive of us, to take some credit, and to deflect criticism for taking over the contract. He also tried to cast himself as someone who wants to "establish a unified contract standard for the entire industry": a peacemaker and a dealmaker.

But behind the scenes, he is signing contracts with the Department of War, preparing to replace us the moment we are labeled a supply chain risk.

At the same time, he has to make sure this does not look like "Anthropic held the line while OpenAI caved." He can pull that off because:

First, he can sign up for all the "security theater" measures we rejected, and the DoW and its partners are willing to play along, packaging those measures credibly enough to appease his staff.

Second, the DoW is willing to accept some terms from him that it rejected when we proposed them first.

Those two things are what allow OpenAI to make a deal that we could not.

The DoW and the Trump administration dislike us for real reasons: we did not donate to Trump (OpenAI and Greg Brockman did); we did not shower Trump with sycophantic praise (Sam did); we support AI regulation, which cuts against their policy agenda; we choose to tell the truth on many AI policy issues (e.g., AI's impact on jobs); and we held the line instead of putting on "security theater" to placate employees.

Sam is now trying to recast all of this as: we are hard to work with, we are adversarial, we are inflexible, and so on. I hope everyone recognizes this as classic gaslighting.

The vague charge that "someone is difficult to work with" is often used to paper over the real, uglier reasons: the political donations, the political loyalty, and the security theater I just described.

Everyone needs to understand this and push back against this narrative when privately speaking with OpenAI staff.

In other words, Sam is undermining our position under the guise of "supporting us." I want everyone to stay alert to this: by weakening public support for us, he is making it easier for the government to penalize us. I also suspect he may be fanning the flames behind the scenes, although I have no direct evidence of that yet.

On the public and media front, this rhetoric and manipulation tactic seems to have backfired. Most people view OpenAI's deal with the Department of War as concerning, if not alarming, while seeing us as the principled party (by the way, we are now number two on the App Store download charts).

[Note: Subsequently, Claude rose to number one on the App Store.]

Of course, this narrative has found an audience among some fools on Twitter, but that is not what matters. What I really care about is making sure it does not take hold among OpenAI's own employees.

Because of selection effects, they are already a group that is relatively easy to persuade. Still, pushing back on the narratives Sam is peddling to our own staff remains crucial.

[Original Article Link]
