
The disconnect here for me is, I assume the DoW and Anthropic signed a contract at some point and that contract most likely stipulated that these are the things they can do and these are the things they can't do.

I would assume the original terms the DoW is now railing against were in those original contracts that they signed. In that case it looks like the DoW is acting in bad faith here: they signed the original contract and agreed to those terms, then they went back and said no, you need to remove those safeguards, to which Anthropic is (rightly so) saying no.

Am I missing something here?

EDIT: Re-reading Dario's post[1] from this morning, I'm not missing anything. Those use cases were never part of the original contracts:

> Two such use cases have never been included in our contracts with the Department of War

So yeah, this seems pretty cut and dried. The DoW signed a contract with Anthropic and agreed to those terms. Then they decided to go back and renege on those original terms, to which Anthropic said no. Then they promptly threw a temper tantrum on social media and designated Anthropic a supply chain risk in retaliation.

My final opinion on this is that Dario and Anthropic are in the right and the DoW is acting in bad faith by trying to alter the terms of their original contracts. And this doesn't even take into consideration the moral and ethical implications.

[1]: https://www.anthropic.com/news/statement-department-of-war




The administration's approach to contracts, agreements, treaties and so on could be summed up as 'I am altering the deal. Pray I do not alter it further.'

The basic problem in our polity is that we've collectively transferred the guilty pleasure of rooting for a charismatic villain in fiction to doing the same in real life. The top echelons of our government are occupied by celebrities and influencers whose expertise is in performance rather than policy. For years now they've leaned into the aesthetics of being bad guys: performative cruelty, committing fictional atrocities, and so forth. Some MAGA influencers have even adopted the Imperial iconography from Star Wars as a means of differentiating themselves from the liberal/democratic adoption of the 'rebel' iconography. So you have influencers like conservative entrepreneur Alex Muse who styles his online presence as an Imperial stormtrooper. As Poe's law observes, at some point the ironic/sarcastic frame becomes obsolete, and you get political proxies and members of the administration arguing for actual infringements of civil liberties, war crimes, violations of the Constitution and so on.


I think it's the other way around. They have always wanted to do those cruel things that have real victims. It took them many years of dedicated, coordinated efforts as they slowly inched many systems to align with their insane ideas. The villain branding is just that - branding. Many of them actually like the 'bad guys' in those stories, especially if those bad guys are portrayed as strong, uncompromising, militaristic, inhumane, and having simple, memorable iconography that instills fear - the more allusions to real life fascists, the better. But that enjoyment follows from their ideology and what they want to do in the world, not the other way around.

And as an aside to this: even the people coopting the rebel iconography are supporting genocide, atrocities and war crimes.

Like, Mark Hamill himself is a massive Israel + Biden supporter [0].

Guys, George Lucas didn't make the Empire thinking about Trump, or Republicans. He made it about America.

0 - https://www.nme.com/news/film/hollywood-stars-sign-open-lett...


Ehh, Hamill's take on Israel is pretty middle of the road and diplomatic[1]: support for the people of Palestine and Israel while not at all supporting the governments of those places.

[1] https://xcancel.com/MarkHamill/status/1725979647991537786?la...


The writeup here[1] was pretty clear to me.

> *Isn’t it unreasonable for Anthropic to suddenly set terms in their contract?* The terms were in the original contract, which the Pentagon agreed to. It’s the Pentagon who’s trying to break the original contract and unilaterally change the terms, not Anthropic.

> *Doesn’t the Pentagon have a right to sign or not sign any contract they choose?* Yes. Anthropic is the one saying that the Pentagon shouldn’t work with them if it doesn’t want to. The Pentagon is the one trying to force Anthropic to sign the new contract.

[1]: https://www.astralcodexten.com/p/the-pentagon-threatens-anth...


I just wish there was a stronger source on this. I am inclined to agree with you and the source you cited, but unfortunately:

> [1] This story requires some reading between the lines - the exact text of the contract isn’t available - but something like it is suggested by the way both sides have been presenting the negotiations.

I deal with far too many people who won't believe me without 10 bullet-proof sources but get very angry with me if I won't take their word without a source :(


That's a fair point, but I think Dario's quote in GP corroborates ACX's story quite well:

> "Two such use cases have never been included in our contracts with the Department of War..."


> "Two such use cases have never been included in our contracts with the Department of War..."

While I agree with Anthropic's position on this regardless, the original contract wording does matter in terms of making either the government look even more unreasonable or Anthropic look a little less reasonable.

The issue is a subtle ambiguity in Dario's statement, "...have never been included in our contracts", because it leaves two possibilities: 1. those two conditions were explicitly mentioned and disallowed in the contract, or 2. they weren't in the contract itself and are instead disallowed by Anthropic's Terms of Service, with ToS compliance being a condition of the contract (which would be typical).

If it's the latter, then it matters whether the ToS disallowed those two uses at the time the original contract was signed, or whether the ToS was revised since signing. Anthropic is still 100% in the right if the ToS disallowed these uses at the time of signing and the ToS was an explicit condition of the contract, since contracts often loop in the ToS as a condition while not precluding the ToS being updated.

However, if the ToS was updated after contract signing and Anthropic added or expanded the wording of those two provisions, then the DoD, IMHO, has a tiny shred of justification to complain and stop using Anthropic. Of course, going much further and banning the entire US government (and contractors) from using Anthropic for any use, including all the ones where these two provisions don't matter - is egregiously punitive and shitty.

While the contract wording itself may be subject to NDA, it would be helpful if Anthropic's statements could be a bit more precise. For example, if Dario had said "have always been disallowed in our contracts" this ambiguity wouldn't exist.


It does not matter. If Anthropic had been precise in this narrow way, there would have been some other nitpick to raise.

You're trying desperately to find a way that things can be at least a little normal, and I really do get it. It would be great if such a way existed. But it doesn't. I recommend you take a social media break like I'm about to, take the time you need to mourn the era of normal politics, and come back with a full understanding that the US government is not pursuing normal policy objectives with bad decisions. They hate you and they hate me for not being on their side, and their primary goal is to ensure that we're as miserable as they can make us.


I'm in a weird spot where I do agree with your assessment of the core claim. But putting that aside, in the world where the DoW's claim _is_ correct -- I think you don't have any choice other than to designate them a supply chain risk.

Disregarding who is right or wrong for a moment, if the DoW are right (which I'm not personally inclined to believe, but we're ignoring that for the moment) -- how else can they avoid secondhand Claude poisoning?

Supposing they really want to use their software for things disallowed by Claude's (now or future) ToS, it seems like designating it a supply chain risk is the only way they can ensure that their contractors don't include Claude (either indirectly as a wrapper, or tertiarily through use of generated code, etc.).


> designating it a supply chain risk is the only way they can ensure that their contractors don't include Claude

I agree that if the DoW claim is correct (and I doubt it is), then, sure, the DoW dropping Anthropic and precluding the DoW's suppliers from using Anthropic for any DoW work would be expected. However, the "supply chain risk" designation they are deploying goes far beyond that to block Anthropic use by any supplier to any part of the entire U.S. government for anything.

For example, no one at Crayola can use Anthropic for anything because Crayola sells crayons to the Education Dept. The DoW already has much less draconian ways to restrict what their direct suppliers use to build things for military applications. But instead of addressing the actual risk in a normal, measured way, they are choosing to use a nuke against a grenade-sized problem. This "supply chain risk" designation is rarely used and has never before been used against a U.S. company. It's used against Chinese or Russian companies in cases where there's credible risk of sabotage or espionage, which is why that particular designation always blocks all products from an entire company, for any application, by any part of the U.S. government and its contractors and suppliers.


One positive thing I will say about this administration is that they have really drawn into focus the difference between de jure and de facto law.

My hope is that this gets us some real concern for things that have been defended with de facto arguments (i.e. privacy) going forward.

edit: Anthropic argues that your Crayola analogy is fundamentally incorrect.

> Legally, a supply chain risk designation under 10 USC 3252 can only extend to the use of Claude as part of Department of War contracts—it cannot affect how contractors use Claude to serve other customers.

https://www.anthropic.com/news/statement-comments-secretary-...


> Anthropic argues that your Crayola analogy is fundamentally incorrect.

Yes, I just saw Dario's latest post with that more detailed info. My understanding was informed by news reporting in a couple of different outlets, but those reports may have been conflating the "supply chain risk" designation (under 10 USC 3252) with the net effect of statements from the Pentagon and White House, which go substantially further.

Even if it's not in the legal scope of 10 USC 3252, the administration has made clear they intend to ban Anthropic from use across the federal government. AFAICT doing that is probably within the discretionary remit of the executive branch, even though I believe it's unprecedented - to your point about de jure and de facto law.

To me, if there's a silver lining to all this, it's making a strong case for restricting executive branch power.

Edit to add: Per the Wall Street Journal's lead story (updated in the last hour): "The General Services Administration, which oversees federal procurement, said it is removing Anthropic from its product offerings to government agencies... Even absent the supply-chain risk designation, broadening the clash to include all federal agencies takes the Anthropic fight to a much larger scale than its spat with the Pentagon."


How would this risk be mitigated by signing a contract? "Supply chain poisoning as treason" is probably not going to be stopped by a piece of paper. You either trust Anthropic or you don't, but the deal has nothing to do with it.

Isn't the point that they aren't entering into a contract with them, they are just ensuring that none of their still trusted suppliers repackage Anthropic without their knowledge?

I’m not sure, but I think you’re right. I was thinking about the logical implications of the designation: if they are a supply chain risk without a contract, how does the existence of a contract suddenly make them not a risk? Especially if the DoD strong-arms them into a deal.

Because the act that the SCR designation would “protect” against is treason, so I don’t think people would care too much whether there’s a contract.


Also, Trump's own words complaining about being forced to stick to Anthropic's terms of service:

> The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution.


His M.O. is to accuse his opponent of the very thing he is doing. It’s the party of bad-faith.

[flagged]


In this case, do you really believe that we should trust an EA less than this administration? EA as bad people is a stereotype; corruption, fraud, and breaking the law is the standard MO for this administration.

(Or maybe it’s catchier to respond glibly with “never trust a child rapist and convicted felon.”)


Not comparing. Sometimes, there are 2 bad apples.

In this case, the choice is between the two apples, so I’d pick the one less obviously rotten. Sadly that is the current administration that operates in pure lawlessness.

This administration needs the benefit of the doubt always. This administration deserves the benefit of the doubt never.

Those people are dealing with you in bad faith, and you need to cut them off before they try to overthrow your government again.

Yeah, that should have been in the contract too -- no using our software to overthrow the government or to implement a fascist state.

I think a big question mark here is whether anything said on Anthropic's side is in the framing of "we have something going on that we are trying to communicate around, where a canary notice, if one existed, would no longer be updated".

It isn't about commercial agreements, it's about patriotism. The national industry is supposed to submit to the military's wishes to the extent that they get compensated. Here it's a question of virtue.

The Pentagon feels it isn't Anthropic's place to set boundaries on how their tech is used (for defense). Since it can't force its will, it bans doing business with them.


If anthropic is saying “you can use our models for anything other than domestic spying or autonomous weapons” and the pentagon replies “we will use other models then”, I'd say Anthropic are the patriots here...

I like the endless consideration for spying on allies. Or wait...

One battle at a time

I'm guessing you're being down voted because people don't know if you think that's a good thing or not. I do not think it's a good thing. Do you?

I absolutely do not think that's a good thing. Was stating some sad facts.

I had the same thing happen to me when I posted about how unbridled capitalism relies on externalized costs in the form of pollution and whatnot. I didn't make it clear that I thought it was a terrible truth.

Once the hive decides you're being serious without checking, they turn the downvote button into an "I disagree with you" button.

This is actually one of the reasons I left Reddit. I hate to see it here.


It likely helps to take in the cultural moment or context around the statements or the nature of the statements you're making. It's fine to state a fact but it's also helpful to make it clear whether you are saying "it is what it is " or "I wish things were different" or "I am doing X, Y, and Z to try and help and I recommend others do so". Jokes are an exception and I think misunderstandings are fine there. But it's unreasonable to think that on the Internet, people will "check to see if you are serious".

The comment was serious. It didn't feel the need to take a side.

The DoD declaration reflects a certain context: we had the Patriot Act, a whistleblower exiled in Russia for defending the Constitution, etc. We didn't need to wait for a MAGA movement to expect such a statement from the DoD.

If hackernews threads turn into mouthpieces for opinions then we have no use posting anything in here.

The comments are naively claiming commercial agreements make Anthropic right, as if contracts had more weight than the Constitution.

I would rather call out a "virtue signalling" entity in the valley simply standing for something aligned with civil liberties, and using it as a political stance in what nobody would deny is an unfortunately polarized political climate.

What to make of OpenAI then. Should I give my opinion that they took a falsely constitutional stance, or simply made for-profit move to land a juicy government contract, while making the public think they kept the same red lines as their main competitor?

Or just stick to the fact: the DoD will, as always, get away with its liberticidal demands to get what it wants, because the other big tech companies will fall in line.


[flagged]


Personally, I'd like to do everything in my power to make nationalists feel unwelcome on this site. (But I think OP was merely being descriptive.)

[flagged]


Bravo. It does take real courage to bully people anonymously while safely posting from your mom's basement.

I fully acknowledge that it doesn't take much courage to bully people anonymously on HN. I don't claim to have any deep well of courage in real life either - many of my friends were already radicalized against OpenAI for other reasons, I don't expect to face professional consequences for being angry about this, and I might not be so willing to go scorched earth if either of those weren't true. Just wanted to explain where the world is at and why people should expect to see further incivility about this.

What's your definition of "patriotism" and why do private companies need to be "patriotic"? How do you reconcile this with the Constitutional guarantees of freedom of speech, freedom of association, and so on?

The US isn't Iran, North Korea, or even China, as much as some people, including the US president, seem to want to emulate those models.


>The national industry is supposed to submit to the military's wishes to the extent that they get compensated.

According to whom?


He's reading the room.

No, not this room. The one with Hegseth in it.

Look at his other comments. He's not wrong.


No one cares if the Pentagon refuses to do business with Anthropic. But Hegseth has declared that, effective immediately, no one else working with the DoD can either - which includes the companies hosting Anthropic's models (Amazon, Microsoft, and Alphabet).

So it's six months to phase out use of Anthropic at the DoD, but the people hosting the models have to stop "immediately".

Which miiight impact the amount of inference the DoD would be able to get done in those six months.


> So it's six months to phase out use of Anthropic at the DoD, but the people hosting the models have to stop "immediately".

> Which miiight impact the amount of inference the DoD would be able to get done in those six months.

Which might not be an accident, looking at the Truth Social posts stating "Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow."

I would not be surprised to see this being used as an excuse to nationalize Anthropic.


To attempt to nationalize Anthropic. I'm sure there would be court cases filed almost immediately, restraining orders, months of cases and then appeals and then appeals of the appeals.

I think you were downvoted due to your use of "patriotism" (specifically without scare quotes) because that word is usually used with an intended positive connotation. So the reader gets the impression that you think that submitting to the DoD’s wishes is how things ought to be.

Regardless of the original contract, it's entirely appropriate for a vendor to tell the customer how to use any materials.

Imagine a _leaded_ pipe supplier not being allowed to tell the department of war they shouldn't use leaded pipes for drinking water! It's the job of the vendor to tell the customer appropriate usage.


This is quite literally the norm for things with known dangerous use cases.

Go look at the packaging on a kitchen knife: it says not to be used as a weapon.


Playing devil's advocate: if I did in fact grab one of my kitchen knives to defend myself against a violent intruder into my kitchen, I wouldn't expect to be banned from buying kitchen knives.

I'm not sure this is still a useful analogy, though...


And if you grabbed the knife and went on a violent spree, I'd absolutely expect the knife manufacturer to refuse to sell to you anymore.

The knife manufacturer isn't obligated to sell to you in either case, I'd expect them not to cut ties with you in the self defence scenario. But it is their choice.


The knife manufacturer would be more than happy to continue to sell to you, except for that minor little detail that you're in jail.

Any knife vendor who

1. Found out you used their knives to go murdering

2. Sells knives in a fashion where it's possible for them to prevent you from buying their knives (i.e. direct to consumer sales)

Would almost certainly not "be more than happy to continue to sell to you". Even if we ignore the fact that most people are simply against assisting in murders (which by itself is a sufficient justification in most companies), the bad PR (see the "found out" and "direct to consumer" part) would make you a hugely unprofitable customer.


Meh. Not sure why knife dealers would be assumed to be more moral than firearms dealers. See, e.g. Delana v. CED Sales (Missouri)

> the bad PR (see the "found out" and "direct to consumer" part) would make you a hugely unprofitable customer.

That... Doesn't happen.

Boycotts by people who weren't going to buy your product anyway are immaterial to business. The inevitable lawsuits are costly, but are generally thought of as good publicity, because they keep the business name in the news.


People who buy luxury kitchen knives are exactly the type of people who would choose not to buy a product because it is associated with crime.

People who buy (and make) firearms are... pretty close to the exact opposite.


So now it's "luxury" kitchen knives?

Goalposts moved.


Direct to consumer sales of kitchen knives are entirely luxury products... the goalposts are exactly where they've always been.

Ahhh, direct to consumer.

Where either it's a computer program (website) that knows nothing about you, or Cutco.

If you think you wouldn't find a Cutco representative to sell to you, you're on some good reality-altering drugs.


sotto voce the knives are a metaphor

Doesn't matter.

There will always be some company willing to sell to even the worst person, in any product category.

The response that companies have to boycotts, and the results of the boycotts themselves, are fractally chaotic at best.

But even most nominally socially-aware companies are reactive, rather than proactive.


Since the knife vendors were metaphors for AI vendors, is the comparison you want to make "AI vendors & weapons manufacturers"? That's the standard we should judge them by?

It's not about the standard we should judge them by, which is equivalent to how we think they should act.

It's about how we think they will act.

Especially when it comes to sales to the US military, I have no expectations about how companies will act.

Hell, just look at how many companies willingly helped China with their Great Firewall.


> Not sure why knife dealers would be assumed to be more moral than firearms dealers

What I mean is that you _did_ judge them by a standard used for weapons manufacturers. How you react to their actions _is_ your judgement.

But perhaps that is the standard we should use. Weapons manufacturing is a well regulated industry after all. Export controls, dual-use technology restrictions, if it has applications for warfare it should be appropriately restricted.


> is that you _did_ judge them by a standard used for weapons manufacturers.

I think any of these companies will attempt to get away with whatever the fuck they can.

That has fuckall to do with your rhetorical question of:

> That's the standard we should judge them by?


If I shoot someone, something that is explicitly warned against in firearm safety materials that come with every purchase of a new firearm, I am no longer allowed to purchase any more firearms.

There are many situations in which you can shoot someone and still be allowed to buy a gun.

Also, in the cases you can't, it's generally the government stopping you, not the gun companies.


That's for a different reason though--you broke the law.

The specific shape of a kitchen knife would make it a particularly poor fighting knife, and knives in general are bad for self defense, due to the potential for them to be turned against the user. So, there is a good argument that such a suggestion is really in the user's best interest rather than a cynical play for the manufacturer to limit liability.

These knife and lead analogies don't map well to the reality of AI. Note: just talking about the analogy itself not the point you are making.

Edit: hell I get downvoted and look where the knife analogy got us. A load of weird replies miles away from anything related to AI or DoD.


I agree. I hoped people would get my point, but instead are arguing about gun laws for some reason?

You should give it longer than an hour before you start complaining about downvotes. Or just let your comment stand on its own.

Seconded. You can't see all the up and down votes, only the balance at the moment you look, and it's not too uncommon to be negative or even dead and be upped or vouched back to life later.

No it isn't. There are warnings, but once a knife is yours you are free to do whatever you want with it, including reselling it to someone else. The idea of terms of service of using something is not something that typically exists with physical objects that one can own. They can't take your knife away from you because you decided to use it for a medical purpose without purchasing a medical license for the knife.

They also have other vendors.

Claude Opus is just remarkably good at analysis IMO, much better than any competitor I’ve tried. It was remarkably good and thorough at helping me with some health issues I’ve had in the past few months. Now imagine turning that kind of analytical power toward observing the behaviour of American citizens, and perhaps changing it - making them vote a certain way. Or something like finding terrorists, or finding patterns that help you identify undocumented people.


Or how to best direct the power of the military against the US civilian population. They keep trying.

I have used ChatGPT 5.2 Thinking for health; Gemini hallucinates a lot, especially with DNA analysis. I've never tried the new Claude, even though I have access through Antigravity. Might give it a try. Do you have any tips on how to approach it for health "analytical power"?

I just made a project, added all my exams (they were piling up - my psychiatrist and I had been investigating this for a year to no avail), and started talking to it about my symptoms.

Within a few iterations it gave me a simple blood panel to do. I did that one, and it kept suggesting more simple lab or at-home tests, and we kept going through them until I was reasonably certain of “something”. Now that I have a hypothesis, I am going to a doctor. I think it’s done a great job. I also kept asking it for simple lifestyle interventions to prevent progression of my issue, and it consistently nailed them - one particular intervention (adding salt to water and drinking it to prevent symptoms) made a huge improvement to my life. I was barely working before that.

I added some text in the instructions box (the project master prompt) telling it: that it’s not medical advice and I am aware of that (prevents excessive guardrails); to add confidence intervals and probabilities to all diagnostic statements (prevents me + Claude going into rabbit holes so easily - it often has 70-80% certainty of what it’s saying, but without the right language that isn’t clear); and that it was talking to a non-expert, so use simple language but go into detail when necessary. I also asked it to stop doing unnecessary constant follow-up questions to every answer, as that causes me anxiety. I can share the prompt; in fact I might do so later as it might be useful to others.


Here is the prompt and a few notes on operation.

Make sure your first chat is about the exams in the project files, and make sure it reads them all. It has a tendency to read a few and go “is this good?”. Ask for a summary and note any absences.

Try using the research and extended thinking features a lot if you think it’s not fully aware of anything. It might not be aware of more recent research. If it’s a serious condition you are researching, just ask it to do sweeps / use research to look for new info about it and find new papers. It might also deepen its understanding.

After you do research you can make a simple artefact and throw it onto the project files. That allows it to refer to it and gain more knowledge about a condition or issue that might not be as rich in the training data.

So, I find GPT to be so bad for this that it made me realise a bit of why the USG is so insistent. Claude Opus is just in a different class.

Here’s the master project prompt:

Act as an expert who’s talking to an interested layman. Engage in detail when requested but be overall succinct in your answers. Short sentences are fine, no need to be lengthy. Do deep research. When arriving at any kind of conclusion or hypothesis, assign it a probability and a confidence interval - define this in percentages, as in “90%”.

On Artefacts - all artefacts should be just text and markdown. Never do anything more complicated with formatting, unless by explicit request.

Don't ask follow-up questions unless it's to make for a better diagnosis. I.e., don't keep asking questions just to keep the conversation going, please. But never hesitate to ask questions if it makes for better outcomes.
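For anyone wanting to use a master prompt like this outside the Claude web UI, a minimal sketch of wiring it into the official `anthropic` Python SDK follows. The project "instructions box" maps onto the `system` parameter of the Messages API. This is an illustration only: the condensed prompt text, the helper name `build_request`, and the model id are my assumptions, not the commenter's setup, and you'd need your own API key to actually send the request.

```python
# Sketch: passing a "master prompt" as the system prompt for the
# Anthropic Messages API. We only build the request kwargs here
# (no network call), so it runs without an API key.

# Condensed version of the prompt shared above (assumption: wording adapted).
MASTER_PROMPT = (
    "Act as an expert talking to an interested layman. "
    "Be succinct; engage in detail when requested. "
    "Assign every conclusion or hypothesis a probability and a confidence "
    "interval, stated in percentages (e.g. \"90%\"). "
    "Artefacts are plain text/markdown only. "
    "Only ask follow-up questions when they improve the diagnosis."
)

def build_request(user_message: str) -> dict:
    """Build keyword arguments for anthropic.Anthropic().messages.create()."""
    return {
        "model": "claude-opus-4-1",  # illustrative model id
        "max_tokens": 1024,
        # Project-style instructions go in `system`, not in `messages`.
        "system": MASTER_PROMPT,
        "messages": [{"role": "user", "content": user_message}],
    }

req = build_request("Summarize the lab results in my files and note any absences.")
print(req["messages"][0]["role"])  # → user
```

With the real SDK you would then call `anthropic.Anthropic().messages.create(**req)`; keeping the instructions in `system` is what makes them persist across every turn, the way project instructions do in the UI.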


Yep. Choosing not to renew a contract with a provider who has voluntarily excluded itself from your use case is respecting that provider's choice and acting accordingly.

The thing is nobody is saying the government is bad for not renewing the contract. Like it or not, that's definitely the administration's prerogative.

What we're seeing here is that when a vendor declines to change the terms of its contractual agreement for ethical reasons, the government publicly attacks it.


Perhaps for ethical reasons, but a stated reason by Anthropic is technical: "But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons."

With the other stated reason being legal: "To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI."

I don't think we should lessen Anthropic's stance from technical/legal to ethical, just as we shouldn't describe what the Department of War is doing as merely "not renewing a contract".


Not in software though. Clear precedent has been established via EULAs. Software companies set the rules and if users don't like, they can piss off. I don't see why it would be any different for the government.

I'm not a fan of EULAs; I think if you acquire some software anonymously and run it on your own systems you should be able to do whatever you want. However, if you want software hosted on someone else's machines, or want to enter into a contractual relationship with them, then government or not, you should not have the right to compel work from them.

A lot of things are different when it comes to national security, and military.

Congress could pass an act if it's in the national interest.

The military isn't the typical End User.


Congress could, but didn't. Instead, the federal government made threats to retaliate if Anthropic doesn't comply.

Agreed, they haven't, and it's hard to see them voting in favour. But there are precedents: the Patriot Act was more radical than a potential mandate for AI providers to prioritize national security.

Depending on the country, their legal value is limited: https://en.wikipedia.org/wiki/End-user_license_agreement#Enf...

The government is armed and can exempt itself from prosecution either by judicial means and/or by naked force. So it isn’t just a cut and dry licensing problem.

Because it's the government? Companies need to follow the rules the government sets, whether they like it or not.

The government cannot set arbitrary rules, it has to follow the law. (And, at least with a functioning separation of powers, it cannot change the law arbitrarily.)

Um. No, that's not how it works...

> Regardless of the original contract, it's entirely appropriate for a vendor to tell the customer how to use any materials.

Utter nonsense. When the US built the Blackbird, it could only use titanium because of the heat involved in traveling at that speed. But the US didn't have enough titanium, so it created front companies to purchase titanium from the Soviet Union.

Do you think the US should have informed the Soviet Union what it wanted to do with the metal?


What does the customer informing the vendor have to do with the vendor informing the customer?

Your comparison seems backwards


I don't believe they can change the name to Department of War without an act of Congress. It remains the DoD.

Yes, it's officially still the Department of Defense.

If this were a news outlet writing "Department of War" I would be concerned. But in the case of the Anthropic CEO's blog post, I can understand why they're picking their fights.


I first read about the DoW in a post by Anthropic and thought it was some kind of jab at the government.

It's a silly shibboleth, but I automatically ignore anyone who calls it the Department of War or Gulf of America. Hasn't steered me wrong yet. They're telling me they're the kind of people who only care about defending fascism.

I call it the Department of War, because I think such a rename is a great self-own on their part.

There will be no fighting in the war room!

I think it's worth giving people a tiny bit of grace on this. I've surprised people by explaining that the "Department of War" is just fascist fanfic and that the legal name has not changed.

It's a testament to the broken information ecosystem we're in that many people genuinely don't know this. Most will correct themselves when told. I agree with you that those who don't are not worth engaging.


Google Maps calls it Gulf of America, pretty difficult to ignore Google.

Only in America, in the rest of the world Google calls it "Gulf of Mexico (Gulf of America)".

Don't deadname the Gulf!

Gulf of Amy


I ignore Google quite easily. Besides, as soon as Trump is out they will change the name back.

Because Google are bootlickers.

They literally complied with this request immediately and without question.

I would not defend all of Google's decisions in the Trump era, but complying immediately with politicized name changes has always been the status quo. Even in healthy democracies, the precise names of geographic features can be extremely controversial, and no sane company wants to get in a debate with the Japanese government about the real names of various islands.

It's almost like the democratically elected government gets to decide the name, not Google!

It's almost like the democratically elected Congress gets to decide the name, not the President!

(Spoiler: it's still legally called the Gulf of America)


People like democratically elected governments... until it's not their side.

Well I think we have an actor congress

He is just a symptom. The problem is far deeper and more severe than just him.

They can, however, rename their Twitter/X accounts and vacate the @SecDef handle, which seems to be up for grabs now, if anyone wants to do the funniest thing...

I tried to grab @SecDef just now, they appear to have it blocked/internally reserved

Huh. Maybe they just do that automatically when a verified account renames itself, to keep the old one reserved? Who knows.

I got a "something went wrong" error and then it auto assigned me @SecDef48372 or something similar.

Sad.


Of all the stupid shit this regime has done, this is the most sane.

They want the department to fight wars. At least they’re being honest.


Except they don’t, because fighting a war requires congressional approval.

No, fighting a war requires only engaging in international armed conflict.

Declaring a war requires Congress, and fighting a war other than in response to an invasion may be illegal under US law if Congress has not exercised its power to declare war. But that doesn't prevent wars from happening; it just makes it illegal (though the only actual remedy is impeachment) for the President to wage war without authorization. And in any case, that's largely moot, because Congress has exercised that power in an open-ended (in terms of when and against whom) but limited (in authorized duration of any particular action without subsequent authorization) manner via the War Powers Act. That gives every President since Nixon a blank check to start wars with full legal authority. Congress then gets an opportunity to vote to pull support from forces already in combat, and everyone hopes the enemy already engaged is willing to treat the war as over; that is the only after-the-fact constraint.


Given today’s new war, I think it’s clear he can start a war whenever he wants

Of all the silly things that Trump did, I think this one is the most reasonable. This has always been a department of war. Calling it defense was propaganda.

Calling it Department of The Armed Forces or Department of Military would be neutral. Putting War in the name is as propaganda-like as Defense.

After it was changed from DoW the first time (in 1947), it was called the National Military Establishment (NME). They renamed it in 1949, potentially because "NME" said aloud sounds like "Enemy"

Gulf of America and department of war are nothing but propaganda and dick measuring. Prove me wrong please.

the entire administration negotiates in bad faith. literally every agreement they sign whether it's international trade or corporate contracts is up to the whim of a toddler with twitter

You pretty much nailed it. I can't even get outraged at any given instance now that the trendline is so staggeringly clear.

I can't see any way this ends well for the US. I say this as both an American and a military veteran.


Never in history has an authoritarian ceded power without massive violence.

The dissolution of the USSR was not massively violent.

Frederick VII of Denmark, an absolute monarch, introduced parliamentarism without any violence or even broad public pressure.

And that's just what I can remember without digging.


And they don’t think anything through. If they do this then Amazon, Google and the rest will need to terminate their involvement with Anthropic. Trump will be getting a call from some Wall Street bigwigs imminently and it’ll get rolled back, I bet.

Alternately, they COULD terminate their involvement with the pentagon.

Contract law will certainly be a casualty once the rule of law has completely broken down. I don't understand why the business sector isn't pushing back more. Surely they must all know that the legal context itself, within which they all operate, is at mortal risk, and that business as usual will vanish once autocratic capture is complete.

They still think they can bribe their way out

My main takeaway from all of this is that Hegseth seems deeply unfit for his job. First there was the Signal leak and now this.

Look, Anthropic is not going to be designated a supply chain risk. 80% of the Fortune 500 have contracts with them. Probably a similar percentage of defense contractors. Amazon is a defense contractor for example. They'd have to remove Claude from their AWS offerings. Everyone running Claude on AWS, boom gone. The level of disruption to the US economy would be off the charts, and for what? Why? Because Hegseth had a bad day? Because he's a sore loser?

If he's decided he doesn't like the DoW's contract then he can cancel it, fine. To try and exact revenge on the best American frontier model along with 80% of the Fortune 500 in the process, to go out of his way to harm hundreds or perhaps thousands of American firms, defies all reason. This is behavior you would expect any adult would understand as petty and foolish, let alone one who's made it to the highest ranks of government.

So I think it's just not going to happen; Trump's statement on the matter notably didn't mention a supply chain risk designation. This suggests to me that Hegseth went off half-cocked. The guy is a liability for Trump at this point, and I'm guessing he won't last much longer.


> Everyone running Claude on AWS, boom gone. The level of disruption to the US economy would be off the charts

seriously? :)


> then they went back and said no, you need to remove those safeguards to which Anthropic is (rightly so) saying no.

So one thing to call out here is that the assumption that the DoW is working on specifically these use cases is not bulletproof. They simply may not want to share with Anthropic exactly what they're working on, for natsec reasons. Even /we can't tell you/ could violate the terms.

It is also dumb that the DoW accepted these terms in the first place.


Is this about a publicly available model or a private model? For a publicly available model like Opus 4.6, bad actors can do whatever they want and Anthropic won't know. If this is only about a private custom model, designating the public model as a supply chain risk doesn't make sense, since others can use it.

It's the Department of Defense.

[1] "only an act of Congress can formally change the name of a federal department." https://en.wikipedia.org/wiki/Executive_Order_14347

(edited to add the url I omitted)


Only Congress can declare war and Congress has the "power of the purse".

"You can just do things" (evil edition).


Contracts typically have escape clauses, especially for govt work.

They will just have to recompete!


Yeah, but in Might v Right, well, there’s only ever one victor.

With this administration, after all their proven lies, when in doubt, assume bad faith on their part. Assuming good faith at this point is Lucy and Charlie Brown and the football, but now the football is fascism (i.e., state control of corporations, e.g., what Trump administration is doing here).

Trump has historically stiffed his contractors. Why do you think his administration would be any different with adhering to a contract?


If anyone is the epitome of arrogance, it is Hegseth.

No doubt the US government will be using AI to perform automated military strikes without human supervision, and to spy on US citizens (which it has already been doing for decades now).

Look no further than the case of patriot Mark Klein, a former AT&T technician who exposed a massive NSA surveillance program in 2006, revealing that AT&T allowed the government to intercept, copy, and monitor massive amounts of American internet traffic. Klein discovered a secret, NSA-controlled room (Room 641A) inside an AT&T facility in San Francisco, which acted as a splitter for internet traffic.


It’s the Department of Defense

I assume those agreements were signed before the current fascist regime took over the US government, and now they want to upend the terms of said agreement to allow more fascism into the aforementioned contract.

You nailed it.

It's so fishy. I spent the morning reading Sam's AMA and it's a classic whitewashing act. OpenAI claims their setup is stronger and that the DoW has agreed to their red lines, but read the agreement below: it only says use in compliance with laws and executive orders.

Anthropic wouldn't have walked away from a multi-million-dollar contract if its two red lines could be respected. OpenAI, on the other hand, is a fast, willing, and ready company. I would love to see Anthropic's proposed contract.

In our agreement, we protect our red lines through a more expansive, multi-layered approach. We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections. This is all in addition to the strong existing protections in U.S. law.

We believe strongly in democracy. Given the importance of this technology, we believe that the only good path forward requires deep collaboration between AI efforts and the democratic process. We also believe our technology is going to introduce new risks in the world, and we want the people defending the United States to have the best tools.

Our agreement includes:

1. Deployment architecture. This is a cloud-only deployment, with a safety stack that we run that includes these principles and others. We are not providing the DoW with “guardrails off” or non-safety trained models, nor are we deploying our models on edge devices (where there could be a possibility of usage for autonomous lethal weapons).

Our deployment architecture will enable us to independently verify that these red lines are not crossed, including running and updating classifiers.

2. Our contract. Here is the relevant language:

The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols. The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities. Per DoD Directive 3000.09 (dtd 25 January 2023), any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing to ensure they perform as intended in realistic environments before deployment.

For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose. The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.


[flagged]


It's not recent news that Anthropic has (had?) DoD contracts. This is a lot of words to write while seeming ignorant of basic facts about the situation.

The argument isn't that nobody knew Anthropic had DoW contracts. The argument is that there's a difference between "publicly known if you follow defense-tech procurement" and "trending on social media where Anthropic's core audience is now actively discussing it." Both can be true simultaneously.

A fact being technically available and that fact commanding widespread public attention are very different things. Anthropic's communications team understands this distinction even if you don't find it interesting. The blog post wasn't written for people who already track federal AI contracts, it was written for the much larger audience encountering this story for the first time and forming opinions about it in real time.

If the point you're making is just "I already knew this," that's fine, but it doesn't address anything about the incentive structure behind the public response.


This is an interesting perspective, but I think the fallout from sticking to his guns here is probably greater than the public blowback he would receive from serving the DoD. Without this specific sticking point, the public would know that Anthropic was serving the DoD, but not what specifically the model was being used for, and it would be difficult to prove it wasn't something relatively innocuous.

> if the directive had never been made public, would that blog post exist?

You're ignoring the sequence of events on the ground.

If there hadn't been any internal pushback from Anthropic, would the directive ever have been made public?


That's a fair point about sequencing, but it actually reinforces the argument rather than undermining it. If Anthropic pushed back internally, and that pushback is what led to the directive going public, then Anthropic had every reason to anticipate that this would become a public story. Which means the blog post wasn't a spontaneous act of transparency, it was a prepared response to a foreseeable escalation. That's more strategic rather than less so.

Internal pushback and public damage control aren't mutually exclusive. A company can genuinely disagree with a client's demands behind closed doors and simultaneously craft a public narrative designed to make itself look as good as possible once those disagreements surface. In fact, that's exactly what competent communications teams do, they plan for the scenario where private disputes become public, and they have messaging ready.

The real question isn't who went public first or why. It's whether Anthropic's stated position, "we support these military use cases but not those ones", reflects a durable ethical framework or a line drawn precisely where it needed to be to keep both the contracts and the brand intact. Nothing in the sequencing you've described answers that question. It just tells us Anthropic saw this coming, which, if anything, means the messaging was more carefully engineered, not less.


I already suspected the first comment was by an LLM, but deleted that from my reply as it didn't feel like a productive accusation. However, with "that's a fair point" as an opener, plus the sheer typing speed implied by replies, and the way that individual sentences thread together even as the larger point is incoherent, I'm now confident enough to call it.

I actually use assistive voice transcription as I am unable to type well with a keyboard.

[Edit: update]

I use assistive voice transcription because I'm unable to type well with a keyboard. But I'd point out that "you must be an AI" has become the new way to dismiss an argument without engaging with it. It's the modern equivalent of "you're just copy-pasting talking points", it lets you discard everything someone said without addressing a single word of it.

The fact that my sentences "thread together" is not evidence of anything other than coherent thinking. And speed of response says more about the tools someone uses than whether a human is behind them. Plenty of people use dictation, accessibility tools, or just happen to type fast.

^^^ This took me 30 seconds to speak aloud.


Ok, good to have that explanation. Your larger point, though, remains incoherent. Whether Anthropic saw this coming has nothing to do with the substance of the conflict here and is very much not "the real question".

Thanks. I saw everybody responding as if there might be at least a modicum of gravitas there, and thought I was suffering a stroke, or was pulled into another dimension.

I was pondering the same thing and to me the answer is a contractor sold something to the DoD and Anthropic pulled the rug out from under that contractor and the DoD isn't happy about losing that.

My speculation is the "business records" domestic surveillance loophole Bush expanded (and that Palantir is built to service). That's usually how the government double-speaks its very real domestic surveillance programs. "It's technically not the government spying on you, it's private companies!" It's also why Hegseth can claim Anthropic is lying. It's not about direct government contracts. It's about contractors and the business records funnel.


Yes, I assumed a mass surveillance Palantir program also. Interesting take on how it allows them to claim “we are not doing this” while asking Anthropic to do it.

Of course they can just say - we aren’t, Palantir is.




