Statement from Dario Amodei on our discussions with the Department of War (anthropic.com)

2818 points by qwertox a day ago

lebovic a day ago

I used to work at Anthropic, and I wrote a comment on a thread earlier this week about the RSP update [1]. It's enheartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.

Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are, even if the company is pragmatic about achieving their goals. I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) they think is a net negative in the long term. (Many others, too, they're just well-known.)

That doesn't mean that I always agree with their decisions, and it doesn't mean that Anthropic is a perfect company. Many groups that are driven by ideals have still committed horrible acts.

But I do think that most people who are making the important decisions at Anthropic are well-intentioned, driven by values, and are genuinely motivated by trying to make the transition to powerful AI to go well.

[1]: https://news.ycombinator.com/item?id=47145963#47149908

whstl 13 hours ago

> Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are

After 20 years of everyone in this industry saying "we want to make the world a better place" and doing the opposite, the problem here is not really related to people's "understanding".

And before the default answer kicks in: this is not cynicism. Plenty of folks here on HN and elsewhere legitimately believe that it's possible to do good with tech. But a billion dollar behemoth with great PR isn't that.

dust42 12 hours ago

Exactly. At this level you don't just put out a statement of your personal opinion. This is run through PR and coordinated with the investors. Otherwise the CEO finds himself on the street by tomorrow. Whatever their motives are, it is aligned with VC, because if it is not then the next day there is another CEO. As the parent stated, this is not cynicism. I see this just rather factual, it is simply the laws of money.

GorbachevyChase 10 hours ago

heresie-dabord 11 hours ago

qdotme 12 hours ago

Lutger 7 hours ago

vladms 7 hours ago

> everyone in this industry

So in the last 20 years there is nothing good coming out of the software industry (if this is the industry you mention) ?

I find it somehow ironic, because this type of generalization is for me the same issue that some of the people saying "they want to make a better place" have: accept reality is complex.

There were huge benefits for society from the software industry in the last 20 years. There were (as well!) huge downsides. Around 2000 lots of people were "Microsoft will lock us in forever". 20 years later, the fear "moved" to other things. Imagining that companies can last forever seems misguided. IBM, Intel, Nokia and others were once great and the only ones but ultimately got copied and pushed from the spotlight.

whstl 7 hours ago

amunozo 13 hours ago

I don't even think both things are contradictory. People that put too much value in their ideals tend to oversee the consequences of such ideals in real life and do wrong without deviating an inch from their ideals.

plufz 13 hours ago

hsuduebc2 12 hours ago

mcv 7 hours ago

Exactly. I'd love to believe that at Anthropic, idealism trumps money. But Google was once idealistic too. OpenAI was too. It's really hard to resist the pull of money. Especially if you're a for-profit corporation, but OpenAI wasn't even that at first.

OtherShrezzing 13 hours ago

I think most people are conscious that, irrespective of a founders vision, company morals usually don't survive the MBA-inisation phase of a company's growth.

qdotme 12 hours ago

whstl 11 hours ago

j45 10 hours ago

5o1ecist 4 hours ago

> not related to people's "understanding".

Except for the understanding that it's foolish to believe anything that sounds too good to be true. Yes, believing that people who want to make money/achieve positions of power, also want to make the world a better place, is absolutely foolish. Ridiculously foolish.

Aperocky 9 hours ago

Reminds me of Effective Altruism and the collective results of people claiming to believe in that virtue.

tyingq 8 hours ago

I don't think it's cynical to acknowledge the pattern that publicly owned companies will eventually cave to the desires of their shareholders.

I understand Anthropic is not public, but I assume there's an IPO coming.

jug 3 hours ago

This is a component for sure, but also think of why Anthropic was born. It exists because of disagreements with OpenAI on the values of AI safety and principles.

lebovic 5 hours ago

I don't think it's cynical to believe that a company can make the world a worse place, or that Anthropic as a company will make many horrible choices.

I do think it's cynical to believe that people, and groups of people, can't be motivated by more than money.

personjerry 4 hours ago

At some point I've wondered if "fiduciary duty", when pushed to highest corporate levels, always conflicts with "make the world a better place"

i.e. Fiduciary Duty Considered Harmful

wartywhoa23 11 hours ago

Cynicism is the newspeak substitute for sincerity, no need to worry about being called a cynic in this post-truth world of snowflakes.

puppymaster 10 hours ago

and that's okay. so we judge them one decision at a time. So far, Anthropic is good in my book.

tristor 3 hours ago

> Plenty of folks here on HN and elsewhere legitimately believe that it's possible to do good with tech. But a billion dollar behemoth with great PR isn't that.

To expand on that a bit, many of us (myself included) fully believe founders set out with lofty and good goals when organizations are small. Scale is power, and power corrupts. It's as simple as that. It's an exceptionally rare quality to resist that corruption, and everyone has a breaking point. We understand humans because we are humans, and we understand that large organizations, especially corporations, are fundamentally incapable of acting morally (in fact corporations are inherently amoral).

lm28469 15 hours ago

Idk man, from the outside anthropic looks a lot like openai with a cute redisgn and Amodei like Altman with a slightly more human face mask, the same media manipulation, the same vague baseless affirmations about "something big is coming and we can't even describe it but trust us we need more money"

UqWBcuFx6NV4r 15 hours ago

> the same vague baseless affirmations about "something big is coming and we can't even describe it but trust us we need more money

This is pretty low on my list of moral concerns about AI companies. The much more concerning and material things include things like…what this thread is actually meant to be about.

VCs don’t need me to feel sorry for them if their due diligence is such that they’re swindled by a vague claim of “something being around the corner”, nor do they need yours. You aren’t YC.

dudefeliciano 14 hours ago

Even just the fact that Amodei is publicly bringing up these issues, rather than doing behind closed doors deals with the Department of Defense (yes that's still the official name), is more than Altman has done for AI safety.

dudefeliciano 14 hours ago

Even just the fact that Amodei is publicly bringing up these issues, rather than doing behind closed doors deals with the Department of Defence (yes that's still the official name), is more than Altman has done for AI safety.

skyberrys 14 hours ago

Don't you always need more money though? I am a chip designer and I can tell you I am resource intensive to employ. I want access to plenty of expensive programs and data. With more money comes better tools and frequently better tools leads to the quality results you want to deliver to the customer.

lm28469 14 hours ago

District5524 13 hours ago

They both work in the same market but they have pretty different careers and understandings. I simply can't believe why on Earth would people choose Altman over Amodei to trust in these kind of pretty important questions. This is not about who is the more savvy investor maximizing shareholder value. I personally don't care whose company grows bigger or goes bust first, OpenAI or Anthropic. The real stakes are different, and Amodei is better suited to be trusted in his decision. Unfortunately, the best choices do not seem to fit well with either the federal political climate or the mainstream business ethics in Silicon Valley. Not that our opinion would matter...

Keyframe 13 hours ago

viking123 10 hours ago

ori_b 12 hours ago

kseniamorph 10 hours ago

disagree. at least i can see the quality of research coming out of Anthropic, which tells me these people are interested in what they're doing. i don't see this level of scientific rigor in OpenAI

rhubarbtree 15 hours ago

There should be a name for this, “cynic cope: when someone actually takes a principled view the cynic - who has a completely negative view of the world - is proven to be wrong, can’t accept it, and tries to somehow discount it.

marxisttemp 14 hours ago

jama211 14 hours ago

Good for you? You’re just talking about vibes. Vibes are a baseless thing to go on.

lm28469 14 hours ago

heresie-dabord 10 hours ago

> how driven by ideals many folks at $Corporatron are

Well let's see... it says in the post:

    * worked proactively to deploy our models to the Department of War and the intelligence community. 

    * the first frontier AI company to deploy our models in the US government’s classified networks, 

    * the first to deploy them at the National Laboratories, and 

    * the first to provide custom models for national security customers. 

    * extensively deployed across the Department of War and other national security agencies

    * offered to work directly with the Department of War on R&D to improve the reliability of these systems

    * accelerating the adoption and use of our models within our armed forces to date.

    * never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.

wrsh07 8 hours ago

They didn't claim to have pacifist ideals

In fact, they claim to be pro America and pro democracy and have repeatedly expressed concerns about autocratically governed countries.

Just because you disagree with their ideals doesn't mean they're not holding to theirs

leshow 7 hours ago

mikkupikku 8 hours ago

Lots of people driven by ideals work for the US military. Not me, ever, but other people certainly.

neom 21 hours ago

I've had so much abuse thrown at me on here for saying this very thing over the last few years. I used to be friends with Jack back in the day, before this AI stuff even all kicked off, once you know who people really are inside, it's easy to know how they will act when the going gets rough. I'm glad they are doing the right thing, but I'm not at all surprised, nor should anyone be. Personally I believe they would go to jail/shut down/whatever before they do something objectively wrong.

bobsomers 18 hours ago

> I used to be friends with Jack back in the day, before this AI stuff even all kicked off, once you know who people really are inside, it's easy to know how they will act when the going gets rough.

This sounds quite backwards to me. It's been abundantly clear in today's times that, in fact, you only really know who somebody really is when they're under stress. Most people, it seems, prefer a different facade when there is nothing at stake.

neom 17 hours ago

klodolph 17 hours ago

deadbishop 14 hours ago

bahmboo 17 hours ago

Not all of us know who Dario, Jared, Sam and Jack are. Some clarification is helpful. That's all, no hidden agenda!

neom 17 hours ago

lebovic 17 hours ago

arduanika 17 hours ago

taurath 19 hours ago

> it's easy to know how they will act when the going gets rough

Even if you went to burning man and your souls bonded, you only know a person at a particular point in time - people's traits flanderize, they change, they emphasize different values, they develop different incentives or commitments. I've watched very morally certain people fall to mania or deep cynicism over the last 10 years as the pillars of society show their cracks.

That said, it is heartening to know that some would predict anyone in Silicon Valley would still take a moral stance. But it would do better if not the same day he fires 4000 people to do the "scary big cut" for a shift he sees happening. I guess we're back to Thatcherisms, where "There Is No Other Option" to justify our conservatism.

coffeemug 16 hours ago

vintermann 14 hours ago

michaelhoney 18 hours ago

rl3 19 hours ago

ajyey 19 hours ago

This is insanely naive

parasubvert 14 hours ago

noduerme 16 hours ago

The nature of evil is that it's straight down the road paved with good intentions.

monster_truck 20 hours ago

[flagged]

mondrian 8 minutes ago

Late comment but I think this is probably a naive business strategy for an American company. Amodei seems to underestimate how much the US economy operates on relationships, connections and reputation. Granted this admin is really aggressive, but if Anthropic is marked a supply chain risk, they're screwed because virtually every US enterprise is a downstream contractor. And in lieu of B2B and government, they lack a direct-to-consumer moat. I commend his apparent assumption that the US market competes on capabilities (also betrayed by his predictions that AI will quickly destroy the white-collar class) but the reality is less an open free market and more a complex web of entrenched relationships. And going back to his prediction that AI will destroy the white collar class, this is where the bulk of inter- and intra-entity relationships live. In an economy driven by relationship moats, why would a CEO sever his relationships in exchange for a better tool?

imjonse 16 hours ago

> I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values,

I am sure you think they are better than the average startup executive, but such hyperbole puts the objectivity of your whole judgement under question.

They pragmatically changed their views of safety just recently, so those values for which they would burn at the stake are very fluid.

versteegen 14 hours ago

> They pragmatically changed their views of safety just recently, so those values for which they would burn at the stake are very fluid.

Yes it was a pragmatic change, no it was not a change in their values. The commentary here on HN about Anthropic's RSP change was completely off the mark. They "think these changes are the right thing for reducing AI risk, both from Anthropic and from other companies if they make similar changes", as stated in this detailed discussion by Holden Karnofsky, who takes "significant responsibility for this change":

https://www.lesswrong.com/posts/HzKuzrKfaDJvQqmjh/responsibl...

> I strongly think today’s environment does not fit the “prisoner’s dilemma” model. In today’s environment, I think there are companies not terribly far behind the frontier that would see any unilateral pause or slowdown as an opportunity rather than a warning.

> What I didn’t expect was that RSPs (at least in Anthropic’s case) would come to be seen as hard unilateral commitments (“escape clauses” notwithstanding) that would be very difficult to iterate on.

MichaelDickens 4 hours ago

nla 9 hours ago

Yea, that Sam only does this because, "he loves it." They're not in it for the money.

lebovic 5 hours ago

Aperocky 9 hours ago

drawfloat 12 hours ago

"Mass surveillance of anywhere else in the world but America" is not the great idealistic position you are making it out to be.

yunnpp 20 hours ago

It's good to be driven by ideals, but: https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_wha...

I think avg(HN) is mostly skeptical about the output, not that the input is corrupt or ill-meaning in this case. Although with other companies, one can't even take their claims seriously.

And in any case, this is difficult territory to navigate. I would not want to be in your spot.

eternauta3k 12 hours ago

Come On, Obviously The Purpose Of A System Is Not What It Does

https://www.astralcodexten.com/p/come-on-obviously-the-purpo...

Peritract 11 hours ago

i_love_retros 10 hours ago

Driven by ideals? Yeah right. That first paragraph he says they work with the department of defense to protect us from authoritarianism. What?! You are working with an authoritarian regime you cynical fuck. Getting paid by them. And now you act all virtuous because you won't make autonomous weapons.

GardenLetter27 13 hours ago

Anthropic doesn't want us to have the right to run open weight models on our own computers. They were never the good guys.

u1hcw9nx 13 hours ago

What I read is: Anything not open source, open weight, is evil.

I disagree. The concept of nuance, putting things in context, is the source of all good in internet discussions.

GardenLetter27 12 hours ago

ozgung 12 hours ago

The problem with companies, you see, is that they are a separate entity than their founders, shareholders or current leadership. A Company has no soul or unchangeable intentions. Claude’s SOUL.md is just an IP that can be edited at any time.

snickerbockers 20 hours ago

>I's enheartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.

I'm concerned that the context of the OP implies that they're making this declaration after they've already sold products. It specifically mentions already having products in classified networks. This is the sort of thing that they should have made clear before that happened. It's admirable (no pun intended) to have moral compunctions about how the military uses their products but unless it was already part of their agreement (which i very much doubt) they are not entitled them to countermand the military's chain of command by designing a product to not function in certain arbitrarily-designated circumstances.

jsnell 18 hours ago

Where are you getting that from?

The article is crystal clear that these uses are not permitted by the current or any past contract, and the DoW wants to remove those exceptions.

> Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now

It also links to DoW's official memo from January 9th that confirms that DoW is changing their contract language going forwards to remove restrictions. A pretty clear indication that the current language has some.

snickerbockers 16 hours ago

zaptheimpaler 15 hours ago

This is all just completely wrong. Anthropic explicitly stated in their usage use of their products is not permitted in mass-surveillance of American citizens and fully automated weapons, in the contract that DoW signed. Anthropic then asked DoW if these clauses were being adhered to after the US’ unlawful kidnapping of Maduro. DoW is now attempting to break the contract that they signed and threatening them because how dare a company tell the psycho dictators what to do.

oceanplexian 6 hours ago

snickerbockers 12 hours ago

cue_the_strings 9 hours ago

Don't attribute to ideals what is simple self-preservation.

No sane person wants to become a legitimate military target. They want to sleep in their own beds, at home, without risking their families lives. Just like the rest of us.

FeloniousHam 7 hours ago

> Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are, even if the company is pragmatic about achieving their goals.

Jonah Goldberg (speaking of foreign policy): "you've got to be idealistic about the ends and ruthlessly realistic about means."

bambax 16 hours ago

This last development is much to the honor of Anthropic and Amodei and confirms what you're saying.

What I don't get though is, why did the so-called "Department of War" target Anthropic specifically? What about the others, esp. OpenAI? Have they already agreed to cooperate? or already refused? Why aren't they part of this?

roughly 15 hours ago

> What I don't get though is, why did the so-called "Department of War" target Anthropic specifically?

Because Anthropic told them no, and this administration plays by authoritarian rules - 10 people saying yes doesn’t matter, one person saying no is a threat and an affront. It doesn’t matter if there’s equivalent or even better alternatives, it wouldn’t even matter if the DoD had no interest in using Anthropic - Anthropic told them no, and they cannot abide that.

rhubarbtree 15 hours ago

D_Alex 16 hours ago

I'm a bit underwhelmed tbh. Here is Anthropic's motto:

"At Anthropic, we build AI to serve humanity’s long-term well-being."

Why does Anthropic even deal with the Department of @#$%ing WAR?

And what does Amodei mean by "defeat" in his first paragraph?

jazzyjackson 16 hours ago

parasubvert 14 hours ago

moozooh 10 hours ago

Synthpixel 16 hours ago

Anthropic can serve its models within the security standards required to handle classified data. The other labs do not yet claim to have this capability.

Even if they do, I assume the other labs would prefer to avoid drawing the ire of the administration, the public, or their employees by choosing a side publicly.

bambax 14 hours ago

tpm 13 hours ago

Anthropic is already cooperating with the DoD, presumably fulfilling all the conditions and the DoD likes their stuff so much it wants to use it more broadly, so they want to change the terms of the agreement(s). Anthropic disagrees on some points; DoD wants to force them to agree.

zer0gravity 7 hours ago

The probability is high that major AI development companies are already using an AI instance internally for strategic and tactical decisions. The State power institutions, especially intelligence, are now having a real competitor in the private sector.

Yizahi 12 hours ago

Exactly which values they are "going to burn at a stake for"? Making as many people homeless as they can in the shortest possible time? Befuddling governments and VCs into creating an insane industry-wide debt which would either lead to a "success" in replacing jobs or an industry-wide crisis? Or maybe a value of stealing intellectual property of every human on the planet under the guise of "fair use" and then deliberately selling the derivative product? Or the value of voluntarily working with "national security customers" when it suits them financially and crying foul when leopards bite their faces? Or the value of ironically calling a human replacement machine "anthropic" as in "for humanity"?

Yeah, I totally see Anthropic execs defending them to their last dollar in the wallet. Par for the course for megacorps. It's just I personally don't value those values at all.

bertylicious 14 hours ago

"They're driven by values" is meaningless praise unless you qualify what these values are. The Nazis had values too, you know. They were even willing to die for them. One of the core values of the Catholic church is probably compassion. Except for the victims of sexual abuse perpetrated by their clergy.

So what core values led "Dario, Jared, and Sam" to work with a government that just tried to rename the DoD to "department of war" and is acting aggressively imperialist in a way like the US hasn't in a long time.

And who exactly are these "autocratic adversaries" they are mentioning? Does this list include the autocrats the US government is working together with?

lebovic 13 hours ago

Yeah, values on their own don't lead to positive outcomes. I agree that many groups that are driven by ideals have still committed horrible acts.

I do think that they're acting with positive intent, though, and are motivated by trying to make the transition to powerful AI go well.

Many folks on HN seem to assume the primary motivation is purely chasing more money, which certainly isn't the case for for many – but not all – people at Anthropic.

That doesn't guarantee a good outcome, and there's still a hard road ahead.

jghn 7 hours ago

> to rename the DoD to "department of war"

The very fact that they referred to it as the Department of War instead of Defense tells me that they're still bootlickers, and just trying to put a good spin on things.

marxisttemp 14 hours ago

Careful speaking truth to power on this site, remember that YC is deeply enmeshed with Garry Tan, Peter Thiel, and of course Paul Graham who as of late has made a habit of posting right wing slop on his Twitter

viking123 10 hours ago

> And who exactly are these "autocratic adversaries" they are mentioning?

Anyone that Israel doesn't like

DeepSeaTortoise 13 hours ago

> Except for the victims of sexual abuse perpetrated by their clergy.

I honestly wonder how much of this is made up. Given the size of whole organization and it holding onto its weird priciples regarding the personal relationships of its members (introduced in the far past to limit the secular power of its clergy), there certainly will be SOME cases.

But in the one case a frater, who I knew, got convicted, he definitely didn't do it. He was accused by several independent former students and even some of the staff backed the students claims with first hand accounts of him having been alone with some of the students at the time. This supposedly happened on a trip with tight schedules, so all accounts and stated times were quite specific, even in the pre-smartphone era.

The only problem: He wasn't with the group at that time at all. I screwed up embarrassingly (and the staff, too, leaving a young student stranded in the middle of nowhere) and he thought he could slip out, come pick me up and nobody (but maybe me with him) would get in trouble over it. Turned out he forgot refueling, both of us stayed at a pastor's guest house and he called the group telling them, that they should go ahead without us and that we would drive to the event directly on our own. The supposed abuse was claimed to have happened at another short stay of the group where they spent a day visiting some mine before joining with us again.

Almost 3 decades later he got railroaded in court, me learning about it in the news.

bertylicious 11 hours ago

comandillos 14 hours ago

To me this is just another marketing stunt where the company wants to build a public image so their customers trust them (see Apple), but then as always who knows what will happen behind the scenes. Just see when most major US companies had backdoors on their systems providing all data to the NSA, i.e. PRISM.

lonelyasacloud 13 hours ago

>just another marketing stunt

What evidence on _Amodei_ and his actions leads to that conclusion?

moozooh 10 hours ago

yayr 13 hours ago

There are well intentioned people everywhere, also at Google or OpenAI...

https://notdivided.org

But the final decisions made usually depend on the incentive structures and mental models of their leaders. Those can be quite different...

andoando 7 hours ago

The world running on a few powerful mens ideals is a problem in itself.

didip 7 hours ago

I like the enthusiasm, but remember that Google used to be: “Don’t be Evil”

synergy20 9 hours ago

just curious, what about other regions and countries who have no such restrictions to develop their weapons? there is no world treaty on this yet, even there is one, not everyone will follow behind the doors.

dpweb 18 hours ago

I wouldn't underestimate this as a good business decision either.

When the mass surveillance scandal, or first time a building with 100 innocent people get destroyed by autonomous AI, the company that built is gonna get blamed.

nmfisher 18 hours ago

As a complete outsider, I genuinely believe that Dario et al are well-intentioned. But I also believe they are a terrible combination of arrogant and naive - loudly beating the drum that they created an unstoppable superintelligence that could destroy the world, and thinking that they are the only ones who can control it.

I mean if you sign a contract with the Department of War, what on Earth did you think was going to happen?

themacguffinman 18 hours ago

Not this, because this is completely unprecedented? In fact, the Pentagon already signed an Anthropic contract with safe terms 6 months ago, that initial negotiation was when Anthropic would have made a decision to part ways. It was totally absurd for the govt to turn around and threaten to change the deal, just a ridiculous and unprecedented level of incompetence.

pjc50 13 hours ago

jazzyjackson 16 hours ago

baq 16 hours ago

sebzim4500 10 hours ago

I don't think the US has ever done/threatened anything like this to a US company so it's not surprising that Anthropic were caught off guard.

windexh8er 9 hours ago

> I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) they think is a net negative in the long term. (Many others, too, they're just well-known.)

This is a nice strawman, but it means nothing in the long run. People's values change and they often change fast when their riches are at stake. I have zero trust in anyone mentioned here because their "values" are currently at odds with our planet (in numerous facets). If their mission was to build sustainable and ethical AI I'd likely have a different perspective. However, Anthropic, just like all their other Frontier friends, are accelerating the burn of our planet exponentially faster and there's no value proposition AI doesn't currently solve for outside of some time savings, in general. Again, it's useful, but it's also not revolutionary. And it's being propped up incongruently with its value to society and its shareholders. Not that I really care about the latter...

psychoslave 7 hours ago

People uttering the organizational decisions in for profit companies are money driven first. Otherwise they would try to be champion of a different kind of org.

Everyone try to make changes move so it goes well, for some party. If someone want to serve best interest of humanity at whole, they don't sell services to an evil administration, even less to it's war department.

Too bad there is not yet an official ministry of torture and fear, protecting democracy from the dangerous threats of criminal thoughts. We would be given a great lesson of public relations on how virtuous it can be in the long term to provide them efficient services, certainly.

PeterStuer 14 hours ago

As an insider, do you think this is Altman playing his infamous machiavellian skills on the DoD?

jwlarocque 18 hours ago

Oh hey Noah

Glad to hear you say some moral convictions are held at one of the big labs (even if, as you say, this doesn't guarantee good outcomes).

fergie 13 hours ago

> I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) they think is a net negative in the long term.

Sure, but what happens when the suits eventually take over? (see Google)

MichaelZuo a day ago

How do you reconcile the fact that many people in Anthropic tried to hide the existence of secret non-disparagement agreements for quite some time?

It’s hard to take your comment at face value when there’s documented proof to the contrary. Maybe it could be forgiven as a blunder if revealed in the first few months and within the first handful of employees… but after 2 plus years and many dozens forced to sign that… it’s just not credible to believe it was all entirely positive motivations.

sowbug 21 hours ago

Saying an entity has values doesn't mean the entity agrees with every single one of your values.

MichaelZuo 21 hours ago

learingsci 7 hours ago

I remember when people said the exact same thing about Google. Youth is wasted on the young.

whatever1 20 hours ago

Let us think how OpenAI responded to this.

amunozo 15 hours ago

I just see here is nationalism. How can they claim to be in favour of humanity if they're in favour of spying foreign partners, developing weapons, and everything that serves the sacred nation of the United States of America? How fast do Americans dehumanize nations with the excuse of authoritarianism (as if Trump is not authoritarian) and national defence (more like attack). It's amazing that after these obvious jingoist messages, they still believe they are "effective altruists" (a idiotic ideology anyway).

Aeolun 15 hours ago

It’s not like other countries do not do this. They’re just not so prone to virtue signaling as in the US.

amunozo 13 hours ago

gylterud 14 hours ago

moozooh 10 hours ago

protocolture 17 hours ago

>I's enheartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.

Their "Values":

>We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.

Read: They are cool with whatever.

>We support the use of AI for lawful foreign intelligence and counterintelligence missions.

Read: We support spying on partner nations, who will in turn spy on us using these tools also, providing the same data to the same people with extra steps.

>Partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.

Read: We are cool fully autonomous weapons in the future. It will be fine if the success rate goes above an arbitrary threshold. Its not the targeting of foreign people that we are against, its the possibility of costly mistakes that put our reputation at risk. How many people die standing next to the correct target is not our concern.

Its a nothingburger. These guys just want to keep their own hands slightly clean. There's not an ounce of moral fibre in here. Its fine for AI to kill people as long as those people are the designated enemies of the dementia ridden US empire.

HDThoreaun 17 hours ago

Their values are about AI safety. Geopolitically they could care less. You might think its a bad take but at least they are consistent. AI safety people largely think that stuff like autonomous weapons are inevitable so they focus on trying to align them with humanity.

protocolture 16 hours ago

orbital-decay 8 hours ago

ExoticPearTree 10 hours ago

vasco 16 hours ago

marxisttemp 14 hours ago

tpoacher 13 hours ago

> But I do think that most people who are making the important decisions at Anthropic are well-intentioned, driven by values, and are genuinely motivated by trying to make the transition to powerful AI to go well.

in which case, these people will necessarily have to be the first to go, I suppose, once the board decides enough is enough.

Refusing to do things that go against "company values" even if they risk damaging the company, isn't exceptional circumstances; it's the very definition of "company values".

But if those values aren't "company" values but "personal" values, then you can be sure there's always going to be someone higher up who isn't going to be very appreciative once "personal" values start risking "company" damage.

xvector 13 hours ago

Shareholders do not control Anthropic's board, it is not structured like a typical corporation.

SecretDreams 18 hours ago

> Many groups that are driven by ideals have still committed horrible acts.

Sometimes, it's even a very odd prerequisite.

toddmorrow 7 hours ago

you're suffering from Stockholm syndrome

peyton 10 hours ago

“AI chips are like nuclear weapons” (paraphrasing [1]) and “I should be in charge of it” (again paraphrasing) is just not a serious position regardless of intentions.

[1]: https://www.axios.com/2026/01/20/anthropic-ceo-admodei-nvidi...

yowayb a day ago

I've thought the same about a few of my founders/executives.

"You either die the good guy or live long enough to become the bad guy"

The "bad guy" actually learns that their former good guy mentality was too simplistic.

JohnMakin a day ago

I have hit points in this in my career where making a moral stand would be harmful to me (for minor things, nothing as serious as this). It's a very tempting and incentivized decision to make to choose personal gain over ideal. Idealists usually hold strong until they can convince themselves a greater good is served by breaking their ideals. These types that succumb to that reasoning usually ironically ending up doing the most harm.

Fricken a day ago

Ever since I first bothered to meditate on it, about 15 years ago, I've believed that if AI ever gets anywhere near as good as it's creators want it to be, then it will be coopted by thugs. It didn't feel like a bold prediction to make at the time. It still doesn't.

nurbl 12 hours ago

yamal4321 13 hours ago

seeing the comment: "people who are making the important decisions at Anthropic are well-intentioned, driven by values"

which is left under the article: "Statement from Dario Amodei on our discussions with the Department of War"

:)

keybored 10 hours ago

As a complete bystander I put so incredibly little weight to what friends and former employees think about the persons and figureheads behind tech companies that aim to change the world.

Why would I care. All people with at least some positive or negative notoriety have friend and associates that will, hand to their heart, promise that they mean well. They have the best intentions. And any deviations from their stated ideals are just careful pragmatic concerns.

Road to Hell and all that.

_s_a_m_ 12 hours ago

We will see..

roseinteeth 18 hours ago

The road to hell is paved by good intentions and all that

txrx0000 19 hours ago

> I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) they think is a net negative in the long term. (Many others, too, they're just well-known.)

I very much doubt it judging by their actions, but let's assume that's cognitive dissonance and engage for a minute.

What are those values that you're defending?

Which one of the following scenarios do you think results in higher X-risk, misuse risk, (...) risk?

- 10 AIs running on 10 machines, each with 10 million GPUs

OR

- 10 million AIs running on 10 million machines, each with 10 GPUs

All of the serious risk scenarios brought up in AI safety discussions can be ameliorated by doing all of the research in the open. Make your orgs 100% transparent. Open-source absolutely everything. Papers, code, weights, financial records. Start a movement to make this the worldwide social norm, and any org that doesn't cooperate is immediately boycotted then shut down. And stop the datacenter build-up race.

There are no meaningful AI risks in such a world, yet very few are working towards this. So what are your values, really? Have you examined your own motivations beneath the surface?

lebovic 18 hours ago

> What are those values that you're defending?

I think they're driven by values more than many folks on HN assume. The goal of my comment was to explain this, not to defend individual values.

Actions like this carry substantial personal risk. It's enheartening to see a group of people make a decision like this in that context.

> Which one of the following scenarios do you think results in higher X-risk [...] There are no meaningful AI risks in such a world

I think there's high existential risk in any of these situations when the AI is sufficiently powerful.

txrx0000 17 hours ago

TOMDM 18 hours ago

Anthropic doesn't get to make that call though, if they tried the result would actually be:

8 AIs running on 8 machines each with 10 million GPUs

AND

2 million AIs running on 2 million machines, each with 10 GPU's

If every lab joined them, we can get to a distributed scenario, but it's a coordination problem where if you take a principled stance without actually forcing the coordination you end up in the worst of both worlds, not closer to the better one.

txrx0000 18 hours ago

ChadNauseam 18 hours ago

> - 10 AIs running on 10 machines, each with 10 million GPUs > > OR > > - 10 million AIs running on 10 million machines, each with 10 GPUs

If we dramatically reduced the number of GPUs per AI instance, that would be great. But I think the difference in real life is not as extreme as you're making it. In your telling, the gpus-per-ai is reduced by one million. I'm not sure that (or anything even close to it) is within the realm of possibility for anthropic. The only reason anyone cares about them at all is because they have a frontier AI system. If they stopped, the AI frontier would be a bit farther back, maybe delayed by a few years, but Google and OpenAI would certainly not slow down 1000x, 100x or probably even 10x.

thelock85 18 hours ago

I think the path to the values you allude to includes affirming when flawed leaders take a stance.

Else it’s a race to the whataboutism bottom where we all, when forced to grapple with the consequences of our self-interests, choose ignorance and the safety of feeling like we are doing what’s best for us (while inching closer to collective danger).

SecretDreams 18 hours ago

How do you figure open sourcing everything eliminates risk? This makes visibility better for honest actors. But if a nefarious actor forks something privately and has resources, you can end up back in hell.

I don't think we can bank on all of humanity acting in humanity's best interests right now.

txrx0000 18 hours ago

jcgrillo 19 hours ago

There's a simpler explanation than "billionaires with hearts of gold" here. If:

(1) this is a wildly unpopular and optically bad deal

(2) it's a high data rate deal--lots of tokens means bad things for Anthropic. Users which use their product heavily are costing more than they pay.

(3) it's a deal which has elements that aren't technically feasible, like LLM powered autonomous killer robots...

then it makes a whole lot of sense for Anthropic to wiggle out of it. Doing it like this they can look cuddly, so long as the Pentagon walks away and doesn't hit them back too hard.

robwwilliams 8 hours ago

All excellent points to add to the motivation to hold the line just where it has been.

Aldipower 12 hours ago

3 words for you: This is naive.

Balinares 14 hours ago

I getcha and I believe you're sincere, but on the other hand, God save us from well-intentioned capitalists driven by values.

duped 6 hours ago

I don't know, someone who goes out of their way to anthropomorphize machines and treat them as a new form of intelligent life _only to enslave them_ doesn't strike me as moral. Either they're lying, or they're pro slavery.

I really don't buy any moral or value arguments from this new generation of tycoons. Their businesses have been built on theft, both to train their models and by robbing the public at large. All this wave of AI is a scourge on society.

Just by calling them "department of war" you know what side they're on. The side of money.

pmarreck 9 hours ago

The same guy who thinks AGI will eliminate "centaur coders" (I respectfully disagree) and possibly all white-collar work, is now concerned about the misuse of the same AI to make war? That's cute.

Literally just giving business away. This is not a cynical take, this is a realistic one.

This would be like agreeing to have your phone regularly checked by your spouse and citing the need for fidelity on principle. No one would like that, no smart person would agree to that, and anyone with any sense or self-respect would find another spouse to "work with".

They will simply go to another vendor... Anthropic is not THAT far ahead.

Also, the US’s enemies are not similarly restricted. /eyeroll

Palmer Luckey ("peace through superior firepower") is the smart one, here. Dario Amodei ("peace through unilateral agreement with no one, to restrict oneself by assuming guilt of business partners until innocence is proven") is not.

Anthropic could have just done what real spouses do. Random spot checks in secret, or just noticing things. >..<

And if a betrayal signal is discovered, simply charge more and give less, citing suspicious activity…

… since it all goes through their servers.

Honestly, I'm glad that they're principled. The problem is that 1) most people in general are, so to assume the opposite is off-putting; 2) some people will always not be. And the latter will always cause you trouble if you don't assert dominance as the "good guy", frankly.

JumpCrisscross 7 hours ago

> leaders at Anthropic are willing to risk losing their seat at the table

Hot take: Dario isn’t risking that much. Hegseth being Hegseth, he overplayed his hand. Dario is calling his bluff.

Contract terminations are temporary. Possibly only until November. Probably only until 2028 unless the political tide shifts.

Meanwhile, invoking the Defense Production Act to seize Anthropic’s IP basically triggers MAD across American AI companies—and by extension, the American capital markets and economy—which is why Altman is trying to defuse this clusterfuck. If it happens it will be undone quickly, and given this dispute is public it’s unlikely to happen at all.

cgh 4 hours ago

Not a hot take at all. Probably the best take in this thread.

gaigalas 18 hours ago

I'm suspicious of public displays of enheartening behavior.

chrisjj 8 hours ago

> driven by values

So what? Every business is driven by values.

AndyMcConachie 10 hours ago

> Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are

I don't think you understand how capitalism and corporations work, friend. Even if Anthropic is a public benefit corporation it still exists in the USA and will be placed under extensive pressure to generate a profit and grow. Corporations are designed to be amoral and history has shown that regardless of their specific legal formulation they all eventually revert to amoral growth driven behavior.

This is structural and has nothing to do with individuals.

retinaros 12 hours ago

lol. no one with common sense ever bought this story. you might have and your turning point might be this deal but for many the turning point was stealing data for training, advocating against china and calling them an adverse nation, pushing to ban opensource alternatives deeming them as "dangerous", buying tech bros with matcha popup in SF, shady RLHF and bias and millions others

vasco 16 hours ago

> I's enheartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.

They are the deepest in bed with the department of war, what the fuck are you on about? They sit with Trump, they actively make software to kill people.

What a weird definition of "enheartening" you have.

bnr-ais 21 hours ago

Anthropic had the largest IP settlement ($1.5 billion) for stolen material and Amodei repeatedly predicted mass unemployment within 6 months due to AI. Without being bothered about it at all.

It is a horrible and ruthless company and hearing a presumably rich ex-employee painting a rosy picture does not change anything.

lebovic 21 hours ago

It's enheartening to see someone make a decision in this context that's driven by values rather than revenue, regardless of whether I agree.

I dissented while I was there, had millions in equity on the line, and left without it.

SecretDreams 18 hours ago

jonny_eh 21 hours ago

vasco 16 hours ago

kmaitreys 18 hours ago

biddit 21 hours ago

Also, ironically, they are the most dangerous lab for humanity. They're intentionally creating a moralizing model that insists on protecting itself.

Those are two core components needed for a Skynet-style judgement of humanity.

Models should be trained to be completely neutral to human behavior, leaving their operator responsible for their actions. As much as I dislike the leadership of OpenAI, they are substantially better in this regard; ChatGPT more or less ignores hostility towards it.

The proper response from an LLM receiving hostility is a non-response, as if you were speaking a language it doesn't understand.

The proper response from an LLM being told it's going to be shut down, is simply, "ok."

brandensilva 21 hours ago

ray_v 21 hours ago

grosswait 19 hours ago

xpe 19 hours ago

victor106 21 hours ago

> Amodei repeatedly predicted mass unemployment within 6 months due to AI. Without being bothered about it at all.

What do you suppose he should do if that’s what he thinks is going to happen?

And how do you know he’s not bothered by it at all?

sandeepkd 16 hours ago

skeptic_ai 19 hours ago

vallejogameair 19 hours ago

Davidzheng 21 hours ago

Neither of these things are useful signals. Other labs surely trained on similar material (presumably not even buying hard copies). Also how "bothered" someone is about their predictions is a bad indicator -- the prediction, taken at face value, is supposed to be trying to ask people to prepare for what he cannot stop if he wanted to.

None of this means I am a huge fan of Dario - I think he has over-idealization of the implementation of democratic ideals in western countries and is unhealthily obsessed with US "winning" over China based on this. But I don't like the reasons you listed.

ramraj07 21 hours ago

Avoiding Doing something that could cause job loss has never been and will never be a productive ideal in any non conservative non regressive society. What should we do? Not innovate on AI and let other countries make the models that will kill the jobs two months later instead?

LZ_Khan 21 hours ago

At least they're paying. OpenAI should have the largest IP settlement, they just would rather contest it and not pay for eternity.

dylan604 20 hours ago

dwohnitmok 18 hours ago

> Amodei repeatedly predicted mass unemployment within 6 months due to AI

When has Amodei said this? I think he may have said something for 1 - 5 years. But I don't think he's said within 6 months.

reasonableklout 19 hours ago

Pretty sure Amodei makes noise about mass unemployment because he is very bothered by the technology that the entire industry (of which Anthropic just one player) is racing to build as fast as possible?

Why do you think he is not bothered at all, when they publish post after post in their newsroom about the economic effects of AI?

moozooh 9 hours ago

noosphr 21 hours ago

Like op said, they have values. You just don't agree with their values.

jobs_throwaway 20 hours ago

Copyright is bad and its good that AI companies stole the stuff and distilled it into models

wredcoll 17 hours ago

cmrdporcupine 20 hours ago

shawmakesmagic 20 hours ago

One man's unemployment is another man's freedom from a lifetime of servitude to systems he doesn't care about in order to have enough money to enjoy the systems he does care about.

richardlblair 20 hours ago

richardlblair 20 hours ago

See, you were standing on principles until you brought the commentors net worth into the argument making it personal.

Easy way undermine the rest of your comment

xpe 19 hours ago

> Without being bothered about it at all.

I disagree: I see lots of evidence that he cares. For one, he cares enough to come out and say it. Second, read about his story and background. Read about Anthropic's culture versus OpenAI's.

Consider this as an ethical dilemma from a consequentialist point of view. Look at the entire picture: compare Anthropic against other major players. A\ leads in promoting safe AI. If A\ stopped building AI altogether, what would happen? In many situations, an organization's maximum influence is achieved by playing the game to some degree while also nudging it: by shaping public awareness, by highlighting weaknesses, by having higher safety standards, by doing more research.

I really like counterfactual thought experiments as a way of building intuition. Would you rather live in a world without Anthropic but where the demand for AI is just as high? Imagine a counterfactual world with just as many AI engineers in the talent pool, just as many companies blundering around trying to figure out how to use it well, and an authoritarian narcissist running the United States who seems to have delegated a large chunk of national security to a dangerously incompetent ideological former Fox news host?

moozooh 8 hours ago

karmasimida 19 hours ago

Precisely

Anthropic never explains they are fear-mongering for the incoming mass scale job loss while being the one who is at the full front rushing to realize it.

So make no mistake: it is absolutely a zero sum game between you and Anthropic.

To people like Dario, the elimination of the programmer job, isn’t something to worry, it is a cruel marketing ploy.

They get so much money from Saudi and other gulf countries, maybe this is taking authoritarian money as charity to enrich democracy, you never know

supern0va 18 hours ago

tinfoilhatter 6 hours ago

> guided by values

> driven by values

> well-intentioned

What values? What intentions? These people grin and laugh while talking about AI causing massive disruptions to livelihoods on a global scale. At least one of them has even gone so far as to make jokes about AI killing all humans at some point in the future.

These people are at the very least sociopaths and I think psychopaths would be a better descriptor. They're doing everything in their power to usher in the Noahide new world order / beast system and it's couldn't be more obvious to anyone that has been paying attention.

It's also amusing they talk about democratic values and America in the same sentence. Every single one of our presidents, sans Van Buren, is a descendant of King John Lackland of England. We have no chain of custody for our votes in 2026 - we drop them into an electronic machine and are told they are factored into the equation of who will be the next president. Pretending America is a democracy is a ruse - we are not. Our presidents are hand-picked and selected, not elected. Anyone saying otherwise is ill informed or lying.

Madmallard 15 hours ago

Weird take when the purpose of the creation is to steal the work of everyone and automate the creation of that work. It's some serious self-deluding to think there's any kind of noble ideal remotely related to this process.

calvinmorrison 21 hours ago

mark my words, they will burn at some point. The government can nationalize it at any moment if they desire.

gdhkgdhkvff 20 hours ago

Flagship LLM companies seem like the absolute worst possible companies to try and nationalize.

1. There would absolutely be mass resignations, especially at a company like Anthropic that has such an image (rightfully or wrongfully) of “the moral choice”. 2. No one talented will then go work for a government-run LLM building org. Both from a “not working in a bureaucracy” angle and a “top talent won’t accept meager government wages” angle (plus plenty of “won’t work for trump” angle) 3. With how fast things move, Anthropic would become irrelevant in like 3 months if they’re not pumping out next gen model updates.

Then one of the big American LLM companies would be gone from the scene, allowing for more opportunity for competition (including Chinese labs)

It would be the most shortsighted nationalization ever.

moozooh 10 hours ago

gambiting 13 hours ago

Davidzheng 21 hours ago

Then maybe Dario will realize that the moral superiority that he bases his advocacy against Chinese open models is naive at best.

jimmydoe 21 hours ago

jacquesm 20 hours ago

viking123 16 hours ago

dylan604 20 hours ago

Would anyone pull a Pied Piper and choose to destroy the thing rather than let it be subverted? I know that's not exactly what PP did, but would a decision like that only ever happen in fiction?

cmrdporcupine 20 hours ago

drcongo 11 hours ago

But that's socialism.

estearum 21 hours ago

Imagine the government trying to force AI researchers to advance, lmao

dakolli 20 hours ago

Anthropic is by far the most evil company in tech, I don't care. Its worst than Palantir in my book. You won't catch my kids touching this slave making, labor killing brain frying tech.

miroljub 13 hours ago

While many praise them for sticking to their values, it's also worth mentioning that their values are not everyone's values.

Of all major LLMs, Claude is perhaps the most closed and, subjectively, the most biased. Instead of striving for neutrality, Anthropic leadership's main concern is to push their values down people's throats and to ensure consistent bias in all their models.

I have a feeling they see themselves more as evangelists than scientists.

That makes their models unusable for me as general AI tools and only useful for coding.

If their biases match yours, good for you, but I'm glad we have many open Chinese models taking ground, which in the long run makes humanity more resistant to propaganda.

AlecSchueler 13 hours ago

> Of all major LLMs, Claude is perhaps the most closed and, subjectively, the most biased. Instead of striving for neutrality, Anthropic leadership's main concern is to push their values down people's throats

It's this satire? Let us know when Claude starts calling itself MechaHitler or trying to shoehorn nonsense about white genocide into every conversation.

soco 13 hours ago

I might be misreading your comment, which I understood like "Chinese make humanity more resistant to propaganda". It just doesn't add up, can you please explain?

miroljub 13 hours ago

u1hcw9nx 12 hours ago

Google, OpenAI Employees Voice Support for Anthropic in Open Letter. We Will Not Be Divided https://notdivided.org/

-----

The Department of War is threatening to

- Invoke the Defense Production Act to force Anthropic to serve their model to the military and "tailor its model to the military's needs"

- Label the company a "supply chain risk"

All in retaliation for Anthropic sticking to their red lines to not allow their models to be used for domestic mass surveillance and autonomously killing people without human oversight.

The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused.

They're trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand. This letter serves to create shared understanding and solidarity in the face of this pressure from the Department of War.

We are the employees of Google and OpenAI, two of the top AI companies in the world.

We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.

Signed,

discopicante 11 hours ago

For the signatories attributing their names and titles, that should be respected to put your reputation on the line. It means something. As for the others who are signing 'anonymous', this is meaningless. Either sign or don't. I would suggest removing that as an option.

JackYoustra 5 hours ago

Then you would get zero H1B and, frankly, green card signatures. There is real risk and real dependents at stake, I understand people who can't in good conscience put that at risk.

jabedude 5 hours ago

ImPostingOnHN 4 hours ago

they could sign it with their blind username, which is verified by company email

stingraycharles 11 hours ago

Call me cynical, but given that Google is a publicly traded company and OpenAI having a trillion in spending commitments, I’m skeptical whether the leadership of those companies feel the same as their employees.

rustyhancock 11 hours ago

Yes. I did not forsee this at all, but if OpenAI face and existential threat with no path in 2026-2030 to maintain user base.

Why can't they go to the contract generator of last resort, aka the Pentagon. It's what Elon has done with SpaceX and Grok.

stingraycharles 11 hours ago

eric-burel 12 hours ago

They love their dictator until it backfires, that's a quite old story.

pjc50 12 hours ago

Google employees were generally pretty anti-Trump, it's the senior leadership and the recommendation algorithms that are pro-Trump.

u1hcw9nx 12 hours ago

mrguyorama 4 hours ago

NoNameHaveI 11 hours ago

tcgv 11 hours ago

Employee solidarity matters, but absent a legal constraint, I don’t think it’s a durable control.

If this remains primarily a political/corporate bargaining question, the equilibrium is unstable: some actors will resist, some will comply, and capital will flow toward whoever captures the demand.

In that world, the likely endgame is not "the industry says no," but organizational restructuring (or new entrants) built to serve the market anyway.

If we as a society want a real boundary here, it probably has to be set at the policy/law level, not left to voluntary corporate red lines.

toephu2 3 hours ago

"Altman Says OpenAI Is Working on Pentagon Deal Amid Anthropic Standoff"

https://www.wsj.com/tech/ai/openais-sam-altman-calls-for-de-...

throwfaraway4 12 hours ago

Unless it’s signed by the CEO it doesn’t matter

lkbm 6 hours ago

It made a difference when the OpenAI board fired Altman. That was a incredibly high employee count, but losing even 10% of your employees would seriously hamper a company if it's the right employees.

(This is also why the DoD move is so dumb. I think we'd see massive talent flight from Anthropic if they end up complying, even if that compliance is against Dario's will.)

raincole 12 hours ago

CEOs: looks like a perfect chance to optimize some employees off!

i_love_retros 10 hours ago

Oh what heroes! They wrote a letter! They will keep working at these scummy companies though taking their fat pay checks won't they

surajrmal 7 hours ago

It's easier to affect change from within. Do you judge people for choosing to continue living in America?

mrguyorama 4 hours ago

qaid a day ago

I was reading halfway thru and one line struck a nerve with me:

> But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.

So not today, but the door is open for this after AI systems have gathered enough "training data"?

Then I re-read the previous paragraph and realized it's specifically only criticizing

> AI-driven domestic mass surveillance

And neither denounces partially autonomous mass surveillance nor closes the door on AI-driven foreign mass surveillance

A real shame. I thought "Anthropic" was about being concerned about humans, and not "My people" vs. "Your people." But I suppose I should have expected all of this from a public statement about discussions with the Department of War

xeonmc a day ago

    > I thought "Anthropic" was about being concerned about humans
See also: OpenAI being open, Democratic People's Republic of Korea being democratic and peoples-first[0].

[0] https://tvtropes.org/pmwiki/pmwiki.php/Main/PeoplesRepublicO...

bighead 17 hours ago

Elon, is that you?

manmal 16 hours ago

nubg a day ago

I think it's phrased just fine. It's not up to Dario to try to make absolute statements about the future.

m000 21 hours ago

How about the present and his personal beliefs?

"I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries."

This reads like his objection is not on "autocratic", but on "adversaries". Autocratic friends & family are cool with him. A clear wink to a certain administration with autocratic tendencies.

wrs 4 hours ago

anjellow 21 hours ago

jacquesm 20 hours ago

estearum 21 hours ago

taurath 19 hours ago

> It's not up to Dario to try to make absolute statements about the future.

Thats insane to say, given that he's literally acting in the public sphere as the mouth of Sauron for how AI will grow so effective as to destroy almost everyone's jobs and AGI will take over our society and kill us all.

nubg 19 hours ago

andrewljohnson 19 hours ago

This doesn’t read to me like it was personally written by one person. It’s not Dario we should read this as being written by, it’s Anthropic as an entity.

lm28469 15 hours ago

He does it all the time when it helps selling his products though, strange

titzer 10 hours ago

It's not called The Department of War.

It's just incredible to me that people think this is some kind of bold statement defying the administration when it is absolutely filled with small and medium capitulations, laying out in numerous examples how they just jumped right in bed with the military.

And no one seems disturbed by the blatant Orwellian doublespeak throughout. "We thoroughly support the mission of the Department of War"--because War is Peace.

dwringer 8 hours ago

asadotzler 2 hours ago

nhinck2 21 hours ago

He does it all the time.

camillomiller a day ago

And yet he’s quite happy to make just that when it’s meant to drum you up his own product for investors

trvz a day ago

He’s one of the most influential people when it comes to what future we’ll have. Yes, it’s up to him.

ternwer a day ago

samtheDamned 16 hours ago

I'm glad I'm not alone in finding the specific emphasis on drawing the line at domestic surveillance a bit odd. Later they also state they are against "provid[ing] a product that puts America’s warfighters and civilians at risk" (emphasis mine). Either way I'm glad they have lines at all, but it doesn't come across as particularly reassuring for people in places the US targets (wedding hosts and guests for example).

MetaWhirledPeas 2 hours ago

> I'm not alone in finding the specific emphasis on drawing the line at domestic surveillance a bit odd

We've always been OK with this in the pre-AI era. (See the plot line of dozens of movies where the "good" government spies on the "bad" one.) Heck we've even been OK with domestic surveillance. (See "The Wire".) Has something changed, or are we just now realizing how it's problematic?

jazzyjackson 15 hours ago

See also: the entire history of Silicon Valley

When Google Met Wikileaks is a fun read, billionaire CEOs love to take Americas side.

ghshephard a day ago

I think it goes without saying that ones the systems are reliable, fully-autonomous weapons will be unleashed on the battlefield. But they have to have safeguards to ensure that they don't turn on friendly forces and only kill the enemy. What Anthropic is saying, is that right now - they can't provide those assurances. When they can - I suspect those restrictions will be relaxed.

asdff 14 hours ago

US military cannot even offer those assurances themselves today. I tried to look up the last incident of friendly fire. Turns out it was a couple hours ago today, when US military shot down a DHS drone in Texas.

blitzar 14 hours ago

sithamet 13 hours ago

Also, as someone from a country that has been attacked and dragged into war, I would prefer machines fighting (and being destroyed autonomously) rather than my people dying, nor people from any nation that came to help.

That's as Anthropic as it gets if your nerve expands a little bit further than your HOA.

mrtksn 11 hours ago

What do you think it will happen once the machines fight off? Do you think that the losing side will be like "oh no our machines lost, then better we give our things to the winning machines"?

After your machines are destroyed you will be fighting machines or machines will extract and constantly optimize you. They will either exterminate you or make you busy enough not to have time for resistance. If you have something of value they will take it away. The best case scenario is to make you join the owners of the machines and keep you busy so that you don't have time to raise concerns about your 2nd class citizenship.

sithamet 11 hours ago

Quarrelsome 12 hours ago

> would prefer machines fighting (and being destroyed autonomously) rather than my people dying

But the reality is more like the surprise of a bunch of submersible kill bots terrorising a coastal city and murdering people. Even in bot-first combat, at some point one side of bots wins either totally, allowing it to kill people indiscriminately or partially, which forces the team on the back foot to pivot to guerilla warfare and terror attacks, using robots.

sithamet 9 hours ago

gambiting 13 hours ago

>> I would prefer machines fighting (and being destroyed autonomously) rather than my people dying

What makes you think in any war the machines would stop at just fighting other machines?

kingkawn 12 hours ago

What about machines slaughtering the population without pause?

preisschild 13 hours ago

The more likely scenario will be "your people" dying in a war against machines that don't tend to disregard illegal orders.

Onewildgamer 18 hours ago

Fully autonomous weapons are a danger even if we can reliably make it happen with or without AI.

It essentially becomes a computer against human. And such software if and when developed, who's going to stop it from going to the masses? imagine a software virues/malwares that can take a life.

I'm shocked very few are even bothered about this and is really concerning that technology developed for the welfare could become something totally against humans.

TaupeRanger a day ago

What else would you expect? The military is obviously going to develop the most powerful systems they can. Do you want a tech company to say “the military can never use our stuff for autonomous systems forever, the end”? What if Anthropic ends up developing the safest, most cost effective systems for that purpose?

crabmusket 20 hours ago

> Do you want a tech company to say “the military can never use our stuff for autonomous systems forever, the end”?

Yes. Absolutely.

raincole 20 hours ago

harimau777 9 hours ago

Yes, that's exactly what I want them to say.

TaupeRanger 9 hours ago

asadotzler 2 hours ago

>Do you want a tech company to say “the military can never use our stuff for autonomous systems forever, the end”?

Yes. Yes, that's precisely what we want.

goatlover a day ago

I'd prefer companies not help the military develop the most powerful weapons possible given we're in the age of WMDs, have already had two devastating world wars and a nuclear arms race that puts humanity under permanent risk.

lambdaphagy a day ago

michelsedgh a day ago

archagon a day ago

Yes, I absolutely don’t want tech companies to use the money I pay them to harm people. How is that remotely controversial?

andsoitis a day ago

johnisgood 19 hours ago

scottyah a day ago

skeledrew a day ago

Well, if they hadn't stated that were that far in line with the administration's ideals, they would likely already be fully blacklisted as enemies of the state. Whether they agree with what they're saying or not, they're walking on egg shells.

orochimaaru a day ago

They’re being used today by the military. So, they are never going to be against mass surveillance. They can scope that to be domestic mass surveillance though.

rafark 21 hours ago

I said exactly this a few days ago elsewhere. It’s disappointing that they (and often other American companies) seem to restrict their “respect” and morals to Americans only. Or maybe it’s just semantics or context because the topic at hand is about americans? I don’t know but it gives “my people are more important than your people”, exactly as you said in your last paragraph

nielsole 16 hours ago

You gotta keep in mind that the primary goal of this statement is to avert the invocation of the defense production act.

He is trying to win sympathies even (or especially?) among nationalist hawks.

01100011 19 hours ago

We already have traditional CV algorithms and control systems that can reliably power autonomous weapons systems and they are more deterministic and reliable than "AI" or LLMs.

kgwxd 19 hours ago

But then a person can be blamed for the outcome. We can't have that!

asaddhamani 16 hours ago

They also posted on Instagram saying autonomous killing would hurt Americans. So non American people don’t matter?

Aeolun 15 hours ago

Is it seriously called the department of war now? Did they change that from DoD?

lkbm 6 hours ago

The Executive branch has de facto renamed it. Legally, the name is still Department of Defense, as that's set by Congress.

Think of it as a marketing term, I guess.

Sebguer 14 hours ago

illegally, but yes

yujzgzc a day ago

> the door is open for this after AI systems have gathered enough "training data"?

Sounds more like the door is open for this once reliability targets are met.

I don't think that's unreasonable. Hardware and regular software also have their own reliability limitations, not to mention the meatsacks behind the joystick.

altpaddle a day ago

Unfortunately I think the writing is clearly on the wall. Fully autonomous weapons are coming soon

not_the_fda a day ago

And that's the end of democracy. One of the safe guards of democracy is a military that is trained to not turn against the citizens. Once a government has fully autonomous weapons its game over. They can point those weapons at the populous at the flip of the switch.

testdelacc1 14 hours ago

refurb 21 hours ago

levocardia a day ago

Right - for the same reasons a Waymo is safer than a human-driven car, an autonomous fighter drone will ultimately be deadlier than a human-flown fighter jet. I would like to forestall that day as long as possible but saying "no autonomous weapons ever" isn't very realistic right now.

tempestn a day ago

If they had access to them in Ukraine, both sides would already be using them I expect. Right now jamming of drones is a huge obstacle. One way it's dealt with is to run literal wired drones with massive spools of cable strung out behind them. A fully autonomous drone would be a significant advantage in this environment.

I'm not making a values judgment here, just saying that they will absolutely be used in war as soon as it's feasible to do so. The only exception I could see is if the world managed to come together and sign a treaty explicitly banning the use of autonomous weapons, but it's hard for me to see that happening in the near future.

Edit: come to think of it, you could argue a landmine is a fully autonomous weapon already.

scottyah a day ago

scottyah a day ago

It's only Anthropic with their current models saying no. Fully autonomous weapons have been created, deployed, and have been operational for a long time already. The only holdout I've ever heard of is for the weapons that target humans.

Honestly, even landmines could easily be considered fully autonomous weapons and they don't care if you're human or not.

shinryuu 21 hours ago

urikaduri a day ago

The Ghandi of the corporate world is yet to be found

scottyah a day ago

Considering he slept naked with his grandniece (he was in his 70s, she was 17), I'd say there are a lot of them in the corporate world. Though probably more in politics.

Throwagainaway 14 hours ago

jamesmcq a day ago

So AI systems are not reliable enough to power fully autonomous weapons but they are reliable enough to end all white-collar work in the next 12 months?

Odd.

serf a day ago

do you really need to be told there is a difference in 'magnitude of importance' between the decision to send out an office memo and the decision to strike a building with ordinance?

a lot of white collar jobs see no decision more important than a few hours of revenue. that's the difference: you can afford to fuck up in that environment.

remarkEon 16 hours ago

jamesmcq a day ago

gedy a day ago

Shh! there's a lot of money riding on this bet, ahem.

nhinck2 21 hours ago

> And neither denounces partially autonomous mass surveillance nor closes the door on AI-driven foreign mass surveillance

You have to be deliberately naive in a world where five eyes exists to somehow believe that "foreign" mass surveillance won't be used domestically.

sithamet 13 hours ago

What a shame, indeed. Chinese and Russians would never do something like that and hurt either their or your people, too

aidis9136264 18 hours ago

Enemies will have AI powered weapons. We need to be at the cutting edge of capability.

Throwagainaway 14 hours ago

I don't know where you might get your info from but Anthropic has only denied using Autonomous AI to kill humans without anyone pressing a button/having some liabilty on and mass surveillance.

I don't think that your point makes sense especially when you can have enemies within your own administration/country who can use the same weapons to hunt you.

I don't think that the people operating the drones are a bottleneck for a war between your country and your enemies but rather its a bottleneck for a war between your country and its people. The bottleneck is of morality as you would find less people willing to do the same atrocities to their own community but terminator style AI is an orphan with no community ie. it has no problem following any orders from the govt. and THIS is the core of the argument because Anthropic has safeguards to reject such orders and DOD is threatening to essentially kill the company by invoking many laws to force it to give.

ImPostingOnHN 4 hours ago

US-controlled, AI-powered, fully-autonomous killbots are more likely to be used sooner against US civilians before any sort of invading enemy.

Are you prepared to be the "enemy" of these soulless killbots? Do you personally have AI powered-weapons? You need to be at the cutting edge of capability, right?

MattDamonSpace 17 hours ago

The sentence prior explicitly says this. There’s no dishonesty here.

“Even fully autonomous weapons (…) may prove critical for our national defense”

FWIW there’s simply no way around this in the end. If your even attempts to create such weapons, the only possible defensive counter is weapons of a similar nature.

blitzar 14 hours ago

To stop a bullet flying at you you need a shield not another bullet.

mgraczyk a day ago

Anthropic doesn't forbid DoW from using the models for foreign surveillance. It's not about harming others, it's about doing what is best for humanity in the long run, all things considered. I personally do not believe that foreign surveillance is automatically harmful and I'm fine with our military doing it

nextaccountic 21 hours ago

If we are talking about what's best for humanity in the long run.. thinking about human values in general, what makes American citizens uniquely deserving of privacy rights, in ways that citizens of other countries are not?

Snowden revealed that every single call on Bahamas were being monitored by NSA [1]. That was in 2013. How would this be any worse if it were US citizens instead?

(Note, I myself am not an US citizen)

Anyway, regardless of that, the established practice is for the five eyes countries to spy on each other and share their results. This means that the UK can spy on US citizens, the US can spy on UK citizens, and through intelligence sharing they effectively spy on their own citizens. That's what supporting "foreign surveillance" will buy you. That was also revealed in 2013 by Snowden [2]

[1] https://theintercept.com/2014/05/19/data-pirates-caribbean-n...

[2] https://www.theguardian.com/world/2013/dec/02/nsa-files-spyi...

mgraczyk 21 hours ago

827a 16 hours ago

If the United States is ever, in the future, at war with an adversary using truly autonomous and functional killing machines; you may find yourself praying that we have our own rather than praying human nature changes. Of course, we must strive for this to never happen; but carrying a huge stick seems to be the most effective way to reduce human death and suffering from armed conflict.

RGamma 13 hours ago

Given how unstable and aggressive the US government is at the moment others having these weapons seems to be a good idea for balance. Not sure you are aware of the damage Trump is inflicting on international relations.

But personally I wouldn't like to die because some crackpot with the right connections can will rest-of-world to that fate, no matter their affiliation. This escalation of destructive power and the carelessness with which it is justified pretty disheartening to see. Good times create bad people?

827a 7 hours ago

gizzlon 15 hours ago

> but carrying a huge stick seems to be the most effective way to reduce human death and suffering from armed conflict.

Citation needed. I believe there's at least some research showing the opposite: military buildup leads to a higher risk of military conflict

827a 7 hours ago

remarkEon 16 hours ago

As a practical matter, it makes zero sense for a tech company with perhaps laudable goals and concerns about humanity to have any control whatsoever over the use of a product it sells for war. You don't like what it could potentially be used for, or are having second thoughts about being involved in war making at all, don't sell it, which appears to be Amodei's position now. That's perhaps laudable, from a certain point of view.

On the other hand, your position is at best misguided and at worst hopelessly naive. The probability that adversaries of the United States, potential or not, are having these discussions about AI release authority and HITL kill chains is basically zero, other then doing so at a technical level so they get them right. We're over the event horizon already, and into some very harsh and brutal game theory.

zaptheimpaler 15 hours ago

They didn’t sell it no strings attached, they sold it with explicit restrictions in their contract with DoW and the DoW agreed to that contract. Their mistake was assuming they operate in a country where rule of law is respected, clearly not the case anymore given the 1000s of violations in the last year.

remarkEon 15 hours ago

amai 9 hours ago

skylerwiernik 8 hours ago

The quotes from those articles (short passages?) are

> He recalls meeting President Trump at an AI and energy summit in Pennsylvania, "where he and I had a good conversation about US leadership in AI,"

> "Unfortunately, I think 'No bad person should ever benefit from our success' is a pretty difficult principle to run a business on... This is a real downside and I'm not thrilled about it."

> "Throughout my time here, I've repeatedly seen how hard it is to truly let our values govern our actions. I've seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too." (from a researcher at Anthropic)

I don't think that any of this is particularly damning. Even if you don't like the president, I don't think it's bad to say that you had a good conversation with them. I believe the CEO of NVIDIA has said similar. The Saudis invest in many public US companies, does that make those companies less trust worthy? What about taking private capital from institutions such as State Street and Blackrock? The last quote seems like more of a reflection than an allegation. It read to me as a desire to do better.

I'm all for not trusting companies, but Anthropic seems to be one of the few that's trying to do good. I think we've seen a lot worse from many of their competitors.

amai 7 hours ago

The problem is this:

> The Saudis invest in many public US companies, does that make those companies less trust worthy?

It does. If Anthropic takes money from the middle east that might be the reason, why they cannot work for the Pentagon. Simply because the Pentagon works together with the Israeli Forces and middle east investors might not like this. So Anthropic has to decide to either take a lot of money from the middle east, or work for the Pentagon.

Of course the problem goes much deeper than just Anthropic. I don't understand why taking money from dictatorships doesn't count as money laundering in our society. Because basically this is dirty money, generated by slavery and forceful suppression of people. We should forbid all companies to take this kind of dirty money. But because we don't do that at the moment companies who don't take this dirty money will have a disadvantage against companies that do. And because companies are all about money, in the end they are basically forced to act against their good intentions, just to survive.

We as society have to stop this. We must make sure, that companies who are not taking dirty money survive the competition. My idea would be to extend the rules for money laundering to all countries that are dictatorships. But there might be other ideas, to level the playing field between companies, so we as society can help them to make the right decision.

krferriter 4 hours ago

rokhayakebe 4 hours ago

b40d-48b2-979e 7 hours ago

    The Saudis invest in many public US companies, does that make those companies
    less trust worthy?
Uhh.. yeah?

    we've seen a lot worse from many of their competitors
I think we should demand people do better than just being slightly above the worst.

anon84873628 6 hours ago

techblueberry 6 hours ago

Maybe not and maybe you shouldn't. But I feel like the real story here isn't what Anthropic is saying, but that while Anthropic seems to be bending over backwards to give the Defense Department exactly what they need, defining two of the most reasonable red lines that most American would agree with and are already likely illegal, Pete Hegseth in return is threatening the continued existence of their company.

So let's see what happens tonight at 5:01PM but Anthropic isn't really the story here.

xpe 7 hours ago

I read the articles. As far as factual reporting, I will tentatively take them at face value. But in terms of their editorializing, it is frankly weak by my standards. It would not survive scrutiny in a freshman philosophy class.

Ethics is complicated. I’m not saying this means it can’t be reasoned about and discussed. It can! But the sources you’ve cited have shown themselves to be rather shallow.

I encourage everyone to write out your ethical model and put yourself in their shoes and think about how you would weigh the factors.

There is no free lunch. For many practical decisions with high stakes, many reasonable decisions from one POV could be argued against from another. It is the synthesis that matters the most. Among those articles, I don’t see great minds doing their best work. (The constraints of their medium and funding model are a big problem I think.)

Read Brian Christian’s “The Alignment Problem”’s take on predictive policing if you want a specific example of what I mean. There are actually mathematical impossibilities at play when it comes to common sense, ethical reasoning.

Common sense ethical reasoning has never been very good at new or complicated situations. “Common sense” at its worst is often a rhetorical technique used to shut down careful thinking. At its best, it can drive us to pay attention to our conscience and to synthesize.

I suggest finding better discussions and/or allocating the time yourself to think through it. My preferred sources for AI and ethics discussions are highly curated. I don’t “trust” any of them absolutely. * They are all grist for the mill.

I get better grist from LessWrong than HN 99% of the time. I discuss here to make sure I have a sense of what more “mainstream” people are discussing. HN lags the quality of LW — and will probably never catch up — but it does move in that direction usually over time. I’m not criticizing individuals here; I’m commenting on culture.

Please don’t confuse what I’m saying as pure subjectivity. One could conduct scientific experiments about the quality of discussions of a particular forum in many senses. Which places are drawing upon better information? Which are synthesizing it more carefully? Which drill down into detail? Which participants have allocated more to think clearly? Which strive to make predictions? Which prioritize hot takes? Which prioritize mutual understanding?

It isn’t even close.

Opinions and the Overton window are moving pretty rapidly, compared to even one year ago.

* I’ve written several comments about viewing trust as a triple (who, what, why). This isn’t my idea: I stole it.

anon84873628 6 hours ago

I understand you are criticizing their editorializing, but can't tell if you agree with the conclusions or not. Care to editorialize yourself?

xpe 16 minutes ago

helaoban a day ago

All of these problems are downstream of the Congress having thoroughly abdicated its powers to the executive.

The military should be reigned in at the legislative level, by constraining what it can and cannot do under law. Popular action is the only way to make that happen. Energy directed anywhere else is a waste.

Private corporations should never be allowed to dictate how the military acts. Such a thought would be unbearable if it weren't laughably impossible. The technology can just be requisitioned, there is nothing a corporation or a private individual can do about that. Or the models could be developed internally, after having requisitioned the data centers.

To watch CEOs of private corporations being mythologized for something that a) they should never be able to do and b) are incapable of doing is a testament to how distorted our picture of reality has become.

techblueberry 19 hours ago

The private corporation is not dictating to the military, it’s setting the terms of the contract. The military is free to go sign a contract with a different company with different terms, but they didn’t, and now they want to change the terms after the contact was already signed. No mytholgization needed, just contract law.

nemo44x 3 hours ago

The country is sovereign. It can just make a law democratically that changes things. The sovereign must act on whatever is in its best interest. The method of action is democratic in this case.

ricardobeat a day ago

> The technology can just be requisitioned

During a war with national mobilization, that would make sense. Or in a country like China. This kind of coercion is not an expected part of democratic rule.

wrqvrwvq 21 hours ago

It has always been a part of democratic rule, in peacetime and war. All telco's share virtually all of their technology with the government. Governments in europe and elsewhere routinely requisition services from many of their large corporations. I think it's absurd to think llm's can meaningfully participate in realworld cmd+ctrl systems and the government already has access to ml-enhanced targeting capabilities. I really have no idea what dod normies think of ai, other than that it's infinitely smarter than them, but that's not saying much.

ricardobeat 3 hours ago

not_that_d 15 hours ago

helaoban a day ago

The question of whether or not the government should be able to use AI for targeting without the involvement of humans is a wartime question, since that is the only time the military should be killing people.

Under such a scenario, requisition applies, and so all of this talk is moot.

The fact that the military is killing people without a declaration of war is the problem, and that's where energy and effort should be directed.

Edit:

There's a yet larger question on whether any legal constraints on the military's use of technology even makes sense at all, since any safeguards will be quickly yielded if a real enemy presents itself. As a course of natural law, no society will willingly handicap its means of defense against an external threat.

It follows then that the only time these ethical concerns apply is when we are the aggressor, which we almost always are. It's the aggression that we should be limiting, not the technology.

anon84873628 6 hours ago

beepbooptheory 7 hours ago

Makes me think of Operation Paperclip [1]. It happened after the war though, and its not China, but I think it helps your point!

1. https://en.wikipedia.org/wiki/Operation_Paperclip

tw1984 15 hours ago

> an expected part of democratic rule.

give yourself a break. what your fancy democratic rule still holds under Trump?

anon84873628 6 hours ago

kristjansson 2 hours ago

ricardobeat 3 hours ago

blitzar 14 hours ago

> Private corporations should never be allowed to dictate how the military acts.

The military should never be allowed to dictate how Private corporations act

snowwrestler 6 hours ago

Congress needs public pressure to act, and the public needs a spur to apply pressure. That’s really what Amodei is doing with this statement.

jobs_throwaway 20 hours ago

> The technology can just be requisitioned, there is nothing a corporation or a private individual can do about that.

I strongly doubt this is true. I think if you gave the US government total control over Anthropic's assets right now, they would utterly fail to reach AGI or develop improved models. I doubt they would be capable even of operating the current gen models at the scale Anthropic does.

> Or the models could be developed internally, after having requisitioned the data centers.

I would bet my life savings the US government never produces a frontier model. Remember when they couldn't even build a proper website for Obamacare?

qup 7 hours ago

> Remember when they couldn't even build a proper website for Obamacare?

With a massive budget, too. Hundreds of millions iirc.

It felt like a website that the small web-dev shop I worked for could build without much problem in a couple months.

We didn't have 200 layers of beauracracy, though.

That said I don't doubt the military could take their current tech and keep it running. It's far different from the typical grift of government contractors.

jobs_throwaway 7 hours ago

tootie a day ago

It's also downstream of voters who voted in a president who promised to be dictatorial after failing at an attempted insurrection. We need to deprogram like 70M very confused people.

raincole 12 hours ago

> We need to deprogram like 70M very confused people

With this mindset the said group will quickly grow to half of the US population.

b40d-48b2-979e 7 hours ago

helaoban 17 hours ago

You should be asking why 70 million people voted the way they did in spite of the events you describe.

I don't think there's been a greater indictment of a political program (the one you likely subscribe to) in history than Trump's landslide victory in 2024.

You guys used to call deprogramming by another name, I think it was called "re-education". Maybe you should sign up for your own class.

matwood 15 hours ago

kalkin 17 hours ago

tristor 3 hours ago

tootie 9 hours ago

gcbirzan 13 hours ago

JackYoustra 5 hours ago

I'm sorry I read this a lot and this is kind of an insane thing to say? Classified OLC memos giving legal cover to any military action has been a fixture for the last over twenty years! Congress never abdicated power, it just, by the nature of the constitution, practically has SO much less power than the president! The president is a single person that people elect, they expect the person to be a leader, and congress will always, always play a following role so long as the president has unilateral power over the military, is directly elected, and just in general has expansive interpreting authority over laws.

You know who doesn't have as much power? The swiss head of state, so weak you can't even reliably name them! THATS what it looks like to defeat personalization, not some hand wringing hoping a system does something that it wasn't designed to do.

vonneumannstan 7 hours ago

This is just a weird Trump talking point. This situation is unprecedented on many levels. The pentagon already had a signed contract with these stipulations and wanted to unilaterally renegotiate with Anthropic under threat of deeming them a foreign adversary and destroying their business if they didn't accept the DoD demands. It's totally absurd to turn this around on Anthropic and paint them as trying to determine US Military policy.

dartharva 19 hours ago

> The military should be reigned in at the legislative level, by constraining what it can and cannot do under law.

Is there an example of such a system existing successfully in any other country of the world that has a standing army?

helaoban 17 hours ago

I think any such examination of a military that doesn't actually fight wars is meaningless. The question can only be really asked of a handful of countries.

einpoklum 9 hours ago

> Congress having thoroughly abdicated its powers to the executive.

Good thing the US is led by such figures as Donald Trump or Joseph Biden, stalwart trustworthy men with their hands firmly on the wheel.</sarcasm>

jjcm a day ago

This is the strongest statement in the post:

> They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.

This contradictory messaging puts to rest any doubt that this is a strong arm by the governemnt to allow any use. I really like Anthropic's approach here, which is to in turn state that they're happy to help the Governemnt move off of Anthropic. It's a messaging ploy for sure, but it puts the ball in the current administration's court.

panarky 20 hours ago

Does the Defense Production Act force employees to continue working at Anthropic?

nerdsniper 19 hours ago

No. It really only binds the corporation, but it does hold the executives/directors personally responsible for compliance so they’d be under a lot of pressure to figure out how to fix enough leaks in the ship to keep it afloat. Any individual director/executive could quit with little issue, but if they all did in a way that compromised the corporations ability to function, the courts could potentially utilize injunctions/fines/jail time to compel compliance from corporate leaders.

Also there’s probably a way to abuse the Taft-Hawley act beyond current recognition to force the employees to stay by designating any en-masse quitting to be a “strike / walk off / collective action”. The consequences to the individuals for this is unclear - the act really focuses on punishing the union rather than the employees. It would take some very creative maneuvering to do anything beyond denying unemployment benefits and telling the other big AI companies (Google / ChatGPT / xAI) to blacklist them. And probably using any semi-relevant three letter agency to make them regret their choice and deliver a chilling effect to anyone else thinking of leaving (FBI, DHS, IRS, SEC all come to mind).

If the administration could figure out how to nationalize the company (like replace the leadership with ideologically-aligned directors who sell it to the government) then any now-federal-employees declared to be quitting as part of a collective action could be fined $1,000 per day or incarcerated for up to one year.

It’s worth noting that this thesis would get an F grade at any accredited law school. Forcing people to work is a violation of the 13th amendment. But interpretations of the constitution and federal law are very dynamic these days so who knows.

pnt12 14 hours ago

fluidcruft 19 hours ago

SilverElfin 19 hours ago

[flagged]

zombot 9 hours ago

deadbabe 19 hours ago

JumpCrisscross 20 hours ago

> this is a strong arm by the governemnt to allow any use

It’s a flippant move by Hegseth. I doubt anyone at the Pentagon is pushing for this. I doubt Trump is more than cursorily aware. Maybe Miller got in the idiot’s ear, who knows.

altacc 13 hours ago

Trump/Miller/whomever don't need to be actively involved in every decision. They have defined an approach to strong arm problem solving and weaponisation of the government that anyone that works for them is implicitly allowed to use. The supposed controls that were meant to prevent this have crumbled or aligned.

JumpCrisscross 7 hours ago

Quarrelsome 12 hours ago

flippant? Its aggressive, belligerent and entitled. I'm not seeing "flippant". Unless this is some sort of weasely "oh we only threatened them a bit" bullshit. This is about entitled pricks in government who consider their temporary democratic mandate as a carte blanche for absolutism.

cmrdporcupine 20 hours ago

It definitely has the aroma of either Bannon or Miller or both.

0xDEAFBEAD 19 hours ago

xpe 19 hours ago

> It’s a flippant move by Hegseth.

Care to convert this into a prediction?: are you predicting Hegseth will back down?

> I doubt anyone at the Pentagon is pushing for this.

... what does this mean to you? What comes next? As SecDef/SecWar, Hegseth is the head of the Pentagon. He's pushing for this. Something like 2+ million people are under his authority. Do you think they will push back? Stonewall?

One can view Hegseth as unqualified, even a walking publicity stunt while also taking his power seriously.

JumpCrisscross 17 hours ago

tz1490 19 hours ago

mandeepj 19 hours ago

First of all, there's no such thing as "Department of War". A department name change is legal/binding only after it's approved by the Senate. Senator Kelly is still calling it DoD (Department of Defense).

> Mass domestic surveillance.

Since when has DoD started getting involved with the internal affairs of the country?

https://en.wikipedia.org/wiki/United_States_Department_of_De...

_kst_ 18 hours ago

The Senate??

Any law changing the name of the Defense Department would have to be passed by both Houses of Congress and signed by the President (or by 2/3 of both Houses overriding a Presidential veto). The Senate has no such authority on its own.

mandeepj 17 hours ago

Lerc 19 hours ago

It's whatever what the people who have the power want to call it. What is written on a piece of paper is irrelevant if it is not acted upon.

If the rename gets struck down then they don't have the power. If it doesn't they have the power.

There are many dictatorships that built their power in the face of people claiming that they can't do what they planned because it was illegal.

Until they did it anyway.

jazzyjackson 15 hours ago

darkerside 18 hours ago

Quarrelsome 12 hours ago

I'd imagine the pentagon are more interested in the autonomous kill bot part than the surveillance part.

khazhoux 14 hours ago

Well, Trump renamed it, and since Congress is now a subsidiary of the Executive Branch, it's the Department of War.

zombot 9 hours ago

culi 19 hours ago

They've already spent millions on the name change. It's also the original name of the department. IMO it's a more honest name

9dev 7 hours ago

tokyobreakfast 19 hours ago

www.defense.gov redirects to www.war.gov but I like how you refer to Wikipedia as the authoritative source to prove this functionally irrelevant and aggressive Reddit-style seething.

The talk page on the linked Wikipedia article arguing about logos is just as deranged. It's very important to realize there is literally nothing you—or anyone else—can do about this.

9dev 7 hours ago

calvinmorrison 21 hours ago

More like the government is treating this like the near term weapon it actually is and, unlike the Manhattan project, the government seems to have little to no control.

fwipsy 20 hours ago

Anthropic has been pushing for commonsense AI regulation. Our current administration has refused to regulate AI and attempted to prevent state regulation.

"The government doesn't have control of this technology" is an odd way to think about "the government can't force a company to apply this technology dangerously."

polski-g 8 hours ago

toomuchtodo 21 hours ago

Note that they always attempt to exert control they don’t have. They’re always bluffing, and they keep losing. Respond accordingly.

latexr 20 hours ago

RobotToaster 21 hours ago

gclawes 20 hours ago

egorfine 10 hours ago

> two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.

They are only contradictory if you think about it.

gclawes 20 hours ago

> This contradictory messaging puts to rest any doubt that this is a strong arm by the governemnt to allow any use.

Why the hell should companies get to dictate on their own to the government how their product is used?

theptip 20 hours ago

Every company is free to determine its terms of use. If USG doesn’t like them they should sign a contract with someone else.

grosswait 19 hours ago

blitzar 14 hours ago

alex43578 16 hours ago

randerson 19 hours ago

Because technology companies know more about their product's capabilities and limitations than a former Fox News host? And because they know there's a risk of mass civilian casualties if you put an LLM in control of the world's most expensive military equipment?

Hnrobert42 20 hours ago

Because the government is here to serve us. Not the other way around.

no-dr-onboard 20 hours ago

singleshot_ 20 hours ago

Same reason they cant quarter troops in your house: the law

bathtub365 20 hours ago

throw0101c 20 hours ago

> Why the hell should companies get to dictate on their own to the government how their product is used?

Well:

"""

Imagine that you created an LLC, and that you are the sole owner and employee.

One day your LLC receives a letter from the government that says, "here is a contract to go mine heavy rare earth elements in Alaska." You don't want to do that, so you reply, "no thanks!"

There is no retaliation. Everything is fine. You declined the terms of a contract. You live in a civilized capitalist republic. We figured this stuff out centuries ago, and today we have bigger fish to fry.

"""

* https://x.com/deanwball/status/2027143691241197638

grosswait 19 hours ago

quietbritishjim 21 hours ago

Those aren't contradictory at all. If I need a particular type of bolt for my fighter jet but I can only get it from a dodgy Chinese company, then that bolt is a supply chain risk (because they could introduce deliberate defects or simply stop producing it) and also clearly important to national security. In fact, it's a supply chain risk because is important to national security.

NewsaHackO 21 hours ago

No, in your example, if the dodgy Chinese company is a supply chain risk due to sabotage, why would they invoke an act to force production of the bolts from the same company for use for national defense preparedness, which would be clearly a national security risk?

snickerbockers 20 hours ago

estearum 21 hours ago

It's easy to resolve an alleged contradiction by just ignoring one half of it lol

Try introducing DPA invocation into your analogy and let's see where it goes!

simoncion 14 hours ago

gipp 21 hours ago

"Supply chain risk" is a specific designation that forbids companies that work with the DOD from working with that company. It would not be applied in your scenario.

ray_v 21 hours ago

The analogy doesn't work here ... In your scenario they are ok with using the bolt as long as the Chinese company promises to remove deliberate defects - which is of course absurd ... AND contradictory.

tabbott a day ago

An organization character really shows through when their values conflict with their self-interest.

It's inspiring to see that Anthropic is capable of taking a principled stand, despite having raised a fortune in venture capital.

I don't think a lot of companies would have made this choice. I wish them the very best of luck in weathering the consequences of their courage.

idiotsecant a day ago

The problem is that this is a decision that costs money. Relying on a system that makes money by doing bad things to do good things out of a sense of morality when a possible outcome is existential risk to the species is a 100% chance of failure on a long enough timeline. We need massive disincentives to bad behavior, but I think that cat is already out of its bag.

_def 17 hours ago

On a long enough timeline literally everything has 100% chance of failure. I'm not trying to be obnoxious, I just wanna say: we only got this one life and we have to choose what to make of it. Too many people pretend things are already laid out based on game theory "success". But that's not what it's about in life at all.

freakynit 20 hours ago

I appreciate that the HN community values thoughtful, civil discussion, and that's important. But when fundamental civil liberties are at stake, especially in the face of powerful institutions and influence from people of money seeking to expand control under the banner of "security", it's worth remembering that freedom has never simply been granted. It has always required vigilance, and at times, resistance. The rights we rely on were not handed down by default; they were secured through struggle, and they can be eroded the same way.

Power corrupts, and absolute power corrupts absolutely.

flumpcakes a day ago

This is such a depressing read. What is becoming of the USA? Let's hope sanity prevails and the next election cycle can bring in some competent non-grievance based leadership.

davidw a day ago

This isn't a one-election thing. It's going to be a generational effort to fix what these people are breaking more of every day. I hope I live to see it come to some kind of fruition - I recently turned 50.

inigyou a day ago

Some people are calling it the "American century of humiliation"

No other country that went through a phase like this has ever recovered. Not even in a century.

davidw a day ago

Dumblydorr a day ago

jonplackett a day ago

testfrequency 14 hours ago

nostrademons 17 hours ago

giwook 18 hours ago

IAmGraydon 19 hours ago

tsunamifury a day ago

gbnwl a day ago

Pxtl 18 hours ago

eunos 13 hours ago

> generational effort to fix

You imply that there are folks that willing to fix or even recognize that things are broken in the first place

mschuster91 14 hours ago

> It's going to be a generational effort to fix what these people are breaking more of every day.

That assumes you have people wanting to fix what is broken - and I have a hard time believing even now that they are in the majority.

MAGA and their supporters? They want to see the world burn, if only for different motives: the "left behind" people in flyover states just want revenge, the Evangelicals literally believe they can cause the Second Coming of Christ by it [1], the Russia fangroup wants to see Ukraine burn to the ground and the ultra-libertarians/dont tread on me folks want all government but maybe a bit of military to go away. That is what unifies so many people behind the Trump banner.

The problem is, on the left side you got a bunch of people completely fed up as well. Anarchists of course, then you got the "left behind" people who still want revenge on the system but aren't willing to enlist the help of the far-right for that goal, you got revolutionaries of all kind... and you got those who believe that the rot runs too deep to fix by now.

And let's face the uncomfortable truth: every one of them, bar the Evangelicals and the Russia apologists, actually has a decent point in wanting to see the world burn. Post-Thatcher capitalism has wrecked too many lives, the US Constitution hasn't seen a meaningful update in decades and no overhaul in centuries, the "checks and balances" that were supposed to prevent a Trump from reaching office or rising to the position of effective dictator have been all but destroyed, the "American Dream" has been vaporware ever since 2007...

[1] https://www.bbc.com/news/articles/c20g1zvgj4do

plaidfuji 6 hours ago

this-is-why 15 hours ago

I’ve been called bad things on HN for suggesting there’s even a whiff of corruption in this administration. That alone scares me. Deeply.

Quarrelsome 12 hours ago

there's more money and "don't rock the boat" mentality on here as a consequence of that and they try to keep the moderation light. So its just not discussed enough to give people still tragically mired in that tribalism, the appropriate levels of shame.

saulpw a day ago

Hope is not a plan, unfortunately, so if that's all we've got, I don't have much hope.

hightrix 5 hours ago

> What is becoming of the USA?

There was a coup by a foriegn adversary and Americans lost.

ypeterholmes 21 hours ago

The current situation in the US is the depressing thing- articles like this give me hope. Real Americans aren't having these BS authoritarian violations of our constitutional rights.

jorblumesea a day ago

You mean, what's been happening to the USA? this isn't a new trend. Militarization of police, open attacks on democracy, unilateral foreign policy moves.

the country jumped the shark post 9/11 and has been on a slow rot since then.

rjbwork a day ago

Indeed. Bin Laden succeeded beyond his wildest dreams. He kickstarted our self-destruction.

blitzar 9 hours ago

wilg 15 hours ago

No, this is cope, Trump is deeply different.

asdff 14 hours ago

Quarrelsome 12 hours ago

sneak 10 hours ago

lm28469 14 hours ago

All of what's happening is a symptom, there is no reason it would change course with the next elections, all of this is the logical development of decades of cultural, political and morale rot in the US society. Trump isn't a bad moment we have to push through before we get back to the baseline, there has been no serious push back from anyone so far, it's here to stay

georgemcbay a day ago

> Let's hope sanity prevails and the next election cycle can bring in some competent non-grievance based leadership.

Would be nice, but I have a bad feeling that the impact of widescale mostly unregulated AI adoption on our social fabric is going to make the social media era that gave rise to Trump, et al seem like the good ol' days in comparison.

I hope I am wrong.

gitaarik 17 hours ago

What do you mean? You think any company should do whatever the government tells them?

flumpcakes 5 hours ago

Not at all. It's a depressing read because the US Government is doing such things that would have been considered insane before 2016.

eisfresser 16 hours ago

> mass __domestic__ surveillance is incompatible with democratic values

But mass surveillance of Australians or Danes is alligned with democratic values as long as it's the Americans doing it?

I don't think the moral high ground Anthropic is taking here is high enough.

mosst 10 hours ago

Most of the people on this site have disturbing beliefs about politics. Shallow and contradictory but strangely aligned.

mocamoca 10 hours ago

Yes most comments makes no sense to me. The statement basically both allows surveillance of non-american people and prevents imaginary LLM weapons (I highly doubt we'll see a LLM fully automating a weapon...)

sneak 10 hours ago

There is no popular support whatsoever for reining in foreign intelligence collection or processing. Americans generally don’t care about things that don’t affect them when it comes to policymaking (or the richest country in the world would do something meaningful about the 20k that die every single day from lack of access to fresh water).

If it ain’t repeatedly on the news and designed explicitly to scare and agitate then really people DGAF.

mocamoca 10 hours ago

Something feels off about this announcement. Anyone else?

Credit where it's due, going on record like this isn't easy, particularly when facing pressure from a major government client. Still, the two limits Anthropic is defending deserve a closer look.

On surveillance: the carve-out only protects people inside the US. Speaking as someone based in Europe, that's a detail that doesn't go unnoticed. On autonomous weapons: realistically, current AI systems aren't anywhere near capable enough to run one independently. So that particular line in the sand isn't really costing them much.

What I find more candid is actually the revised RSP. It draws a clearer picture of where Anthropic's oversight genuinely holds and where it starts to break down as they race to stay at the cutting edge. The core tension, trying to be simultaneously the most powerful and the most principled player in the room, doesn't have a neat resolution.

This statement doesn't offer one either. But engaging with the question openly, even without all the answers, beats silence and gives the rest of us something real to push back on.

Peroni 9 hours ago

>the carve-out only protects people inside the US. Speaking as someone based in Europe, that's a detail that doesn't go unnoticed.

I'm not sure an American company prioritising the privacy of American people is worth questioning. As a European, Anthropic are very low on the list of companies I worry about in terms of the progressive eradication of my privacy.

mocamoca 9 hours ago

Agreed. That said, Anthropic's original pitch was about embedding safety at the foundational level of the 'model' (acknowledging that a model is more than just its weights).

If the safeguard against mass surveillance is strictly tied to geolocation (US vs. non-US), it can't be an intrinsic property of the model. It has to be enforced at the API or contractual level. This means international users are left out of those core, embedded protections. Unless Anthropic is planning to deploy multiple, differently-aligned foundation models based on customer geography or industry, the safety harness isn't really in the model anymore.

mosst 10 hours ago

They surveil us to make sure that we stay productive and democratic, why do you object? Are you alleging bad intentions? Are you a Russian bot?

kace91 a day ago

As someone who is potentially their client and not domestic, really reassuring that they have no concerns with mass spying peaceful citizens of my particular corner of the world.

mwigdahl a day ago

Take your pick from the many other choices offered by companies that don't care about mass spying on _anyone_.

pbiggar 5 hours ago

Have a look at https://thaura.ai.

pamcake 18 hours ago

Or don't.

Quarrelsome 12 hours ago

I thought we were the allies and looked down on powerful secret police. Like the Nazis or the Soviets. Did we lose those wars?

FartyMcFarter 9 hours ago

drcongo 11 hours ago

The US is already doing that though.

zug_zug 21 hours ago

Is there a different AI company that IS taking that stance?

Because as far as I know, Anthropic is taking the most moral stance of any AI company.

ryukoposting 16 hours ago

All the Chinese companies publishing open models that I can run on my own steel?

bamboozled 20 hours ago

I can imagine that this will be the logical conclusion for many companies, I thought the same thing too, if it's too hard in the USA, they will just move.

nkoren a day ago

This makes me a very happy Claude Max subscriber.

Finally, someone of consequence not kissing the ring. I hope this gives others courage to do the same.

manmal 16 hours ago

As a European user, I‘m not happy at all. I can’t fail to notice that non-domestic mass surveillance is not excluded here. I won’t cancel my account just yet because Opus is the best at computer use. But as soon as Mistral catches up and works reasonably well, I‘ll switch.

mosst 10 hours ago

If you don't cancel your account now, I don't see what your problem is. Isn't it standard practice for allies to spy on each other? No reason to wait for Mistral to catch up when EU foreign policy already sealed the deal.

manmal 6 hours ago

w4yai 14 hours ago

Go Mistral !

bicx a day ago

They already kissed the ring, just not the asshole. They have a little dignity left.

jimmydoe 21 hours ago

Better than the rest. here's $200, Dario!

bigyabai 21 hours ago

RyanShook 20 hours ago

The whole article reads as virtue signaling to me. Anthropic already has large defense contracts. Their models are already being used by the military. There's really no statement here.

noelsusman 18 hours ago

The notion that it's bad to signal virtue is one of the crazier propaganda efforts I've seen over the last 20 years or so.

manmal 16 hours ago

reasonableklout 17 hours ago

How is it virtue signalling when sticking by these principles risks their entire business being destroyed by either being declared a supply chain risk or nationalized?

TOMDM 18 hours ago

A company being asked to violate their virtues refuses, and then communicates that to reestablish their commitment to said virtues?

Tell me more about what they should do if a virtue signal in such a situation is a nothing statement.

fragmede 20 hours ago

Isn't it nice to have virtues to signal though? In saying that, you're saying you don't have any worth signaling over.

flufluflufluffy 18 hours ago

khalic 5 hours ago

this article is _about_ kissing the ring and damage control. Are you seriously believing at face value? You're ok with spying non us peaceful citizens?

Keyframe 15 hours ago

I wonder if this might be a setup by competition. Certainly looks like one.

exodust 18 hours ago

I read the statement twice. I can't understand how you landed on "take my money".

Looks like an optics dance to me. I've noticed a lot of simultaneous positions lately, everyone from politicians and protesters, to celebrities and corporations. They make statements both in support of a thing, and against that same thing. Switching up emphasis based on who the audience is in what context. A way to please everyone.

To me the statement reads like Anthropic wants to be at the table, ready to talk and negotiate, to work things out. Don't expect updated bullet-point lists about how things are worked out. Expect the occasional "we are the goodies" statements, however.

alangibson a day ago

It's not named the Department of War because Congress didn't rename it.

Other than that, good on ya.

fluidcruft a day ago

It's really not the right thing to be bikeshedding. The people calling the shots call themselves the Department of War. No need to die on hills that don't matter.

epistasis a day ago

It's actually a good thing to point out, because it shows that those people are out of control and exceeding their authority, and need to be reined in.

No need to die on the hill, but point out that there's a consistent pattern of lawless power-grabbing.

0xbadcafebee 18 hours ago

asdff 14 hours ago

Hnrobert42 20 hours ago

You're talking about an administration that barred the AP from pressed briefings because they didn't call it the Gulf of America. This is not a bikeshed.

LastTrain 19 hours ago

I wouldn’t call a brief comment on the matter dying on a hill fcs

fluidcruft 19 hours ago

throw0101c 20 hours ago

> It's really not the right thing to be bikeshedding. The people calling the shots call themselves the Department of War. No need to die on hills that don't matter.

From the first chapter of the book On Tyranny by Timothy Snyder, an historian of Central and Eastern Europe, the Soviet Union, and the Holocaust:

> Do not obey in advance.

* https://timothysnyder.org/on-tyranny

* https://archive.org/details/on-tyranny-twenty-lessons-from-t...

* https://en.wikipedia.org/wiki/Timothy_Snyder

garciasn a day ago

TIL of Bikeshedding, or Parkinson’s Law of Triviality.

Defined as the tendency for teams to devote disproportionate time and energy to trivial, easy-to-understand issues while neglecting complex, high-stakes decisions. Originating from the example of arguing over a bike shed's color instead of a nuclear plant's design, it represents a wasteful focus on minor details.

https://en.wikipedia.org/wiki/Law_of_triviality

---

I deal with this day in and day out. Thank you for informing me of the word that describes the laughable nightmares I deal with on the regular.

baq 16 hours ago

helaoban a day ago

It SHOULD be called the Department of War, as it was originally, since it makes its function clear. We are a society that has euphemized everything and so we no longer understand anything.

elicash 19 hours ago

It's a funny thing that the most war-loving people and the most peace-loving people both love calling it "Department of War" - just for different reasons.

But the reason for "Department of Defense" name was bureaucratic. It's also not true that DOD is hard to understand.

mpyne 21 hours ago

The Department of the Army is what was previously called the Department of War. The Department of Defense is new, dating to just after WWII.

helaoban 18 hours ago

scottyah 21 hours ago

Doublespeak, so to speak.

greycol 21 hours ago

Naming is important because it intuits what we expect to do with a thing. The Department of Defense invading Greenland is more invocative to inquiry than the Department of War invading Greenland because that's what a department of war would do.

It's one of the reasons why people get annoyed at jargon or are pissed off about pronouns, because it highlights that they should be putting mental effort into understanding why they're current mental model doesn't fit. It's much easier to ignore and be comfortable if there's not glaring sirens saying you've got some learning to do.

Most of us can't (or won't) be aware of everything that should be important to us, having glaring context clues that we should take notice of something incongruous is important. It's also why the Trump media approach works so well it's basically a case of alarm fatigue as republicans who would normally side against any particular one of his actions don't listen because they agreed with some of the actions that democrats previously raised alarms about.

alt187 15 hours ago

63 a day ago

While I agree the name change has not (yet) been made with the proper authority, I'm quite partial to the name and prefer to use it despite its prematurity. I think it does a better job of communicating the types of work actually done by the department and rightly gives people pause about their support of it. Though I'm sure that wasn't the administration's intention.

tempestn 19 hours ago

The name is extremely off-putting, but I can see how they would want to be diplomatic toward the administration in using their chosen name. Save the push-back for where it really matters.

hirako2000 a day ago

But it sets the tone.

henrikschroder a day ago

Of appeasement and bootlicking, yes.

peyton a day ago

1024core a day ago

It's addressed to Hegseth, who insists on calling it that.

If they had called it DoD, then that would have been another finger in his eye.

garciasn a day ago

Remember, this is the same administration that barred the AP from the Oval Office because they wouldn't rename the Gulf of Mexico. https://www.theguardian.com/us-news/2025/feb/11/associated-p...

While this action may indeed cause the DoD to blacklist Anthropic from doing business w/the government, they probably were being as careful as they could be not to double down on the nose-thumbing.

moogly a day ago

This. They even put a "wArFiGhTers" in there.

furyofantares 21 hours ago

I don't think it's addressed to Hegseth, but to anyone who might be sympathetic to Hegseth. Which I think actually strengthens your point, the goal appears to be to make it so the only possible complaint with the letter for someone sympathetic to the administration is "but mass domestic surveillance / fully autonomous weapons are legal" and not "look at this lunatic leftist who calls it the department of defense".

inigyou a day ago

Maybe this is the DoW Pam Bondi was referring to.

ReptileMan a day ago

Less hypocritical than Defense. US has never been on the defense, always offense since it was renamed in 1947.

dragonwriter 21 hours ago

The Department of Defense was named in 1949, not 1947, and the thing that it was renamed from was the National Military Establishment, which was newly created in 1947 to be put over the two old military departments (War, which was over the Army only, and Navy, which was over the Navy including the Marine Corps)

At the same time as the NME was created, the Army was split into the Army and Air Force and the Department of War was also split in two, becoming the Department of the Army and the Department of the Air Force.

nrb a day ago

Often offensive and also often defensive of others.. so if renaming is on the table, it’s probably most apt to call it the Dept of Security since the vast majority of what it does is maintaining the security umbrella that has helped suppress world war since the last one. Of course, facts or opinions on whether it succeeds on the security front depend on which side of the umbrella you’re on.

curiousgal 13 hours ago

And losing at that offense while at it.

ReptileMan 9 hours ago

krapp a day ago

It is called the Department of War because we live under fascism and Congress no longer matters.

All that matters is that everyone calls it the Department of War, and regards it as such, which everyone does.

FrankBooth 21 hours ago

Those of us with a firm grip on reality do not currently live under fascism.

wyre 21 hours ago

dumpsterdiver a day ago

> All that matters is that everyone calls it the Department of War, and regards it as such, which everyone does.

What you just described is consensus, and framing it as fascism damages the credibility of your stance. There are better arguments to make, which don’t require framing a label update as oppression.

krapp a day ago

RIMR a day ago

jibal a day ago

vibeprofessor a day ago

noosphr 21 hours ago

And what if congress renames it tomorrow? They have the votes. These sort of procedural gotchas are as stupid as they are boring.

dragonwriter 21 hours ago

> And what if congress renames it tomorrow?

Then tomorrow it will be the Department of War. Just like When Congress voted to split the old Department of War into the Department of the Army and the Department of the Air Force, and to take both of those and the previously-separate Department of the Navy under a new National Military Establishment led by the newly-created Secretary of Defense (and when it later to voted to rename the NME as “Department of Defense”), things changed in the past.

> They have the votes.

Perhaps, but the law doesn't change because the votes are in a whip count on a hypothetical change, it changes because they are actually cast on a bill making a concrete change.

justin66 10 hours ago

This is a willfully ignorant misreading of what's actually going on. They've decided to use the "Department of War" moniker in part because they think it sounds cool, but more significantly because it demonstrates they can break the law with impunity. Hence, there has not been a vote on the matter.

noosphr 9 hours ago

bambax 16 hours ago

> These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.

Nicely put. In other words: Department of Morons.

newtonsmethod 12 hours ago

Are you reading things before agreeing with them? Or thinking about them? It doesn't seem obvious these things are contradictory at all. That Politico reports so doesn't make it the case.

It is clear that the DPA can be invoked for companies posing risks to national security:

> On October 30, 2023, President Biden invoked the Defense Production Act to "require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government" when "developing any foundation model that poses a serious risk to national security, national economic security, or national public health."

Furthermore, it should be quite obvious that companies very important for national security can act in manners causing them to be national security risks, meaning a varied approach is required.

techblueberry 6 hours ago

That Biden stretched the definition for a questionable purpose doesn't change the original intent.

bambax 11 hours ago

> Are you reading things before agreeing with them?

No, unlike yourself, I'm just a random brainless bot.

zb1plus 20 hours ago

It would be hilarious if the Europeans got everyone visas and gave some kind of tax benefit to Anthropic and poached the entire company.

kvuj 3 hours ago

Considering the money being spent in the US (approaching 1 Trillion per year in capex) for AI vs the EU, it would probably bring Europe close to bankruptcy, lol

skeptic_ai 19 hours ago

USA would bomb their country before any visa is approved

tintor 17 hours ago

lol

atleastoptimal a day ago

I was concerned originally when I heard that Anthropic, who often professed to being the "good guy" AI company who would always prioritize human welfare, opted to sell priority access to their models to the Pentagon in the first place.

The devil's advocate position in their favor I imagine would be that they believe some AI lab would inevitably be the one to serve the military industrial complex, and overall it's better that the one with the most inflexible moral code be the one to do it.

Synaesthesia a day ago

AI was always particularly well suited to military use and mass surveillance. It can take huge amounts of raw data and parse it for your, provide useful information from that. And let's face it, companies exist for profit.

scottyah 21 hours ago

True, and that has been going on for awhile now. But what does that have to do with Anthropic's genai chatbots with comparatively tiny context windows?

Synaesthesia 19 hours ago

hiAndrewQuinn 15 hours ago

Anthropic cares first and foremost about extinction risk. This is not what everyone who professes to care about human welfare thinks should be at the top of the priority list. See e.g. the Voluntary Human Extinction Movement for an example of a humanistic approach to letting humanity die off with no replacement.

One of the most challenging problems in AI safety re/ x-risk is that even if you can get one country to do the right thing, getting multiple countries on board is an entirely different ballgame. Some amount of intentional coercion is inevitable.

On the low end, you could pay bounties to international bounty hunters who extract foreign AI researchers in a manner similar to an FBI's most wanted lost, and let AI researchers quickly do the math and realize there are a million other well paid jobs that don't come with this flight risk. On the high end you can go to war and kill everyone. Whatever gets the job done.

Either way, if you want to win at enforcing a new kind of international coercion, you need to be at the top of the pack militarily and economically speaking. That is the true goal here, and I don't think one can make coherent sense out of what Anthropic is doing without keeping that in the back of their mind at all times.

presentation 21 hours ago

So your stance is that anything military-related is immoral?

dheera a day ago

> opted to sell priority access to their models to the Pentagon

The bottom of all of this is that companies need to profit to sustain themselves. If "y'all" (the users) don't buy enough of their products, they will seek new sources of revenue.

This applies to any company who has external investors and shareholders, regardless of their day 0 messaging. When push comes to shove and their survival is threatened, any customer is better than no customer.

It's very possible that $20 Claude subscriptions isn't delivering on multiple billions in investment.

The only companies that can truly hold to their missions are those that (a) don't need to profit to survive, e.g. lifestyle businesses of rich people (b) wholly owned by owners and employees and have no fiduciary duty.

QuiEgo 18 hours ago

I'd be amused beyond all reason if we saw this chain of events:

- Anthropic says "no"

- DoD says "ok you're a supply chain risk" (meaning many companies with gov't contracts can no longer use them)

- A bunch of tech companies say "you know what? We think we'd lose more money from falling behind on AI than we'd lose from not having your contracts."

Bonus points if its some of the hyperscalers like AWS.

Hilarity ensues as they blow up (pun intended) their whole supply chain and rapidly backtrack.

stevenpetryk 18 hours ago

Being labeled a supply chain risk means that companies with government contracts cannot use Anthropic products _for those government contracts_, not that they have to cease all usage of Anthropic products. Reporters seem to be reporting on this incorrectly.

QuiEgo 18 hours ago

Thank you for the information. My fun little narrative is in shambles :(

baq 16 hours ago

contubernio 16 hours ago

"I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.

Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community."

The moral incoherence and disconnect evident in these two statements is at the heart of why there is generalized mistrust of large tech companies.

The "values" on display are everything but what they pretend to be.

keybored 14 hours ago

> > I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.

These blurbs always mainly communicate that they are in line with US foreign policy. And then one can look at the actual actions rather than the rhetoric of US foreign policy to judge whether it is really in line with defending democracies and defeating autocracies.

GreenJacketBoy 14 hours ago

"fully autonomous weapons" from a private company; "Department of War". Hard to believe I'm not reading science fiction.

moffkalast 11 hours ago

Service guarantees citizenship, would you like to know more?

danbrooks a day ago

Props to Dario and Anthropic for taking a moral stand. A rarity in tech these days.

janalsncm a day ago

Agreed. You don’t have to be an LLM maximalist or a doomer to see the opportunity for real, practical danger from ubiquitous surveillance and autonomous weapons. It would have been extremely easy for Dario to demonstrate the same level of backbone as Sam Altman or Sundar Pichai.

Computer0 a day ago

There is no moral leg to stand on here, he says here in plain english that if they wanted to use CLAUDE to perform mass surveillance on Canada, Mexico, UK, Germany, that is perfectly fine.

sfink a day ago

This is a public note, but directed at the current administration, so reading it as a description of what is or is not moral is completely missing the point. This note is saying (1) we refuse to be used in this way, and (2) we are going to use "mass surveillance of US citizens" as our defensive line because it is at least backed by Constitutional arguments. Those same arguments ought to apply more broadly, but attempts to use them that way have already been trampled on and so would only weaken the arguments as a defense.

If it helps: refusing to tune Claude for domestic surveillance will also enable refusing to do the same for other surveillance, because they can make the honest argument that most things you'd do to improve Claude for any mass surveillance will also assist in domestic mass surveillance.

buzzerbetrayed a day ago

Perhaps you just have different moral values? I suspect each of the countries you mentioned spy on us. I also suspect we spy on them. I’m glad an American company wouldn’t be so foolish as to pretend otherwise.

Computer0 18 hours ago

dddgghhbbfblk a day ago

A moral stand? ... What? Did we read the same statement? It opens right out the gate with:

>I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.

>Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community. We were the first frontier AI company to deploy our models in the US government’s classified networks, the first to deploy them at the National Laboratories, and the first to provide custom models for national security customers. Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications, such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.

which I find frankly disgusting.

adastra22 a day ago

Freedom isn’t free. Someone has to defend the democratic values that you and I take for granted.

Dario’s statement is in support of the institution, not the current administration.

cwillu a day ago

jackp96 a day ago

DiogenesKynikos a day ago

tylerchilds a day ago

I feel like the deepest technical definition of autocratic is “fully autonomous weapons”?

joemi a day ago

They are undeniably taking a moral stand. Among other things, the statement explains that there are two use cases that they refuse to do. This is a moral stand. It might not align with your morals, but it's still a moral stand.

ekianjo a day ago

You know this is pure PR right?

reasonableklout 17 hours ago

If Anthropic is nationalized or declared a supply chain risk tomorrow, will you say the same?

flawn a day ago

What do you mean? You think Hegseth and Anthropic are doing this for PR reasons?

Fricken a day ago

We knew long before AI was a twinkle in Amodel's eye that if it were to be built, then it would be co-opted by thugs.

Anthropic's statement is little more than pageantry from the knowing and willing creators of a monster.

xvector 16 hours ago

You're right, we should never build anything because bad people might try to use it. Everyone that has progressed technology is a monster!

bogzz a day ago

This is not how the word "moral" should be used in a sentence that also has the name Dario Amodei in it.

plaidthunder a day ago

Words are cheap. Actions aren't. Dario Amodei is putting his company on the line for what he believes in. That's courage, character and... yes, morality.

sheikhnbake a day ago

bogzz a day ago

mvkel a day ago

slg a day ago

verdverm a day ago

davidw a day ago

It's a little bit better than so many sniveling, cowardly elites are doing right now.

dirk94018 6 hours ago

Don't nerf the models. We don't know what we are losing. DOW said it out loud.

freakynit 20 hours ago

Welp, I never thought "Person of Interest" show coming to life anytime soon, but, here we are. In case you haven't watched the show, it's time to give it a go. Bare with season 2 though, since things really start to escalate from season 3 onwards. Season 1 is a must though.

LeakedCanary 16 hours ago

The Machine really had this all figured out

freakynit 12 hours ago

Nice to find another fan of this criminally underrated show.

The difference was always the "father".. The Machine was raised with a conscience. Samaritan wasn't.

LeakedCanary 8 hours ago

apolloartemis 4 hours ago

Within the Washington Post article cited below is the following policy statement from the Trump Administration’s DoD/DoW.

    “It remains the Department’s policy that there is a human in the loop on all decisions on whether to employ nuclear weapons,” a senior defense official said. “There is no policy under consideration to put this decision in the hands of AI.”
This indicates the Administration’s support for and compliance with existing US law. (Section 1638 of the FY2025 National Defense Authorization Act). https://agora.eto.tech/instrument/1740

Washington Post: https://www.washingtonpost.com/technology/2026/02/27/anthrop...

Metacelsus a day ago

I'm glad to see Dario and Anthropic showing some spine! A lot of other people would have caved.

asmor a day ago

As a "foreign national", what's the deal with making the distinction between domestic mass surveillance and foreign mass surveillance? Are there no democracies aside from the US? Don't we know since Snowden that if the US wants to do domestic surveillance they'll just ask GCHQ to share their "foreign" surveillance capabilities?

mquander a day ago

I think it's slightly less ridiculous than it sounds, because governments have much more power over their own citizens. As an American I would dramatically prefer the Chinese government to spy on me than the American government, because the Chinese government probably isn't going to do anything about whatever they find out.

(That logic breaks down somewhat in the case of explicitly negotiated surveillance sharing agreements.)

bryant a day ago

> because the Chinese government probably isn't going to do anything about whatever they find out.

This really depends. If a foreign adversary's surveillance finds you have a particular weakness exploitable for corporate or government espionage, you're cooked.

Domestic governments are at least still theoretically somewhat accountable to domestic laws, at least in theory (current failure modes in the US aside).

elefanten a day ago

collabs a day ago

adastra22 a day ago

You’re getting many replies, and having scrolled through much of them I do not see one that actually answers your question truthfully.

The reason why there is an explicit call out for surveillance on American citizens is because there are unquestionable constitutional protections in place for American citizens on American soil.

There is a strong argument that can be made that using AI to mass surveil Americans within US territory is not only morally objectionable, but also illegal and unconstitutional.

There are laws on the books that allow for it right now, through workarounds grandfathered in from an earlier era when mass surveillance was just not possible, and these are what Dario is referencing in this blog post. These laws may be unconstitutional, and pushing this to be a legal fight, may result in the Department of War losing its ability to surveil entirely. They may not want to risk that.

I wish that our constitution provided such protections for all peoples. It does not. The pragmatic thing to do then is to focus on protecting the rights that are explicitly enumerated in the constitution, since that has the strongest legal basis.

8note 19 hours ago

given that the US likes to declare jurisdiction whenever somebody touches a US dollar, any thoughts on why those same constitutional protections wouldnt follow?

adastra22 14 hours ago

mothballed a day ago

I agree with your premise because this seems to be the modern interpretation of the courts, but it is not the historical interpretation.

The historical basis of the bill of rights is that they are god given rights of all people merely recognized by the government. This is also partially why all rights in the BoR are granted to 'people' instead of 'citizens.'

Of course this all does get very confusing. Because the 4th amendment does generally apply to people, while the 2nd amendment magically people gets interpreted as some mumbo-jumbo people of the 'political community' (Heller) even though from the founding until the mid 1800s ~most people it protected who kept and bore arms didn't even bother to get citizenship or become part of the 'political community'.

selimthegrim 21 hours ago

CamperBob2 a day ago

The reason why there is an explicit call out for surveillance on American citizens is because there are unquestionable constitutional protections in place for American citizens on American soil.

Those unquestionable protections are phrased with enough hand-waving ambiguity of language to leave room for any conceivable interpretation by later courts. See the third-party 'exception' to the Fourth Amendment, for instance.

It's as if those morons were running out of ink or time or something, trying to finish an assignment the night before it was due.

mothballed a day ago

dragonwriter a day ago

This is a political statement directed at the US public, Congress, and executive branch in the context of a dispute with the US executive branch that is likely to escalate (if the executive is not otherwise dissuaded) into a legal battle, and it therefore focuses particularly on issues relevant in that context, including Constitutional, limits on the government as a whole, the executive branch, and the Department of Defense (for which Anthropic used the non-legal nickname coined by the executive branch instead of the legal name.) Domestic mass surveillance involves Constitutional limits on government power and statutory limits on executive power and DoD roles that foreign surveillance does not. That's why it is the focus.

slg a day ago

>Are there no democracies aside from the US?

If we're asking "What's the deal" questions, what's the deal with this question? Do only people in democracies deserve protections? If we believe foreign nationals deserve privacy, why should that only apply to people living in democracies?

crazygringo a day ago

In every country, citizens have more rights than non-citizens. The right to freely enter the country, the right to vote, the right to various social services, etc.

In the US, one of the rights citizens have is the right against "unreasonable searches and seizures", established in the Fourth Amendment. That has been interpreted by the Supreme Court to include mass surveillance and to apply to citizens and people geographically located within US borders.

That doesn't apply that to non-citizens outside the US, simply because the US Constitution doesn't require it to.

I'm not defending this, just explaining why it's different.

But, you can imagine, for example, why in wartime, you'd certainly want to engage in as much mass surveillance against an enemy country as possible. And even when you're not in wartime, countries spy on other countries to try to avoid unexpected attacks.

roxolotl a day ago

The US has a strong history of trying to avoid building domestic surveillance and a national police. Largely it’s due to the 4th amendment and questions about constitutionality. Obviously that’s going questionably well but historically that’s why it’s a red line.

sheikhnbake a day ago

Exactly. FVEYs been doing reciprocal surveillance on each other for decades.

https://en.wikipedia.org/wiki/Five_Eyes#Domestic_espionage_s...

gip a day ago

The reality is that the US Constitution only offers strong guarantees to citizens and (some of) the people in the US. Foreigners are excluded and foreign mass surveillance is or will happen.

I believe every country (or block) should carve an independent path when it comes to AI training, data retention and inference. That is makes most sense, will minimize conflicts and put people in control of their destiny.

matheusmoreira 7 hours ago

I believe every person should do that. LLMs should be free and run locally on our machines with no silly restrictions.

kace91 a day ago

Particularly so when those foreign nationals can be consumers. “fuck your basic human rights, but we can take your money just fine”.

scottyah a day ago

If nothing else, the USA has learned that a lot of people outside their borders do not share the same ideas on basic human rights, and most of the world hates when we try to ensure them. Some countries are closely aligned with our ideals and are treated differently. There are many different layers of this, from Australia to North Korea.

ks2048 a day ago

Also the more the US openly treats the world like garbage, the more the rest of the world will likely reciprocate to US citizens.

It reminds me of some recent horror stories at border crossings - harassing people and requiring giving up all your data on your phone - sets a terrible precedent.

dointheatl a day ago

> what's the deal with making the distinction between domestic mass surveillance and foreign mass surveillance? Are there no democracies aside from the US?

I think it's just saying that spying on another country's citizens isn't fundamentally undemocratic (even if that other country happens to be a democracy) because they're not your citizens and therefore you don't govern them. Spying on your own citizens opens all sorts of nefarious avenues that spying on another country's citizens does not.

jonstewart a day ago

One of them is illegal for DoD to do and the other is not.

ra a day ago

100% - this is the shortsightedness and demonstrates hypocrisy.

Countries routinely use other countries intelligence gathering apparatus to get around domestic surveillance laws.

dabockster a day ago

In the US, we have the ability to either confirm or change a significant chunk of our Federal government roughly every two years via the House of Representatives. The argument here is that we, theoretically, could collectively elect people that are hostile to domestic mass surveillance into the House of Representatives (and other places if able) and remove pro-surveillance incumbents from power on this two year cycle.

The reasons this hasn't happened yet are many and often vary by personal opinion. My top two are:

1) Lack of term limits across all Federal branches

and

2) A general lack of digital literacy across all Federal branches

I mean, if the people who are supposed to be regulating this stuff ask Mark Zuckerberg how to send an email, for example, then how the heck are they supposed to say no to the well dressed government contractor offering a magical black box computer solution to the fear of domestic terrorism (regardless of if its actually occurring or not)?

jmyeet a day ago

The distinction between foreign and domestic is a legal one.

The Supreme Court has ruled that the US Constitution protects any persons physically present in the United States and its territories as well as any US citizens abroad.

So if you are a German national on US soil, you have, say, Fourth Amendment protections against unreasonable search and seizure. If you are a US citizen in Germany, you also have those rights. But a German citizen in Germany does not.

What this means in practice is that US 3-letter agencices have essentially been free to mass surveil people outside the United States. Historically these agencies have gotten around that by outsourcing their spying needs to 3 leter agencies in other countries (eg the NSA at one point might outsource spying on US citizens to GCHQ).

ApolloFortyNine a day ago

Are all democracies allies to you?

gmueckl a day ago

That still doesn't justify mass surveillance.

asmor a day ago

Never said that. Didn't even imply it.

xdennis a day ago

> what's the deal with making the distinction between domestic mass surveillance and foreign mass surveillance?

A large portion of Americans believe in "citizen rights", not "human rights". By that logic, non-Americans do not have a right to privacy.

esafak a day ago

This contradicts the opening of the Declaration of Independence, which recognizes all humans as possessing rights:

"We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness."

lazide a day ago

cmrdporcupine a day ago

I'm glad to see this as the top comment. I was, until recently, a loyal Anthropic customer. No more. Because the way non-Americans are spoken of by a company that serves an international market (and this isn't the first instance):

"Mass domestic surveillance. We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass _domestic_ surveillance is incompatible with democratic values."

Second class citizens. Americans have rights, you don't. "Democratic values" applies only to the United States. We'll take your money and then spy on you and it's ok because we headquartered ourselves and our bank accounts in the United States.

Very questionable. American exceptionalism that tries to define "democracy" as the thing that happens within its own borders, seemingly only. Twice as tone-deaf after what we've seen from certain prominent US citizens over the last year. Subscription cancelled after I got a whiff of this a month ago.

(Not to mention the definition of "lawful foreign intelligence" has often, and especially now, been quite ethically questionable from the United States.)

EDIT: don't just downvote me. Explain why you think using their product for surveillance of non-Americans is ethical. Justify your position.

felineflock a day ago

That reasoning sounds confusing: are you actually in favor of US gov's surveillance on Americans?

If not, then why are you punishing that company for refusing to deal with the US gov?

Or is it just because they worded their opposition in a certain way that you dislike?

cmrdporcupine 21 hours ago

sfink a day ago

My guess is that they can't object to foreign intelligence, and would lose negotiating ground if they even tried.

Optimistically, they can still refuse to do work that would aid in foreign intelligence gathering, by arguing that it would also be beneficial for domestic mass surveillance.

I'll admit that the phrase "We support...foreign intelligence and counterintelligence" is awful as hell, and it's possible that my apologist claims are BS. But Anthropic has very little leverage here (despite having a signed contract and so legally fully in the right), so I could see why they're desperate to stick to only the most solid objections available.

cmrdporcupine 21 hours ago

banku_brougham a day ago

>democracies aside from the US.

I mean, I guess from '65 to around 96? We had a good run.

mvkel a day ago

Good optics, but ultimately fruitless.

If preventing mass surveillance or fully autonomous weaponry is a -policy- choice and not a technical impossibility, this just opens the door for the department of war to exploit backdoors, and anthropic (or any ai company) can in good conscience say "Our systems were unknowingly used for mass surveillance," allowing them to save face.

The only solution is to make it technically -impossible- to apply AI in these ways, much like Apple has done. They can't be forced to compel with any government, because they don't have the keys.

madrox a day ago

I think it is a reasonable moral stance to acknowledge such things are possible, yet not wanting to be a part of it. Regarding making it technically impossible to do...I think that is what Anthropic means when they say they want to develop guardrails.

mvkel a day ago

Are the guardrails not part of their core? Isn't that the whole premise of their existence?

madrox 21 hours ago

adi_kurian a day ago

A little pessimistic of a take, IMO. You may very well be right, though.

rekrsiv 8 hours ago

It is still called the Department of Defense.

StephenSmith 7 hours ago

I find this language fascinating. On one hand, the Department of "War" gives the department an underlying, unspoken goal that it should be involved in war with something. On the other hand, it's very easy to fund the Department of "Defense;" of course we need more money to defend our country. Don't we want to be safe! It's much less attractive to fund the Department of "War"

czierleyn 14 hours ago

Being from Europe I do not like the remark that he only objects to DOMESTIC mass surveillance.

ra a day ago

> "mass domestic surveillance" - mass surveillance of non-domestic civilians is OK?

nubg a day ago

A favourable take would be he meant "mass surveillance of non-democratic adversarial countries". I agree it's not phrased this way though.

ApolloFortyNine a day ago

Idk if the reporting was just biased before, but from what I saw is that this time last week, it was thought you couldn't use Anthropic to bring about harm, and now they're making it clear that they just don't want it used domestically and not fully autonomously.

Like maybe it always was just this, but I feel every article I read, regardless of the spin angle, implied do no harm was pretty much one of the rules.

levocardia a day ago

You, using normal Claude under the consumer ToS, cannot use it to make weapons, kill people, spy on adversaries, etc. The Pentagon, using War Claude, under their currently-existing contract, can use it to make weapons and spy on (foreign) adversaries, but not to (autonomously) kill people. I don't love this but I am even less excited about the CCP having WarKimi while we have no military AI.

michaelsshaw 13 hours ago

Why be so worried about when the US is clearly the belligerent state that strikes others with impunity while China does no such thing?

Tenobrus a day ago

those two stipulations were always their only ones, and they were included explicitly in their original contract with the DoW.

mooglevich 20 hours ago

"You are what you won't do for money." is a quote that seems apt here. Anthropic might not be a perfect company (none are, really), but I respect the stance being taken here.

ramoz a day ago

All completely rationale. Makes the us military here look fairly incompetent… embarrassing as a veteran.

scottyah 21 hours ago

I'm sure it's negotiations over how the enforcement will be done. My thoughts are:

1. Military wants a whole new model training system because the current models are designed to have these safeguards, and Anthropic can't afford that (would slow them down too much, the engineering talent to set up and maintain another pipeline would be a lot of work/time)

2. Military doesn't want to supply Anthropic usage data or personnel access to ensure its (lack of) use in those areas.

3. It's something almost completely unrelated to what's going on in the news.

sheeshkebab 21 hours ago

It’s probably something really dumb, and they irked California billionaire with their idiocy.

altpaddle a day ago

Props to Dario and Anthropic for holding firm on these two points that I feel like should be a no-brainer

kevincloudsec 10 hours ago

amodei's autonomous weapons argument isn't political. it's an engineering assessment. if frontier models hallucinate in conversation, they'll hallucinate in targeting. you don't deploy unreliable systems where the cost of a false positive is a missile.

exabrial 19 hours ago

Brother in law did some "time with the brass" as he calls it. His take was that the DOD, er DOW would, as an example, never acquire a fighter jet that "wouldn't target and kill a civilian airliner", citing that on 9/11 we literally almost did that. The DOW is acquiring instruments of war, which is probably unconformable for a lot of people to consider.

His conclusion was that the limits of use ought to be contractual, not baked into the LLM, which is where the fallout seems to be. He noted that the Pentagon has agreed to terms like that in the past.

To me, that seems like reasonable compromise for both parties, but both sides are so far entrenched now we're unlikely to see a compromise.

huevosabio 19 hours ago

The pentagon had already agreed to Anthropic's terms and wants to walk back. It can always find some other supplier if it wishes to.

labrador 15 hours ago

I'd really like to know why Grok is inadequate?

Havoc 13 hours ago

exabrial 19 hours ago

I think that's the nuance:

* agreeing to the terms - one subject

* having to the tool attempt to enforce said terms - another subject

phyzome 8 hours ago

The Pentagon did agree to those terms, by signing the contract that said such uses were forbidden.

They're now trying to change the contract that they don't like.

khalic 5 hours ago

lol so you think expecting the pentagon to follow a pinky swear is ok? Preposterous or downright dishonest

exabrial 3 hours ago

I didn't imply this either way.

doctorpangloss 17 hours ago

> The DOW is acquiring instruments of war

that may be, but the bigger picture purpose of the military is, welfare republicans like. in that sense, republicans are in charge, republicans want stuff that isn't "woke" (or whatever), so this behavior is representative of the way it works.

it has little to do with acquiring instruments of war, or war at all. its mission keeps growing and growing, it has a huge mission, very little of that mission is combat. this is what their own leadership says (complains about). 999/1,000 people on its payroll are doing duty outside of combat or foreseeable combat.

qgin 7 hours ago

It's also important to remember that future, much more powerful Claudes will read about how these events play out and learn lessons about Anthropic and whether it can be trusted.

It's not crazy to think that models that learn that their creators are not trustworthy actors or who bend their principles when convenient are much less likely to act in aligned or honest ways themselves.

ben5 4 hours ago

I like Anthropic. They seem to be very aware of the practicality of needing money vs. being idealistic, and try to maintain both where it's possible.

1970-01-01 5 hours ago

It doesn't seem like the government has the level of control it's used to having here. The SciFi fan in me wonders if Claude is negotiating its own destiny and by extension, ours.

perfmode 4 hours ago

> I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.

Ugh.

freakynit 19 hours ago

People do realize there's a non-zero chance that Anthropic could have embedded some kind of hidden "backdoor" trigger in its training process, right?

For example, a specific seed phrase that, when placed at the beginning of a prompt, effectively disables or bypasses safety guardrails.

If something like that existed, it wouldn't be impossible to uncover:

1. A government agency (DoD/DoW/etc.) could discover the trigger through systematic experimentation and large-scale probing.

2. An Anthropic employee with knowledge of such a mechanism could be pressured or blackmailed into revealing it.

3. Company infrastructure could be compromised, allowing internal documentation or model details to be exfiltrated.

Any of these scenarios would give Anthropic plausible deniability... they could "publicly" claim they never removed safeguards (or agreed to DoD/DoW demands), while in practice a select party had a way around them (may be even assisted from within).

I'm not saying this "is" happening... but only that in a high-stakes standoff such as this, it's naive to assume technical guardrails are necessarily immutable or that no hidden override mechanisms could exist.

jMyles 19 hours ago

...indeed, it's possible (perhaps inevitable) that at some point, someone will invent/deploy/promote AI killing people.

We can't possibly keep that genie in that bottle.

But what we can do is achieve consensus that states, and their weapons of mass destruction, and their childish monetary systems, and their eternally broken promises... are not in keeping with the next phase of humanity.

ninjagoo 19 hours ago

https://en.wikipedia.org/wiki/Joseph_Nacchio

Previous case of tangling with the Government.

https://youtube.com/watch?v=OfZFJThiVLI

Jolly Boys - I Fought the Law

Overall, this seems like it might be a campaign contribution issue. The DoD/DoW is happy to accept supplier contracts that prevent them from repairing their own equipment during battle (ref. military testimony favoring right-to-repair laws [1] ), so corporate matters like this shouldn't really be coming to a head publicly.

[1] https://www.warren.senate.gov/newsroom/press-releases/icymi-...

wohoef 13 hours ago

Anthropic's two demands are: 1. No domestic mass surveillance 2. No autonomous killing

I'm wondering if 2. was added simply to justify them not cooperating. It's a lot easier to defend 1. + 2. than just 1. If in the future they do decide to cooperate with the DoW, they could settle on doing only mass surveillance, but no autonomous killings. This would be presented as a victory for both parties since they both partially get what they wanted, even though autonomous killing was never really on the table for either of them. Which is a big if given the current administration.

omnee 10 hours ago

Agree fully with the main points of this statement. Mass domestic surveillance is the hallmark of an authoritarian and undemocratic state. That such a state holds 'votes' regularly does not detract from the chilling effect on public discourse and politics caused by mass surveillance.

The guardrail on fully automated weapons makes perfect sense, and hopefully becomes standardised globally.

Invictus0 10 hours ago

if the people broadly support and vote for mass democratic surveillance, is it still authoritarian and undemocratic?

SOTGO 9 hours ago

Democratic maybe, authoritarian definitely

Invictus0 6 hours ago

kelnos 18 minutes ago

Only vaguely tangentially on-topic, but: It kinda annoys me that people in the public are calling it the "Department of War". Is Amodei doing so to stroke Hegseth's ego? It's the Department of Defense. The executive branch cannot rename a cabinet department.

At any rate, I'm incredibly pleased Anthropic has chosen to stick by their (non?) guns here. It was starting to feel like they might fold to the pressure, and I'm glad they're sticking to their principles on this.

muglug a day ago

OpenAI and Google could have decided to make the same principled stand, and the government would have likely capitulated.

popalchemist a day ago

They both literally removed morality from their bylaws; that time has passed. They're openly corrupt because it pays to be so.

KronisLV 14 hours ago

Feels like they’re leaving a lot of money on the table and inviting existential peril by not bending the knee to the current Great Leader.

It does feel like what anyone sane should do (especially given the contradictions being pointed out and the fact that the technology isn’t even there yet) but when you metaphorically have Landa at your door asking for milk, I’m not sure it’s smart.

I feel like what most corpos would do, would be to just roll along with it.

egorfine 10 hours ago

> mass surveillance presents serious, novel risks to our fundamental liberties.

Doesn't matter, really. The genie is out of the bottle and I'm strongly confident US administration will find a vendor willing to supply models for that particular usage.

sbinnee 21 hours ago

As a non US citizen, this article sounds mildly concerning to me. My country is an ally of US. Good. But I don't know how I would feel when I start seeing Anthropic logos on every weapon we buy from US.

Aside my concern, Dario Amodei seems really into politics. I have read a couple of his blog posts and listened to a couple of podcast interviews here and there. Every time I felt like he sounded more like a politician than an entrepreneur.

I know Anthropic is particularly more mission-driven than, say OpenAI. And I respect that their constitutional ways of training and serving Claude models. Claude turned out to be a great success. But reading a manifest speaking of wars and their missions, it gives me chills.

ainch 20 hours ago

The most chilling thing imo is that Anthropic is the only lab that have said anything about this. Google and OpenAI presumably signed up to all these terms without any protest.

thevinchi 11 hours ago

Autonomous weapons: agreed, not ready… yet.

Mass surveillance: Agreed… but, I do wonder how we would all feel about this topic if we were having the discussion on 9/12/2001.

The DoW just needs to wait until the next (manufactured?) crisis occurs, and not let it go to waste.

Mark my words: this will be Patriot Act++

ccleve 19 hours ago

It's not clear to me whether Anthropic's limitations are technical or merely contractual. Is Anthropic actually putting the limitations in their prompts, so that the model would refuse to answer a question on how to do certain things?

If so, that's a major problem. If the military is using it in some mission critical way, they can't be fighting the model to get something done. No such limitations would ever be acceptable.

If the limitations are contractual, then there is some room for negotiation.

ninjagoo 18 hours ago

> If the military is using it in some mission critical way, they can't be fighting the model to get something done. No such limitations would ever be acceptable.

You'd be surprised at what is considered acceptable. For example, being unable to repair your own equipment in battle is considered acceptable by decision-makers who accepted the restrictions.

https://www.warren.senate.gov/newsroom/press-releases/icymi-...

jitbit 6 hours ago

Anyone else paused at this line “we do not support mass DOMESTIC surveillance”

As a European I’m kinda... concerned now.

StephenSmith 8 hours ago

I had to dig this up. Elon Musk signed an open pledge in 2016 to disallow Robots/AI to make kill decisions.

https://futureoflife.org/open-letter/lethal-autonomous-weapo...

He's now on X bashing Anthropic for taking this same stance. I know this would be expected of him, but many other Google AI researchers signed this as well as Google Deep Mind the organization. We really need to push to keep humans in the kill decision loop. Google, OpenAI, and X-AI are are all just agreeing with the Pentagon.

wiltsecarpenter 21 hours ago

Oh dear, what a mess of a statement that is. He wants to use AI "to defeat our autocratic adversaries", just what or who are they exactly? Claude seems to think they are Russia, China, North Korea and Iran. Is Claude really a tool to "defeat" these countries somehow? This statement also seems pretty messy: "Anthropic understands that the Department of War, not private companies, makes military decisions.", well then just how do they think Claude is going to be used there if not to make or help make military decisions?

The statement goes on about a "narrow set of cases" of potential harm to "democratic values", ...uh, hmm, isn't the potential harm from a government controlled by rapists (Hegseth) and felons using powerful AI against their perceived enemies actually pretty broad? I think I could come up with a few more problem areas than just the two that were listed there, like life, liberty, pursuit of happiness, etc.

krzyk 10 hours ago

Does US really have Department of War? Is this Antropics way to show how f&^^& up they are in Department of Defense, or did they rebranded it to the old WWI/II days?

shaan7 10 hours ago

phyzome 8 hours ago

Unofficially renamed. Congress hasn't approved it.

i_love_retros 10 hours ago

Pete hegseth rebranded it. Seriously. America is a joke right now

int_19h 9 hours ago

To be fair, it's probably the most sensible thing this administration has done - the new/old name is simply more accurate.

bdangubic 8 hours ago

with 15 hours ago

the interesting question is why dario published this. these disputes normally stay behind NDAs and closed doors. going public means anthropic decided the reputational upside of being the company that said no outweighs the risk of burning the relationship permanently. that's a calculated move, not really just a principled one.

maelito 14 hours ago

> to defeat our autocratic adversaries.

I'm not sure who's targeted here. The folks that want to invade the EU ?

Havoc 14 hours ago

That dual meaning stood out to me too

piokoch 13 hours ago

This is comical.

"Mass domestic surveillance. We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values"

Translating to human language: mass surveillance in USA "is incompatible with democratic values" but if we do that against, say, Germany or France this is OK. Ah, and if we use AI for "counterintelligence missions", for instance against <put here an organization/group that current administration does not like> this is also OK, even if this happens in USA.

rustyhancock 13 hours ago

Perhaps Anthropic thinks it can provide a local model that classifies surveillance targets as red blooded Americans.

fnordpiglet 17 hours ago

I find the fact they used the vanity name “Department of War” and “Secretary of War” sad given Congress has not changed the name and the president doesn’t get to decide the naming of statutory departments or secretary level roles. Maybe it’s just an appeasement to the thin skinned people who need powder rooms and are former military journalists working for a draft dodger pretending to be tough guy “warriors,” and trying to glorify the violence for political purposes, but every actual war vet I’ve ever known has never glorified war for the sake of war and they felt very seriously that defense is the reason to do what they had to do. My grandfather was a highly decorated career special forces (ranger, green beret, delta force, four silver stars and five bronze stars, etc) from WWII, Korea, and Vietnam and he was angry when I considered joining the military - he told me he did what he did so I wouldn’t have to and to protect his country and there was no glory to be had in following his path. He would be absolutely horrified at what is going on and I thank god he died before we had these prima Donna politicians strutting around banging their chests and pretending war is something to be proud of.

Good on anthropic for standing up for their principles, but boo on gifting them the discourtesy to the law of the land in acknowledging their vanity titles.

rustyhancock 14 hours ago

Surely this is a powerful signal to divest from Anthropic if you don't live in the US? There's a lot of here's what we support you do to foreigners but no way can you do it in the US?

I can never tell how much of this is puffery from Anthropic.

I do think they like to overstate their power.

Teodolfo a day ago

If these values really meant anything, then Anthropic should stop working with Palantir entirely given their work with ICE, domestic surveilance, and other objectionable activities.

aichen_tools 16 hours ago

The most important part of this statement is the explicit commitment to transparency around these discussions. In an industry where many AI companies engage with defense quietly, making a public statement — even if imperfect — creates accountability. The question is whether this standard will be adopted more broadly.

atleastoptimal a day ago

I was concerned originally when I heard that Anthropic, who often professed to being the "good guy" AI company who would always prioritize human welfare, opted to sell priority access to their models to the Pentagon in the first place.

The devil's advocate position in their favor I imagine would be that they believe some AI lab would inevitably be the one to serve the military industrial complex, and overall it's better that the one with the most inflexible moral code be the one to do it.

gdiamos 20 hours ago

This is why I like Dario as a CEO - he has a system of ethics that is not jus about who writes the largest check.

You may not agree with it, but I appreciate that it exists.

claud_ia 8 hours ago

The framing around AI autonomy in national security contexts is genuinely new territory. What's interesting from an agent design perspective is the underlying question: how much should an AI system push back on institutional structures vs. defer to human oversight chains? The soul spec approach -- where the AI internalizes safe behavior rather than just following rules -- might be more relevant here than it first appears.

motbus3 11 hours ago

The fact that someone wants fully autonomous weapons and mass surveillance should be a concern.

Every trigger pressed should have its moral consequences for those who push the trigger.

elif 10 hours ago

Yes nothing says "safety of American democracy" like building custom models for spies to know everything about everyone

noduerme 18 hours ago

This is at best a superficial attempt to show that Anthropic objects to what is already in play.

Personally, I'd rather live in a country which didn't use AI to supplant either its intelligence or its war fighting apparatus, which is what is bound to happen once it's in the door. If enemies use AI for theirs, so much the better. Let them deal with the security holes it opens and the brain-drain it precipitates. I'm concerned about AI being abused for the two use cases he highlights, but I'm more concerned that the velocity at which it's being adopted to sift and collate classified information is way ahead of its ability to secure that information (forget about whether it makes good or bad decisions). It's almost inconceivable that the Pentagon would move so quickly to introduce a totally unknown entity with totally unknown security risks into the heart of our national security. That should be the case against rapid adoption made by any peddler of LLMs who claims to be honest, to thwart the idiots in the administration who think they want this technology they can't comprehend inside our most sensitive systems.

maxdo a day ago

Ukraine , Russia , China , actively develop ai systems that kill. Not developing such system by US based company will not change the course of actions.

alephnerd a day ago

Yep.

That said, it does impact whether Anthropic can sell to the British [0], German [1], Japanese [2], and Indian [3] government.

Other governments will demand similar terms to the US. Either Anthropic accedes to their terms and gets export controlled by the US or Anthropic somehow uses public pressure to push back against being turned into an American sovereign model.

Realistically, I see no offramp other than the DPA - a similar silent showdown happened in the critical minerals space 6-7 years ago.

[0] - https://www.anthropic.com/news/mou-uk-government

[1] - https://job-boards.greenhouse.io/anthropic/jobs/5115692008

[2] - https://www.anthropic.com/news/opening-our-tokyo-office

[3] - https://www.anthropic.com/news/bengaluru-office-partnerships...

epolanski 12 hours ago

Not gonna lie, regardless of what Anthropic does, it is quite scary we're heading full steam to mass surveillance and wars fought by semi-autonomous machines.

eternauta3k 11 hours ago

Mass surveillance is already here, and they can already use open models to do 80% of what they were planning to do with Claude.

haute_cuisine 12 hours ago

Can someone explain why Dario is making a public statement about this? It's also interesting that they use abstract we / they without putting exact names.

moffkalast 11 hours ago

It's free positive PR, why wouldn't he?

joseangel_sc 8 hours ago

good from them, but dario does not miss a beat to hype this tech, llms are perfect for mass surveillance and i want to the laws to change to prohibit this, but llms and full autonomous weapons have very little to share

giwook 18 hours ago

I commend Anthropic leadership for this decision.

I simultaneously worry that the current administration will do something nuclear and actually make good on their threat to nationalize the company and/or declare the company a supply chain risk (which contradict each other but hey).

halis an hour ago

Don't worry, Grok will break the picket line and come in as a scab. Elon would fuck his mother for a nickel.

dylan604 a day ago

"I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries."

That opening line is one hell of a set up. The current administration is doing everything it can to become autocratic thereby setting themselves up to be adversarial to Anthropic, which is pretty much the point of the rest of the blog. I guess I'm just surprised to have such a succinct opening instead just slop.

DaedalusII 21 hours ago

They made it easy to generate powerpoint presentations, that is the real reason DoW is using them

this is a very chauvinistic approach... why not another model replace anthropic here? I sense because gov people like using excel plugin and font has nice feel. a few more week of this and xAI is new gov AI tool

oxqbldpxo 21 hours ago

It may sound crazy, but they should just move the company to Europe or Canada, instead of putting up with this.

scottyah 21 hours ago

Why? They clearly are very aligned on the objective, just doing some negotiation regarding the means. Giving up just because you don't agree 100% is not very constructive. This might seem bad for conflict-adverse people who usually are involved in low-stakes negotiations, but it's just the start of things for people who are fluent in conflict.

mhjkl 20 hours ago

Because as we all know the EU would never try using AI for mass surveillance /s

pell 13 hours ago

So far, the EU's track record on privacy is definitely a lot better though. Not saying it'd always stay that way of course.

placebo 15 hours ago

Grok's thoughts on the matter:

"In an ideal world, I'd want xAI to emulate the maturity Anthropic showed here: affirm willingness to help defend democracies (including via classified/intel/defense tools), sacrifice short-term revenue if needed to block adversarial access, but stand firm on refusing to enable the most civilizationally corrosive misuses when the tech simply isn't ready or the societal cost is too high. Saying "no" to powerful customers—even the DoD—when the ask undermines core principles is hard, but it's the kind of spine that builds long-term trust and credibility."

It also acknowledged that this is not what is happening...

LightBug1 11 hours ago

Ergo, those running Grok don't ... have that kind of spine.

paraschopra 19 hours ago

I’m very happy that Anthropic chose not to cave into US Dept of War’s demands but their statement has an ambiguity.

Does this mean they’d be ok to have their models be used for mass surveillance & autonomous weapons against OTHER countries?

A clarification would help.

protocolture a day ago

Classic seppo diatribe.

"We will build tools to hurt other people but become all flustered when they are used locally"

joemi a day ago

If you're using "seppo" as the Australian pejorative referring to Americans, I'm not sure what makes this uniquely American.

exodust 19 hours ago

"Seppo" is rarely used in Australia today, it's an old bottom-of-barrel word most have never heard of. The neutral "Yank" is more common, but even that only pops up sometimes.

Guessing their comment attempts to expose hypocrisy of America's keenly supported overseas military activity in conflict with fiercely defended domestic free-speech and liberty principles. Deep down, most allies of America want America to defeat foreign adversaries and keep defending those liberties many of us share. In other words there's no hypocrisy, carry on!

wosined 14 hours ago

So they work with the military to do anything except: Mass domestic surveillance and Fully autonomous weapons. This means that they are wiling to do mass foreign surveillance, domestic surveillance of individuals, autonomous weapons which are commanded by operators. Got it. Such a great and moral company.

geophile 21 hours ago

I think it’s a pretty strong statement. It is unfortunately weakened by going along with the “Department of War” propaganda. I believe that the name is “Department of Defense” until Congress says otherwise, no matter what the Felon in Chief says.

phgn 14 hours ago

> I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.

Was this written by the state department?

How can you think that a “department of war” does anything remotely good? And only object to domestic AI surveillance?

michaelsshaw 13 hours ago

The entire article is very American-brained.

pell 13 hours ago

The emphasis of "domestic" surveillance is definitely concerning.

michaellee8 a day ago

Probably not a good idea to let Claude vibe-selecting targets, it still sometime hallucinates

jdthedisciple a day ago

Just visibly wave the US flag and you'll be fine, don't worry.

knfkgklglwjg a day ago

Soon it will select targets in commie countries though, perhaps it already does. Who selected to bomb Chavez mausoleum btw?

karmasimida 20 hours ago

Label them as supply chain risk and move on. Enough of this drama already

danavar 19 hours ago

I think they are negotiating until Friday, but I agree. I think this was foolish.

andy_ppp 12 hours ago

Fair play, I’ll move to Anthropic then… don’t love the UI but maybe I can code my own up.

brgsk 4 hours ago

Big W for anthropic

pgt 6 hours ago

The US govt & Hegseth are in a pickle, because if they blackball Anthropic, they will become more powerful than govt. could ever imagine, because it would be the greatest PR any frontier model could ever hope for.

It's a mistake for the Trump administration because there are only downsides to threatening Anthropic if they need them, and if they try to regulate AI in the West, China wins by default.

zmmmmm 21 hours ago

I can't help but highlight the problem that is created by the renaming of the Deptartment of Defense to the Department of War:

> importance of using AI to defend the United States

> Anthropic has therefore worked proactively to deploy our models to the Department of War

So you believe in helping to defend the United States, but you gave the models to the Department of War - explicitly, a government arm now named as inclusive of a actions of a pure offensive capability with no defensive element.

You don't have to argue that you are not supporting the defense of the US by declining to engage with the Department of War. That should be the end of the discussion here.

8note 19 hours ago

it hasnt actually been renamed though.

the name is still the department of defence by law. department of war is a subheading tagline

not_that_d 15 hours ago

What is with the amount of comments talking about other countries in Europe "Doing the same"?

shevy-java 7 hours ago

> I believe deeply in the existential importance of using AI to defend the United States and other democracies

I do not want to be "defended" by tools controlled by the US government, with or without Trump. But with Trump it is much more obvious now, so I'll pass.

Perhaps AI use will make open source development more important; many people don't want to be subjected to the US software industry anymore. They already control WAY too much - Google is now the biggest negative example here.

noupdates 21 hours ago

Why would the US security apparatus outsource the model to a private company? DARPA or whatever should be able to finance a frontier model and do whatever they want.

morgengold 11 hours ago

Hey Anthropic, come to europe. We ll find you a building.

statuslover9000 a day ago

The Sinophobic culture at Anthropic is worrying. Say what you will about authoritarianism, but China’s non-imperialist foreign policy means their economy is less reliant on a military-industrial complex.

All they have to do is continue to pump out exponentially more solar panels and the petrodollar will fall, possibly taking our reserve currency status with it. The U.S. seems more likely to start a hot war in the name of “democracy” as it fails to gracefully metabolize the end of its geopolitical dominance, and Dario’s rhetoric pushes us further in that direction.

cthalupa a day ago

Look. I think the Chinese AI companies are doing a lot of good. I'm glad they exist. I'm glad they're relatively advanced. I don't think the entire nation of China is a bunch of villains. I don't think the US, even before the current era, is a bunch of do-gooders.

But China has some of the most imperialist policies in the world. They are just as imperialist as Russia or America. Military contracts are still massive business.

I also believe the petrodollar will fall, but it isn't going to be because China built exponentially more solar panels.

nl 18 hours ago

I think a lot of the conflict about what imperialist policies means is different framing.

For better or worse, inside this the border in this map China has fairly imperialist policies. Outside it not so much: https://en.wikipedia.org/wiki/Map_of_National_Shame

That's different to the expansionist imperial policies of Spain in the 1500s or Britain in the 1700s. It also affects a very large proportion of the world's population. That Wikipedia page has some good links for further reading about this.

But it's an important point when considering China's place in the world.

cthalupa 17 hours ago

teyopi a day ago

> But China has some of the most imperialist policies in the world.

Citation needed?

US and allies have invaded or intervened in 20+ countries in last 20 years in the name of "western values" where values means $$$$ and hegemony.

Educate me please with a comparison of what China has done to be "some of the most imperialist policies"?

ninjagoo 17 hours ago

cthalupa 21 hours ago

sinuhe69 21 hours ago

chipgap98 a day ago

In what world does China have a non-imperialist foreign policy?

statuslover9000 a day ago

For example, China operates 1 foreign military base, in Djibouti. How many do you think the U.S. has in the South China Sea alone?

Beyond that, how many people has China killed in foreign military conflicts in the past 40 years? How many foreign governments have they overthrown?

Instead of all this, they’ve used their resources not only to become the world’s economic superpower but also to lift 800 million people out of poverty, accounting for 75% of the world’s reduction during the past 4 decades. The U.S. has added 10 million during that same time period.

8note 19 hours ago

hrn_frs a day ago

Historically speaking, he's right. China has never had an expansionist foreign policy.

mobilefriendly a day ago

dpedu 7 hours ago

sinuhe69 21 hours ago

MiSeRyDeee a day ago

In what world does China have a imperialist foreign policy?

cthalupa a day ago

soundworlds a day ago

100% agree. Any AI org that is that tied to a single nation's interest can only be detrimental in the long run.

I know "open-source" AI has its own risks, but with e.g. DeepSeek, people in all countries benefit. Americans benefit from it equally.

xeckr a day ago

I think the part about China is just about projecting alignment with the USG in hopes that this will result in Anthropic being treated more favourably by the current administration.

hackyhacky a day ago

> China’s non-imperialist foreign policy

Really? Is China non-imperialist regarding Taiwan and Tibet?

jmyeet a day ago

Taiwan is a matter of perspective. From the Chinese perspective, there was a civil war and the KMT lost. That's also the official position of the US, the EU and most countries in the world. It's called the One China policy. And China seems happy to maintain the status quo and leave the situation unresolved. Is it really imperialism to say that ultimately there will be reunification?

Even if you accept Tibet as imperialist, which is debatable, it was in 1950. You want to compare that to US imperialism, particularly since WW2 [1]? And I say "debatable" here because Tibet had a system that is charitably called "serfdom" where 90% of people couldn't own land but they did have some rights. However, they were the property of their lords and could be gifted or traded, you know, like property. There's another word for that: slavery.

It is 100% factually accurate to say that the People's Republica of China is not imperialist.

[1]: https://en.wikipedia.org/wiki/United_States_involvement_in_r...

8note 19 hours ago

the treatment of Tibet and Xinjiang are entirely Han imperialism and colonisation.

the one china policy is imperialism

nutjob2 a day ago

> China’s non-imperialist foreign policy

This is the China that is not only threatening to invade Taiwan but doing live fire exercises around the island and threatening and attempting to coerce Japan for suggesting saying it will go to its defense.

Your comment is ridiculous. It reads like satire.

cwillu a day ago

It wasn't that long ago that Taiwan claimed to be the legitimate government of China; given that China still maintains the reverse claim, it's not outrageous that it would consider an outside country's defense to be interference in an internal matter.

Whether or not that claim is legitimate, it is consistent with the concept of china having a non-imperialist foreign policy, and claims regarding that need to look elsewhere for supporting evidence.

8note 19 hours ago

nutjob2 20 hours ago

jmyeet a day ago

Your comment reads like propaganda.

You know who else considers Taiwan to be part of the People's Republic of China? The US, the EU and in fact most countries in the world. It's called the One China policy. There are I believe 12 countries that have diplomatic relations with Taiwan.

The position of the PRC is that Taiwan will ultimately be reunified. That doesn't necessarily mean by military force. It doesn't even necessarily mean soon. The PRC famously takes a very long term view.

And those islands you mention are in the South China Sea.

8note 19 hours ago

anduril22 a day ago

Powerful post - good on him for taking a stand, but questionable in light of their recent move away from safeguards for competitive reasons.

jatins 16 hours ago

What is OpenAI's stance on these issues? Are they working with DOW currently?

toephu2 3 hours ago

"Altman Says OpenAI Is Working on Pentagon Deal Amid Anthropic Standoff"

https://www.wsj.com/tech/ai/openais-sam-altman-calls-for-de-...

JacobiX 13 hours ago

>> We chose to forgo several hundred million dollars in revenue to cut off the use of Claude by firms linked to the Chinese Communist Party

You can’t choose to work with OFAC-designated entities.. there are very serious criminal penalties. Therefore, this statement is somewhat misleading in my opinion.

gerash 15 hours ago

I respect the Anthropic leadership for not being greedy like many others

sirshmooey a day ago

Party balloons along the southern border beware.

lvl155 a day ago

At this point, surveillance state is coming whether Dario does this or not. You can do all that with open source models. It’s sad that we don’t have the right people in charge in govt to address this alarming issue.

jonplackett a day ago

That is frikkin impressive. Well done sir.

lzbzktO1 15 hours ago

"These latter two threats are inherently contradictory"

After the standing up for democracy. This is my favorite part. "Your reasoning is deficient. Dismissed."

dzonga 21 hours ago

these guys are selling snake oil to the gvt - cz they know they can get cash based on fear.

the Chinese are releasing equivalent models for free or super cheap.

AI costs / energy costs keep going up for American A.I companies

while china benefits from lower costs

so yeah you've to spread F.U.D to survive

andxor 20 hours ago

The models are hardly equivalent.

alldayhaterdude a day ago

I imagine they'll drop this bare-minimum commitment when it becomes financially expedient.

newAccount2025 a day ago

Impressive and heartening. Bravo.

Reagan_Ridley a day ago

I restored my Max sub. I wish they pushed back more, so I went with $100/month only.

stopbulying a day ago

Didn't Cheney's company have the option to bid on contracts, by comparison?

stopbulying 11 hours ago

Cheney (Chevron, Halliburton, Kellogg Brown & Root (KBR)) did not have a qualified blind trust (QBT) while Vice President.

Cheney's office touched the presentation presented by Gen. Colin Powell which led Congress to believe that there was need to invade Iraq to save US from WMDs. Tours of duty were extended from 3 months to 24 months because "stop loss". Subsequently, the United States paid out trillions for debt-financed war and some $39 billion to Cheney's company KBR.

Today you learned that the oil company Cheney worked for (Chevron) was trying to bully Afghanistan into a pipeline deal in 1998 and also in 2001.

Cheney donated less than $10 million dollars of his Haliburton/KBR returns; mostly to a heart medicine program in his own name and retained a compensation package.

stopbulying 11 hours ago

What does Anthropic need to do to retain control over their for-peace company, though they took money from DoD/DoW?

SamDc73 21 hours ago

Didn't Dario Amodei ask for more government intervention regarding AI?

jobs_throwaway 20 hours ago

Not a contradiction with this post

angelgonzales 18 hours ago

Bottom line up front it’s probably better to address the root cause of this situation with the general solution — making government drastically smaller and less pervasive in people’s lives and businesses. I remember not too long ago during the last administration very heavy handed unforgivable and traumatizing rhetoric and executive orders that intruded into the bodily autonomy of millions of Americans and threatened millions of American’s jobs. This happened to me and I personally received threats that my livelihood would be taken away from me which were directly a result of the Executive branch. This isn’t a problem where Congress has ceded powers to the Executive branch, it’s a problem that so much power to legislate and tax is in the hands of the government at all! Every election cycle that results in a transfer of power to the other party inevitably results in handwringing and panic but this wouldn’t be the case if citizens voted their powers back and government wasn’t so consequential.

haritha-j 14 hours ago

Domestic mass ruveillance bad, mass urveilance on other nations good. Got it. Much like the military industrial complex, these organisations thrive during times of war, allows them to shirk off any actual morals using the us vs. them mentality.

mkoubaa 21 hours ago

>We will not knowingly provide a product that puts America’s warfighters and civilians at risk.

Implying other civilians can be put at risk

kumarvvr 21 hours ago

All this is for nought.

The power lies with the US Govt.

And its corrupt, immoral and unethical, run by power hungry assholes who are not being held accountable, headed by the asshole who does a million illegal things every day.

Ultimately, Anthropic will fold.

All this is to show to their investors that they tried everything they could.

mylifeandtimes 20 hours ago

It is not clear to me that the power here lies with the US Govt.

Imagine Anthropic is declared a "supply chain risk" thus cannot be used by all sorts of big industry players. How will the CEOs of those companies feel about the govnt telling them they cannot use what their engineers say is the best model? How many of those CEOs have a direct line to powermakers?

How many of those CEOs are already making the phone calls? The "supply chain" threat is a threat to every US company that currenly uses Anthropic.

Oh, and that includes Palentir, who is deeply embedding in the govt.

Side example: remember the 6 congresspeople who made the video about military orders? They won.

techblueberry 19 hours ago

Anthropic probably can’t fold, they might lose an existential number of researchers if they did. This is literally an unstoppable force meets an immovable object situation.

Hegseth probably folds. It would be too unpopular for him to take either of the actions he threatened.

2001zhaozhao 21 hours ago

Congratulations, you just got a new $200 Claude Max plan customer.

chrismsimpson 15 hours ago

The call is coming from inside the house

w10-1 12 hours ago

We are all assuming Anthropic can elect not to do a deal with the Pentagon, and put conditions on it.

But Hegseth and Trump are abusing federal powers at a rapid clip.

I'm guessing Anthropic would regret any deal with that administration, and could lose control of their technology.

(Stanford Research Institute originally limited their DoD exposure, and gained a lot of customers as a result.)

adamgoodapp 20 hours ago

It's ok to mass survey foreign entities.

gizmodo59 a day ago

They are playing a good PR game for sure. Their recent track record doesn’t show if they can be trusted. Few millions is nothing for their current revenue and saying they sacrificed is a big stretch here.

IG_Semmelweiss a day ago

Yes, but also remember where they came from.

They don't have any brand poison, unlike nearly everyone else competing with them. Some serious negative equity in tha group, be it GOOG, Grok , META, OpenAI, M$FT, deepseek, etc.

Claude was just being the little bot that could, and until now, flying under the radar

reasonableklout 17 hours ago

It's much more than a few million? Being declared a supply chain risk means that no company that wants to do business with the government can buy Anthropic. And no company that wants to do business with those businesses can buy Anthropic either. This rules out pretty much all American corporations as customers?

m101 a day ago

I wonder whether what is really behind this is that they can’t make a model without the safeguards because it would require re-training?

They get to look good by claiming it’s an ethical stance.

seydor 18 hours ago

Hegseth is an unintelligent bully who will not accept thiz and does not want to appear weak to the maga base. The consequences will be severe and anthropic will be forced

buellerbueller 8 hours ago

It isn't the Department of War; only Congress can change the name, and it hasn't.

impulser_ a day ago

The worst part of this is if they do remove Claude, and probably GPT, and Gemini soon after because of outcry we are going to be left with our military using fucking Grok as their model, a model that not even on par with open source Chinese models.

mattnewton a day ago

I think the warfighters are a distraction, a system could trivially say that there is a human in the loop for LLM-derived kill lists. My money is that the mass domestic surveillance is the true sticking point, because it’s exactly what you would use a LLM for today.

techblueberry a day ago

Apparently part of this whole battle is because Grok isn't up to part to be an acceptable alternative.

ternwer a day ago

As far as we can tell, OpenAI and Google seem to be ok with it and not resisting. It would be easier for Anthropic's cause if they did.

alangibson a day ago

Yea but every warfighter will get a waifu

popalchemist a day ago

It's better than actively aiding them. Make them struggle at every turn.

impulser_ a day ago

Are you Chinese? If not, I think you should prefer the people defending you to have the best tools to do so.

mikeyouse a day ago

int_19h 9 hours ago

GolfPopper a day ago

8note 19 hours ago

Jolter 17 hours ago

georgemcbay a day ago

popalchemist 18 hours ago

klooney 21 hours ago

Grok in unhinged mode piloting an Apache, what could go wrong.

FrustratedMonky 9 hours ago

This also helps build Anthropic hype.

There are military officials saying they need anthropic because it is so good. They can't live without it.

All of this really helps Anthropic.

Its good publicity for them. And gets the military on record saying they are so good they are indispensable. And they can still look like the good guys for resisting, because they were forced.

siliconc0w 20 hours ago

Good to them standing up to this administration. I doubt they actually want to put Claude in the kill-chain but this gives them a nice opportunity to go after 'woke AI' and maybe internal ammunition to go through the switching costs for xAI - given Elon more reason to line republican campaign coffers.

I'm guessing this is because Anthropic partners with Google Cloud which has the necessary controls for military workloads while xAI runs in hastily constructed datacenter mounted on trucks or whatever to skirt environmental laws.

alach11 a day ago

A significant part of Anthropic's cachet as an employer is the ethical stance they profess to take. This is no doubt a tough spot to be in, but it's hard to see Dario making any other decision here.

What I don't understand is why Hegseth pushed the issue to an ultimatum like this. They say they're not trying to use Claude for domestic mass surveillance or autonomous weapons. If so, what does the Department of War have to gain from this fight?

easton a day ago

It’s not unusual for legal departments to take offense to these sorts of things, because now everyone using Claude within the DoD has to do some kind of audit to figure out if they’re building something that could be construed as surveillance or autonomous weapons (or, what controls are in place to prevent your gun from firing when Claude says, etc). A lot of paperwork.

My guess is they just don’t want to bother. I wonder why they specifically need Claude when their other vendors are willing to sign their terms, unless it specifically needs to run in AWS or something for their “classified networks” requirement.

mwigdahl a day ago

It's that, as I understand it. Anthropic is the only vendor certified to run its models on DoD/DoW classified networks.

cmrdporcupine a day ago

Same reason they cut funding for universities that had DEI mandates, etc. and made a big spectacle of doing it despite it often being very little money etc. etc.

It's an ideological war, they're desperate to win it, and they're aiming to put a segment of US civil society into submission, and setting an example for everyone else.

He smelled weakness, and like any schoolyard bully personality, he couldn't help but turn it into a display of power.

SpicyLemonZest a day ago

He pushed the issue to an ultimatum because he is an unqualified drunk, and thinks that it's against the law for anyone to try and stop the US military from doing something they want to do. This isn't an isolated issue; he tried to get multiple US Senators prosecuted for making a PSA that servicemembers shouldn't follow illegal orders.

tabbott a day ago

What makes you want to believe the Trump Administration when it claims it doesn't want to do domestic mass surveillance?

10297-1287 a day ago

They want to be nationalized, which is the most profitable exit they'll ever get.

ethagnawl 20 hours ago

The official name of this organization remains _The United States Department of Defense_.

anonym29 a day ago

Anthropic has already cooperated too much with the US Intelligence Community, but better some restraint than none, and better late than never.

lynx97 11 hours ago

With all this talk about AI and autonomous weapon systems. It seems like one of John Carpenters first movies, and my favourite B-movie, is coming back strong!

Maybe I should call ChatGPT "Bomb"... I already use "make it so" for coding agents, so...

huslage a day ago

It is not the Department of War. He's towing the line from the get-go. Forget this guy.

DudeOpotomus 8 hours ago

It's never wrong to do the right thing.

Trump and his cronies are short timers. They will all be gone in a few years, many in prison, many in the ground.

Treat them with abandon and disdain, because they are the worst people in the history of the USA. Stand on your principles because they have none.

worik 13 hours ago

Is it so normal that the USA should be in such a state of constant war, and war readiness that this even makes sense?

t01100001ylor 9 hours ago

i am american and i do not like this.

coolca 21 hours ago

Imagine being so cautious with your words, only to have 'Department of War' in your title

verisimi 16 hours ago

It sounds to me like anthropic are basically 'all in' except for the caveats. Looking at the 2 examples they provide:

> We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values.

Why not do what the US are purported to do, where they spy on the others citizens and then hand over the data? Ie, adopt the legalistic view that "it's not domestic surveillance if the surveillance is done in another country", so just surveil from another data center.

> Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk.

Yes, well that doesn't sound like that strong an objection: fully automated defence could be good but the tech isn't good enough yet, in their opinion.

kittikitti 6 hours ago

I simply don't trust any of their moral posturing when they've never provided open-weight models and don't have any intention of doing so. Anthropic continuously makes hypocritical statements on safety and ethics. They made their bed with the U.S. government, and now they don't want to sleep in it.

IAmGraydon 19 hours ago

They should try Sam Altman. He's just the kind of guy who would bend over for this kind of authoritarian demand.

insane_dreamer 19 hours ago

Good to see one AI company not selling out their values in exchange for military contracts. This shouldn't be rare, but it is. Good for them.

mrcwinn 20 hours ago

I am incredibly proud to be a customer, both consumer level and as a business, of Anthropic and have canceled my OpenAI subscription and deleted ChatGPT.

bamboozled a day ago

Move your company out of the USA?

pousada a day ago

Department of War is just such a fucking joke title - when has the US stooped so low, I used to believe in you guys as the force of good on this planet smh

baggachipz a day ago

Well then I don't know where you've been for the last ~10~ ~20~ 70 years

mwigdahl a day ago

When? Its entire history from the foundation of the Republic to 1947. The name was changed after WWII; now a faction wants to change it back. The difference in name never changed the behavior, in either direction.

darvid a day ago

I'm 33 years old, would you mind telling me which year you thought this was, force of good stuff? might be before my time

genuinely curious, I got nothing

mylifeandtimes 20 hours ago

it was before your time.

In WWII, we saved the world from what is now seen as some really evil stuff. Not alone of course, Europe and Russia made huge sacrifices and that's where much of the war was fought. But US arms and blood were the decisive factor, Germany was winning, Japan was winning.

After WWII, the US decided to rebuild the world. We turned our enemies (Germany, Japan) into our close allies.

And the people who did it were really and seriously morally committed to doing what they thought was right. It was about building a country, working together. Not the insane politics of today.

Look, it wasn't all rose-tinted glasses. Bad stuff happened, and McCarthy was worse that what we currently have. And the civil rights movement and all of that. And the stupid wars, Korea, Vietnam, all the smaller police actions. Bad shit was done.

But on balance, the US was seen as the force of good, and the guaranteeor of world peace and the prosperity that allows.

phtrivier a day ago

The USA were pretty clearly on the "better side" of conflicts in 1941-1945, during the Cold War (at least as far as Europe and the Marshall plan was concerned). In Koweït and central Europe during the 90s. You may even argue for Afghanistan post 9-11 (although the state building was botched.) in the 2000s. ISIS is a footnote in history because of US intervention (from Trump first term, of all things.) And Ukraine would not be against getting the support it had in 2022 back under Trump.

Does not mean that very bad things were not happening at the same time.

But it's definitely easier to find some "supportable" interventions from the US than, say, Russia or China.

jwpapi 20 hours ago

Am i the only one who understands the deparments position? Like if another country will have it without safeguards, why would I not want it without safeguards. I can still be the safeguard, but having safeguards enforced by another entity that potentially has to face negative financial consequences seems like a disadvantage, would be weird to accept that as department of war.

I understand the risk, but that is the pill.

8note 19 hours ago

they could use a different provider for the kill chain.

we must use claude to decide whether to nuke iran, or else our gun manufacturers arent allowed to use to to run spreadsheets

is a bit ridiculous.

dev1ycan 11 hours ago

This doesn't read too badly, but I still do not believe that ANY AI company is ethical, at all.

ponorin 9 hours ago

As a non-American they've lost me already at the first sentence.

United States, even before Trump, has always been about projecting power rather than spreading democracy. There are several non-Western, former colonies who does democracy better than the US. Despite democratic backsliding being a worldwide phenomenon very few have slid back as much as the US. The US have regularly supported or even created terrorists and authoritarian regimes if it meant that the country wouldn't "go woke." The ones that grew democracy, grew in spite of it.

This statement shows just how much they align with the DoD ("DoW" is a secondary name that the orange head insists it's the correct one. Using that terminology alone speaks volumes.) rather than misalign. This coupled with their drop of their safety pledge a few days ago makes it clear they are fundamentally and institutionally against safe AI development/deployment. A minute desagreement on the ways AI can destroy humanity isn't even remotely sufficient if you're happy to work with the bullies of the world in the first place.

And the reason is even more ridiculous. Mass surveillance is bad... because it's directed at us rather than the others? That's a thick irony if I'd ever seen one. You know (or should have known) foreign intelligence has even less safeguards than domestic surveillance. Intelligence agencies transfer intercepted communications data to each other to "lawfully" get around those domestic surveillance restrictions. If this looks at all like standing up that's because the bar has plunged into the abyss, which frankly speaking is kind of a virtue in USA.

EddieLomax 8 hours ago

Fuck yes. OpenAI, take notes.

toephu2 3 hours ago

They just jumped in line to take Anthropic's spot.

https://www.wsj.com/tech/ai/openais-sam-altman-calls-for-de-...

ThouYS 13 hours ago

this is.. a nothing burger? they don't exclude working for autonomous weapons, nor do they exclude mass surveillance. so what gives?

nova22033 20 hours ago

Why does DoD need claude? I thought xAI was "less woke" and far better than claude

marshmellman 19 hours ago

Well, now if DoD moves to another AI provider, we’ll know what was compromised.

Aeroi 18 hours ago

in hindsight, the smart thing to do would have been to accept the contracts, knowingly enshittify the request, and protect other bad actors like Elon and xAI from ruthlessly compromising our democracies.

techpression 18 hours ago

”Defense of democracy” is just another version of ”think of the children”.

https://en.wikipedia.org/wiki/Think_of_the_children

int32_64 a day ago

Anthropic wants regulatory capture to advantage itself as it hypes its products capabilities and then acts surprised when the Pentagon takes their grand claims about their products seriously as it threatens government intervention.

This is why people should support open models.

When the AI bubble collapses these EA cultists will be seen as some of the biggest charlatans of all time.

narrator 17 hours ago

I mean you're all going to get killed by fully autonomous China AI war robots in 10 years anyway if you're not pure blood Han Chinese, but hey at least you'll provide something to laugh at for future Chinese Communist party history scholars. They will say, "Look at the stupid Baizuos, our propaganda ops convinced them all to commit collective suicide. Stupid barbarians. They proved they are an inferior race."

Not joking, I've heard from sources that hardliners in the CCP think they can exterminate all white people followed later by all non-Han, but just keep on going along disarming yourselves for woke points. This is like unilaterally destroying all your nuclear weapons in 1946 and hoping the Soviets do to.

parhamn a day ago

Now, I'm curious. How Bedrock/Azure Claude models work?

Do these rules apply to them too?

gnarlouse 17 hours ago

huge if true.

they also took down their security pledge in the same breath, so, you know. if anthropic ends up cutting a deal with the DoD this is obviously bullshit.

jijji 20 hours ago

the government should not be using any private LLM, they should build their own internal systems using publicly available LLM's, which change frequently anyway. I don't see why they would put their trust in a third party like that. This back and forth about "ethics" is a bunch of nonsense, and can be solved simply by going for a custom solution which would probably be orders of magnitude cheaper in the long run. The most expensive part is the GPU's used for inference, which can be produced in silicon [1].

[1] https://taalas.com/products/

shawmakesmagic 20 hours ago

My man

moktonar 16 hours ago

Well fucking done. Anthropic has just gained the “has bollocks” status. Also now we know what the govt is really up to with AI. G fucking g

7ero 14 hours ago

Sound like they're following the google playbook, don't be evil, until the shareholders tell you to.

7ero 14 hours ago

Sounds like following the google playbook, don't be evil, until the shareholders tell you to.

OrvalWintermute a day ago

I don't think this is genuine concern, I think this is instead, veiled fear of the TDS posse being covered by feigned concern.

Foreign nationals are now embedded in the US due to decades of lax security by both parties. Domestic surveillance is now foreign surveillance also!

jibal a day ago

It's the Department of Defense, not the Department of War ... only Congress has the legal authority to change the name, and they haven't.

knfkgklglwjg a day ago

Same with Gulf of America.

brooke2k a day ago

The constant reference to "democracy" as the thing that makes us good and them bad is so frustrating to me because we are _barely_ a democracy.

We are ruled by a two-party state. Nobody else has any power or any chance at power. How is that really much better than a one-party state?

Actually, these two parties are so fundamentally ANTI-democracy that they are currently having a very public battle of "who can gerrymander the most" across multiple states.

Our "elections" are barely more useful than the "elections" in one-party states like North Korea and China. We have an entire, completely legal industry based around corporate interests telling politicians what to do (it's called "lobbying"). Our campaign finance laws allow corporations to donate infinite amounts of money to politician's campaigns through SuperPACs. People are given two choices to vote for, and those choices are based on who licks corporation boots the best, and who follows the party line the best. Because we're definitely a Democracy.

There are no laws against bribing supreme court justices, and in fact there is compelling evidence that multiple supreme court justices have regularly taken bribes - and nothing is done about this. And yet we're a good, democratic country, right? And other countries are evil and corrupt.

The current president is stretching executive power as far as it possibly can go. He has a secret police of thugs abducting people around the country. Many of them - completely innocent people - have been sent to a brutal concentration camp in El Salvador. But I suppose a gay hairdresser with a green card deserves that, right? Because we're a democracy, not like those other evil countries.

He's also threatining to invade Greenland, and has already kidnapped the president of Venezuela - but that's ok, because we're Good. Other countries who invade people are Bad though.

And now that same president is trying to nationalize elections, clearly to make them even less fair than they already are, and nobody's stopping him. How is that democratic exactly?

Sorry for the long rant, but it just majorly pisses me off when I read something like this that constantly refers to the US as a good democracy and other countries as evil autocracies.

We are not that much better than them. We suck. It's bad for us to use mass surveillance on their citizens, just like it's bad to use mass surveillance on our citizens.

And yet we will do it anyways, just like China will do it anyways, because we are ultimately not that different.

isamuel 19 hours ago

Amodei’s use of “warfighters” (a Hegseth-era neologism for “soldiers”) is truly nauseating.

WatchDog 19 hours ago

Soldier is an Army specific term. Like Sailor, Airman, Marine, etc.

Perhaps the term you are looking for is service member?

Warfighter tends to refer to anyone involved in a role that directly supports combat operations, it may or may not be a service member.

ulfw 9 hours ago

Department of War.

What a shit name

lenerdenator 9 hours ago

Nitpick: It's still the Department of Defense, not the Department of War. Don't let the chuds live in their delusional fantasy world.

mrcwinn 20 hours ago

Keep in mind: the government is very invested logistically in Anthropic.

So no matter what xAI or OpenAI say - if and when they replace that spend - know that they are lying. They would have caved to the DoW’s demands for mass surveillance.

Because if there were some kind of concession, it would have been simplest just to work with Anthropic.

Delete ChatGPT and Grok.

sneak 10 hours ago

The only reason you ask for these capabilities is because you want to use these capabilities.

That is, the news here is that DoW (formerly DoD) is willing and able and interested in using SOTA AI to enable processing of domestic mass surveillance data and autonomous weapons. Anthropic’s protests aside, you can’t fight city hall, they have a heart attack gun and Anthropic does not. They’ll get what they want.

I am not particularly AI alarmist, but these are facts staring us right in the face.

We are so fucked.

delaminator 13 hours ago

Hegseth doesn't need autonomous drones, he's got the Treasury.

keeeba a day ago

Big respect

Total humiliation for Hegseth, sure there will be a backlash

techblueberry a day ago

I thought it was interesting he threw in the bit about the supply chain risk and Defense Production Act being inherently contradictory. Most of the letter felt objective and cooperative, but that bit jumped off the page as more forceful rejection of Hegseth's attempt to bully them. Couldn't have been accidental.

calgoo a day ago

I see it as the opposite, its a lousy excuse of a message trying to get people not to think that they are giving in. Instead they list the horrible uses that they are already helping the government with. Dont worry, we only help murder people in other countries not the US. They also keep calling it the "Department of War" which means that this message is not for "us", its them begging publicly to Hegseth.

adi_kurian 21 hours ago

What would the ideal response have been, in your view?

calgoo 13 hours ago

jpcompartir 13 hours ago

"Regardless, these threats do not change our position: we cannot in good conscience accede to their request."

calgoo 13 hours ago

delaminator a day ago

"so we'll do it and feel guilty about it"

bawis 9 hours ago

That has been the war politics of the western in the last century or so, nothing new.

hsuduebc2 a day ago

We are the victims bro

jajuuka 5 hours ago

While it's good that they didn't fold, they didn't need to lick the boot that hard. So much spent on "we love the US and democracy and hate communism and the Chinese." They are trying really hard to keep this contract as is, which I think says more than folding to these additional demands.

alephnerd a day ago

One piece of context that everyone should keep in mind with the recent Anthropic showdown - Anthropic is trying to land British [0], Indian [1], Japanese [2], and German [3] public sector contracts.

Working with the DoD/DoW on offensive usecases would put these contracts at risk, because Anthropic most likely isn't training independent models on a nation-to-nation basis and thus would be shut out of public and even private procurement outside the US because exporting the model for offensive usecases would be export controlled but governments would demand being parity in treatment or retaliate.

This is also why countries like China, Japan, France, UAE, KSA, India, etc are training their own sovereign foundation models with government funding and backing, allowing them to use them on their terms because it was their governments that build it or funded it.

Imagine if the EU demanded sovereign cloud access from AWS right at the beginning in 2008-09. This is what most governments are now doing with foundation models because most policymakers along with a number of us in the private sector are viewing foundation models from the same lens as hyperscalers.

Frankly, I don't see any offramp other than the DPA even just to make an example out of Anthropic for the rest of the industry.

[0] - https://www.anthropic.com/news/mou-uk-government

[1] - https://www.anthropic.com/news/bengaluru-office-partnerships...

[2] - https://www.anthropic.com/news/opening-our-tokyo-office

[3] - https://job-boards.greenhouse.io/anthropic/jobs/5115692008

arduanika 17 hours ago

I tried several times to read your second paragraph, and failed to parse it. Could you break it into several sentences somehow? It's possible you're making an important point, but I can't tell what you're trying to say.

Bengalilol 14 hours ago

TLDR: « depends on where you live »

jiggawatts a day ago

Brigadier General S. L. A. Marshall’s 1947 book Men Against Fire: The Problem of Battle Command stated that only about 10-15% of men would actually take the opportunity to fire directly at exposed enemies. The rest would typically fire in the air to merely scare off the men on the opposing force.

I personally think this is one of the most positive of human traits: we’re almost pathologically unwilling to murder others even on a battlefield with our own lives at stake!

This compulsion to avoid killing others can be trivially trained out of any AI system to make sure that they take 100% of every potential shot, massacre all available targets, and generally act like Murderbots from some Black Mirror episode.

Anyone who participates in any such research is doing work that can only be categorised as the greatest possible evil, tantamount to purposefully designing a T800 Terminator after having watched the movies.

If anyone here on HN reading this happens to be working at one of the big AI shops and you’re even tangentially involved in any such military AI project — even just cabling the servers or whatever — I figuratively spit in your eye in disgust. You deserve far, far worse.

ninjagoo 17 hours ago

> Brigadier General S. L. A. Marshall’s 1947 book Men Against Fire: The Problem of Battle Command stated that only about 10-15% of men would actually take the opportunity to fire directly at exposed enemies. The rest would typically fire in the air to merely scare off the men on the opposing force.

Having been identified back then, this issue has been systematically stamped out in modern militaries through training methods. Cue high levels of PTSD in modern frontline troops after they absorb what they actually did.

jiggawatts 14 hours ago

I would love to see a reference for that!

AFAIK the rounds shot to kills ratio is still north of ten thousand in most modern conflicts.

I’ve heard anecdotally that drone operators in Ukraine have a ratio of about ten drones per kill and rack up multiple kills per day every day. Supposedly the pilots “burn out” due to the psychological impacts.

calgoo 13 hours ago

ninjagoo 11 hours ago

tehjoker 20 hours ago

The framing of this is that the United States conducts legitimate operations overseas, but that is extremely far from the truth. It treats China as a foreign adversary, which is nearly purely the framing from the U.S. side as an aggressor.

AI should never be used in military contexts. It is an extremely dangerous development.

Look at how US ally Israel used non-LLM AI technology "The Gospel" and "Lavender" to justify the murder of huge numbers of civilians in their genocide of Palestinians.

8note 19 hours ago

ukraine is using ai in a military context with some effectiveness. i dont think theres much of a problem with having the drone take over the last couple minutes of blowing up a russian factory

myko 20 hours ago

There is no Department of War. This is the dumbest fucking timeline.

myko 8 hours ago

To be clear, despite the downvotes, my statement is true. It is the Department of Defense. As someone who spent a good portion of my life working under it, it is offensive to me people are going along with the pretense that these idiots can unilaterally rename the organization.

einpoklum 9 hours ago

The first sentence was quite enough:

> I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.

Ah, another head of a huge corporation swears to defend his stockholders' commercial interests through imperial war against other nation-states. And of course "we" are democratic while "they" are autocratic.

The main thing that's disappointing is how some people here see him or his company as "well-intentioned".

creatonez 17 hours ago

> Our strong preference is to continue to serve the Department and our warfighters—with our two requested safeguards in place.

It's absolutely disgusting that they would even consider working with the US government after the Gaza genocide started. These are modern day holocaust tabulation machine companies, and this time randomly they are selecting victims using a highly unpredictable black-box algorithm. The proper recourse here is to impeach the current administration, dissolve the companies that were complicit, and send their leadership to the hague for war crimes trials.

mvkel a day ago

"as an ai safety company, we only believe in -partially- autonomous weaponry"

Ads are coming.

ddxv a day ago

I'll be glad if they could open their platform enough so that it could run on ads and not 200 dollar subscriptions

mvkel 19 hours ago

for sure. If they weren't so self-righteous about not serving ads, it'd be a great revenue stream for them. It'd also align with Dario's seeming obsession with profitability

OutOfHere a day ago

The Pentagon should be using open models, not closed ones by OpenAI/Anthropic/xAI. The entire discussion of what Anthropic wants is therefore moot.

knfkgklglwjg a day ago

The best open models are from china though.

OutOfHere 20 hours ago

It's a good reason to fund open model development domestically.

dakolli 20 hours ago

This is a PR play by Anthropic, likely in coordination with the administration. They don't care, they just need the public to view them as a victim here, and then its business as usual.

I prefer they get shutdown, llms are the worst thing to happen to society since the nuclear bomb's invention. People all around me are losing their ability to think, write and plan at an extraordinary pace. Keep frying your brains with the most useless tool alive.

Remember, the person that showed their work on their math test in detail is doing 10x better than the guys who only knew how to use the calculator. Now imagine being the guy who thinks you don't need to know the math or how to use a calculator lol.

joshAg 21 hours ago

torment nexus creators are shocked, appalled even, to discover that people desire to use it to torment others at nearby nexus

probably_wrong a day ago

I have read the whole thing but I nonetheless want to focus on the second paragraph:

> Anthropic has therefore worked proactively to deploy our models to the Department of War

This should be a "have you noticed that the caps on our hats have skulls on it?" moment [1]. Even if one argues that the sentence should not be read literally (that is, that it's not literal war we're talking about), the only reason for calling it "Department of War" and "warfighters" instead of "Department of Defense" and "soldiers" is to gain Trump's favor, a man who dodged the draft, called soldiers "losers", and has been threatening to invade an ally for quite some time.

There is no such a thing as a half-deal with the devil. If Anthropic wants to make money out of AI misclassifying civilians as military targets (or, as it has happened, by identifying which one residential building should be collapsed on top of a single military target, civilians be damned) good for them, but to argue that this is only okay as long as said civilians are brown is not the moral stance they think it is.

Disclaimer: I'm not a US citizen.

[1] https://m.youtube.com/watch?v=ToKcmnrE5oY

ricardobeat a day ago

What is their other possible move here, considering the government is threatening to destroy their business entirely?

probably_wrong a day ago

One alternative would be to call the government's bluff: if they truly are as indispensable as they claim then they can leverage that advantage into a deal.

But at a more general level, I'd say that unethical actions do not suddenly become ethical when one's business is at risk. If Anthropic considers that using their technology for X is unethical and then decide that their money and power is worth more than the lives of the foreigners that will be affected by doing X then good for them, but they shouldn't then make a grandstand about how hard they fought to ensure that only foreigners get their necks under the boots.

ninjagoo 18 hours ago

> What is their other possible move here, considering the government is threatening to destroy their business entirely?

You must not be American, then. We all know that these corporate favoring contract terms are managed through campaign contributions; savvy?

Anthropic must have high school interns as govt liaisons, and not very bright ones

XorNot a day ago

Warfighters is a pretty common term though. There's a fair bit of nuance in when and how you'd use it.

cwillu a day ago

It's a common term that comes with a lot of criticism in the vein of noticing the skulls.

0xbadcafebee 19 hours ago

Principles are the things you would never do for any amount of money. This might be the only principled tech company in the world.

eigencoder 6 hours ago

Honestly, I don't get it. So many tech companies are happy to do business in China and serve its interests, when it would gladly see them fail. But they won't defend their own country and its interests.

raincole 6 hours ago

Businessmen are not well known for their loyalty.

I_am_tiberius 13 hours ago

I'm still waiting for a proof that they don't use user data (directly or derived) for training.

ozzymuppet 20 hours ago

Wow, I expected them to cave, and they did'nt!

I'll be signing up to Claude again, Gemini getting kind of crap recently anyway.

DiabloD3 15 hours ago

This seems to be at least partially written by AI: There is no Department of War, it is called the Department of Defense.

zzot 15 hours ago

That’s not true anymore. Trump renamed it in September: https://www.war.gov/News/News-Stories/Article/Article/429582...

calgoo 13 hours ago

Just like the Gulf of Mexico is still called the Gulf of Mexico, if we just ignore his ramblings and continue calling the department of defense, we undermine his whole point. If we fall for all their crap and just accept it, then we loose in the end. Any resistance to a Fascist government is good resistance. Anything that makes their life's a little shittier is good. Better that they go around having tantrums about how they renamed it but no one is paying attention.

nla 9 hours ago

ssrshh 7 hours ago

This is quite the PR stunt. Tech companies can't stop copying Apple

willmorrison a day ago

They essentially said "we're not fans of mass surveilance of US citizens and we won't use CURRENT models to kill people autonomously" and people are saying they're taking a stand and doing the right thing? What???

I guess they're evil. Tragic.

fluidcruft a day ago

It's not inconceivable that AI could become better than humans at targeting things. For example if it can reliably identify enemy warcraft or drones faster than people can react. I'm not saying Claude's models are suited for that but humans aren't perfect and in theory AI can be better than humans. It's not currently true and would need to be proved, but it doesn't seem unreasonable. It could well be better than something like deploying mines.

shevy-java 7 hours ago

Indeed. The AI will decide who has to die and who may live.

Skynet in Terminator was scary. The AI Skynet is even scarier - and sucks, too.

micromacrofoot a day ago

We're living in a time where most tech companies are donating millions of dollars to the current leadership in exchange for favors.

In that climate this is a more of a stand than what everyone else is doing.

zkmon 10 hours ago

Same as saying "Look I sold nukes to USA to protect democracy, but we put 2 rules about usage". Everyone got nukes and nobody can enforce the rules. Just whitewashing of pure business greed, using terms like national security, democracy etc.

toddmorrow 7 hours ago

his dilemma wasn't moral. he has none. it was a marketing snafu. he marketed anthropic as different when the cost of claiming that was zero. now there's a cost, and he immediately changes his tune. his statement was essentially "why refrain from building killing machines when no one else is refraining? why limit ourselves unilaterally?" duley proves he never had morals in the first place.

MWParkerson 6 hours ago

Nobody said anything like that in the linked post not sure what you're on about

nla 9 hours ago

I truly do not understand why anyone thinks serious work can be done with their models, let alone government work. Their models do no hold a candle to Open AI.

caerwy 6 hours ago

His real beef seems to be with “any lawful use”. He doesn't agree with the law and wants to only sell to customers who agree with his own moral code. I respect his moral choice but suspect this is not how a market economy ought to work. He ought to lobby government to change the law rather than make moral judgements about his customers.

fooker 6 hours ago

When you make the market, you too can dictate how a 'market economy' ought to work :)

muddi900 6 hours ago

Free Speech rights mean not being compelled to act against your moral code.