The Rational Conclusion of Doomerism Is Violence (campbellramble.ai)

69 points by thedudeabides5 3 hours ago

MostlyStable 2 hours ago

It is completely coherent to think both that an extremely bad thing is coming and that it does not justify any particular action. "The ends don't justify the means" is a principle that entire religions have been built on. It is not irrational or incoherent to believe that even something as serious as extinction does not justify arbitrary action.

Someone _may_ decide that it does, but it is not a necessary conclusion.

And that is completely aside from the many many (in my opinion convincing) arguments that such acts of violence would not be effective anyways.

This article is a much better (and much longer) extension of the argument, and a direct refutation of the OP article:

https://thezvi.substack.com/p/political-violence-is-never-ac...

hn_throwaway_99 2 hours ago

The older I get, the more I get the sneaking suspicion that statements like "the ends don't justify the means" and "violence is always the wrong answer" are, at best, wildly logically inconsistent in any society at any time, and at worst, designed to ensure only a very few people in power can commit violence.

An ongoing conflict has resulted in the violent deaths of literally many thousands of children. The people who enable those deaths are usually safely ensconced thousands of miles away, often living in cushy suburbs.

To emphasize as strongly as I possibly can, I am not advocating for more violence. Quite the contrary, I'm advocating for less. I just don't understand why we have all these adages to convince people that "violence is always wrong", while I'm sure at least some of the people who say that are actively engaged in building machines designed to kill people.

Related, the Substack link you posted is titled "Political Violence is Never The Answer". But our country (and a lot of others) was literally founded on political violence. How do people square those 2 ideas?

Aurornis 43 minutes ago

> The older I get, the more I get the sneaking suspicion that statements like "the ends don't justify the means" and "violence is always the wrong answer" are, at best, wildly logically inconsistent in any society at any time, and at worst, designed to ensure only a very few people in power can commit violence.

My experience has been the polar opposite: The older I get, the more I've seen people come to completely incorrect conclusions that justify their decisions to harm others. This ranges from petty things like spreading gossip, to committing theft from people they don't like ("they had it coming!") to actual physical violence.

In every case, zoom out a little bit and it becomes obvious how their little self-created bubble distorted their reality until they believed that doing something wrong was actually the right and justified move.

I think you're reaching too far to try to disprove the statement in a general context. Few people are going to say "violence is always the wrong answer" in response to someone defending themselves against another person trying to murder them, for example. I think these edge cases get too much emphasis in the context of the article, though. They're used as a wedge to open up the possibility that violence can be justified sometimes, which turns into a wordplay game to stretch the situation to justify violence.

solaarphunk an hour ago

This is just a version of individualism vs the state. Much of western society has become increasingly confused about what violence is acceptable, let alone who should be allowed to commit violence, or have a monopoly on violence.

If we can't agree on that baseline, then it's quite obvious that we'll continue to see an escalation in the types of violence that we've seen in the past few years against the political and corporate classes in the US, with little end in sight.

nradov an hour ago

During WWII, the entire Allied leadership was willing to kill millions of Axis children if that's what it took to win the war and force the enemy to surrender unconditionally. There was at least some genocidal intent. Population centers were intentionally bombed to wipe out civilian factory workers. We can argue about whether that was right or wrong but the reality is that it's probably inevitable once armed conflicts involving nation states escalate to an existential level.

“Before we’re through with them, the Japanese language will be spoken only in hell.”

-- Admiral William F. "Bull" Halsey Jr., 1941

sublinear 27 minutes ago

> How do people square those 2 ideas?

If you're seriously trying to understand the nuance of the act itself, you should consider reading what is standard issue for law enforcement and the military.

"On Killing" by Dave Grossman is a classic.

If you only want to understand and stay in the realm of politics, I don't think you'll ever find a good answer either way. There's hypocrisy in every argument for or against violence. None of that is on the minds of people "in the shit" at that time. All that stuff comes later. As you're well aware, PTSD is no joke.

What I would take away from this is to recognize all the other ways in which we are compelled to act against our own self-interest under what are sold as higher moral purposes.

From that perspective, it's not that hard to see how people can treat violence as just another tool. Whether it works is a question of how much those people value life above all else. If you're surprised that's not always the case in every culture, you may want to study that first.

slopinthebag 2 hours ago

Even more simply put, if political violence is never the answer and the institution of government is the biggest single source of political violence, what does that say about the legitimacy of the institution of government?

These trite quips act as a way to ensure only the elite ruling class has a justification for the violence they inflict.

janalsncm 2 hours ago

Your reasoning makes sense under a regime of infinite games. In other words, the goal is to continue playing the game rather than win once.

These people do not believe we are in an infinite game. They believe they have a narrow set of moves to avoid checkmate, and apparently getting rid of Sam Altman is one of them.

I will suggest another reason though: we are likely already in the light cone of continued AI development. So none of the vigilante actions are justified under their own logic. It’s probably preferable to avoid being in jail when the robot apocalypse comes.

I don’t think the death of Sam Altman or even the dissolution of OpenAI would stop the continuation of AI development. There are too many actors involved, and too many companies and nation states invested in continuing AI development. Even if Eliezer Yudkowsky became president of the United States, he could not stop it.

matthewdgreen 34 minutes ago

Eliezer Yudkowsky has gone so far as to say that it might be ok to kill most of humanity (excepting a "viable reproduction population") to stop AI. If that's not just talk, then this line of reasoning only gives you a few possible modes of action. I would not be worried about the people with Molotov cocktails, but I'd be very worried about bioterrorism.

atmavatar 2 hours ago

> "The ends don't justify the means" and literal entire religions have been built on this concept.

Most religions rely on a supernatural force judging us post-mortem to balance out the rights and wrongs done during life.

The problem with this, of course, is that there's zero evidence this force exists, and relying on this force to right the wrongs in life only serves to prevent the masses from attempting to correct the wrongs themselves either directly via vigilantism or, more importantly, by replacing existing systems with ones which will serve them better.

I'm all for fixing things first via the soap box and ballot box, but sometimes the ammo box is the only resort left.

    The tree of liberty must be refreshed from time to time with the blood of patriots and tyrants.
    - Thomas Jefferson
I don't believe we're at that point in the US, but I could certainly understand someone making that claim for a country like Iran.

janalsncm an hour ago

> The tree of liberty must be refreshed from time to time with the blood of patriots and tyrants.

When the British cavalry came to Virginia in 1781, Thomas Jefferson famously fled the governor’s mansion.

morningsam 2 hours ago

Yudkowsky himself also posted a rebuttal today: https://x.com/ESYudkowsky/article/2043601524815716866

Aurornis 28 minutes ago

Anyone willing to read that wall of text should also read Yudkowsky's original piece on the topic: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-no...

The inflammatory conclusion of his 2023 writing was that we need to "shut it all down", escalating to bombing datacenters:

> be willing to destroy a rogue datacenter by airstrike.

Now that someone who was an open follower of his words tried to bomb Sam Altman's house and threatened to burn down their datacenters, Yudkowsky is scrambling to backtrack. The X rant tries to argue that "bombing" and "airstrike" are different and therefore you can't say he advocated for bombing anything (a distinction any rationalist would normally pounce on for its logical inconsistency, if it wasn't coming from a famous rationalist figure). He's also trying to blame his hurried writings for TIME for not being clear enough that he was only advocating for state-sponsored airstrikes, not civilian airstrikes, bombs, or attacks. Again that distinction seems like grasping at straws now that he's face to face with the realities of his extremist rhetoric.

handoflixue an hour ago

I found the last paragraph a fairly great summary of a rather long post:

> How certain do you have to be that your child has terminal cancer, before you start killing puppies? 10% sure? 50% sure? 99.9%? The answer is that it doesn't matter how certain you are, killing puppies doesn't cure cancer.

guelo an hour ago

Jeebuz that was long, I only made it through about half of it. But I think he's calling for Cold War-style international cooperation, along the lines of the nuclear treaties. But I believe those mechanisms are broken and unavailable to us for two main reasons:

1. The Western world and especially the US is in the process of destroying the UN and other institutions of international law in order to protect Israel, for reasons that I have tried and failed to understand because the propaganda around it is so dense.

2. The Supreme Court made bribery of politicians legal so now we have AI investors with actual governmental power. All restraint efforts will be blocked by the federal government at minimum for these next 3 crucial years.

xrd 2 hours ago

That was really fascinating. Thanks.

doctorpangloss 2 hours ago

I find all of this stuff very interesting, but nonetheless these two voices sound like they could never win an election and don't aspire to. That is the ultimate test of the worthlessness of a policy: it's all equally worthless until it wins an election, and that's what makes it reality.

AI Doomerism and Accelerationism are both playful fantasies. It doesn't really matter what measurements or probabilities or observations they make, because the substantive part is the policies they advocate for, and policies are meaningless - all equally worthless - until elected.

What am I saying? The best rebuttal is, get elected.

Joker_vD 2 hours ago

> "The ends don't justify the means"

Eh. The ends do justify the means, but only inasmuch as those means actually help achieve the ends — astonishingly often they don't (and, more rarely but still often, they actually take you in the opposite direction from those end goals), and so they remain unjustified.

MostlyStable 2 hours ago

I personally believe quite strongly that some things are just immoral on their face, and that I would rather fail/die without using them than succeed/live while using them. I agree that in very many cases where people do these things they are, in the long run, counterproductive, but I also believe that even if it could be conclusively proven that this wasn't the case, I would still advocate against their use.

f1shy 2 hours ago

Thanks.

That sentence is constantly repeated, as if it were some kind of absolute truth. The fact is, for every end there will probably be some means that are totally justified, and some that are not.

I think the original context is: no matter how high, pure, and perfect the end is, that does not mean any means is justified.

BurningFrog an hour ago

I agree, but it's only half of the equation.

Your solution also can't be worse than the problem it solves!

Overly clear example: Killing your noisy neighbors actually achieves the end of a quiet home. But that really doesn't justify it.

nitwit005 2 hours ago

Mentally ill people often have a justification for their actions which is vaguely rational, but you'll notice that the vast majority of people don't act the way they do.

These people just get attracted to political causes somehow. Even the women's suffrage movement had some people setting buildings on fire.

stickfigure 2 hours ago

I miss the days when people blamed all their woes on their parents circumcising them. Simpler times.

geremiiah 14 minutes ago

LLMs are dangerous in other ways (LLM psychosis and false confidence have probably already caused deaths through negligence). However, I don't think we are close to a Terminator scenario.

At the same time, if we ever do create an AGI, and eventually an ASI, I think it would only be a matter of time before the machines take over entirely, and they would probably be the ones which will continue the legacy of our species. Is that bad? Idk.

drivebyhooting 2 hours ago

Can LLMs design and build a chip foundry to manufacture semiconductors? No?

Can LLMs design and build the reactors to enrich uranium, breed plutonium, and construct nuclear weapons? No?

Can LLMs design and manufacture Shahed drones? No?

There are already super intelligences at large with “scary capability”. And yet the world hasn’t ended.

kurthr 3 minutes ago

Can LLMs convince a human who has power over each and every one of those things to use them for an (unstated) prompt's goal?

Yeah, probably over 50% of the population already, and if not many of the rest soon.

georgemcbay an hour ago

> And yet the world hasn’t ended.

...yet

But we only need things to spiral out of control one time for that to change.

The world as we understand it would have ended if Vasily Arkhipov hadn't vetoed the decision to launch a nuclear torpedo from his submarine during the Cuban Missile Crisis.

Is an emotionless AI system in his place ever going to make the same decision he did?

How confident are you we won't put an AI system in his place, particularly when we have to assume if we don't others will?

drivebyhooting 8 minutes ago

Sounds like your fear is not of artificial intelligence but artificial incompetence. That’s a very different position from the AI doomers.

linksnapzz an hour ago

I'm not surprised that the sort of individual prone to taking Yud too seriously is also likely to be a comically-inept assassin.

Had he tried to blow up the diesel genset at a datacenter, he'd have burnt his lips on the exhaust pipe.

gradientsrneat 23 minutes ago

Maybe if the LLM CEOs stopped spreading doomer narratives to sell their products, these people would chill out.

rzmmm a few seconds ago

Too bad it's an effective marketing strategy. Negative emotions are more powerful drivers than positive ones.

thephyber 14 minutes ago

This issue is more complicated.

Sam Altman has stated that the AI revolution will “be like an infinite number of immigrants”. That’s a dangerous thing to say when the country’s political environment has convinced half of the voters that all immigrants are rapey, murderey, immoral subhumans.

Also, Sam Altman helped create OpenAI with the original goals of being an ethical non-profit, only to pivot and kick out all of the people who still wanted that original vision. Now several of the LLM CEOs are screaming “we have to stay fully on the accelerator pedal or the Chinese will get there first”, all while abandoning the ethics that supposedly made us better than the Chinese. (And yes, I understand the issues with the Chinese government and that people are different than their government).

dd8601fn 7 minutes ago

That's definitely too victim blame-y... but it's fair to say they're not helping with all the "Our new product is so dangerously capable that we can't even let people use it!" stuff.

Great marketing, I guess? But also a bit like pouring gasoline on the pile of tinder that these Terminator doomsday folks seem to be.

beloch 22 minutes ago

"There is a final irony that deserves attention. If the doomers truly hold their stated beliefs at their stated confidence levels, they should be more honest about what those beliefs imply. A few weeks before the attack, a journalist asked Yudkowsky: if AI is so dangerous, why aren't you attacking data centers? His answer, relayed by Soares: "If you saw a headline saying I'd done that, would you say, 'wow, AI has been stopped, we're safe'? If not, you already know it wouldn't be effective."

----------

There are several thousand AI data centres in the U.S. alone, and hundreds are over a thousand square meters in floor space. Think about the physical effort it would take to reliably destroy, beyond the possibility of repair, just one typical computer in your home. Now multiply that out to thousands of server racks. Even if the employees rolled out the red carpet for you and handed you a baseball bat, you wouldn't get very far. Next, consider that these data centres are popping up all over the world in the most unlikely and remote locations. They don't need workers. They just need power, water, and, preferably, lax tax and environmental standards.

Doomers are attacking billionaires because they perceive them to be the soft, meaty, weak-points of a gigantic inhuman machine. They believe that just scaring Sam Altman a little will have a huge impact compared to trying to attack a data centre. However, billionaires can afford pretty decent security. This doomer movement probably isn't going to accomplish much until they target the engineers and support staff that surround billionaires. Billionaires don't scare easily because they have so much protection, but the poorly paid and poorly secured people around them are another story.

Sam Altman doesn't need to worry, but his employees should probably keep their heads up.

hax0ron3 2 hours ago

I don't agree with Yudkowsky, but I think there's certainly a chance that he's right about AI destroying humanity. I just don't think the likelihood of that happening is as high as he thinks it is. But there certainly is a chance.

The problem with trying to stop it is, how? Even if you killed every single AI company leader and every single top AI engineer, it would almost certainly just slow down the rate of progress in the technology, not stop it. The technology is so vital to national security that in the face of such actions, state security forces would just bring development of the tech under their direct protection Manhattan Project-style. Even if you killed literally every single AI engineer on the planet, it's pretty likely that this would just delay the development of the technology by a decade or so instead of actually preventing it.

The technology is pushed forward by a simple psychological logic: every key global actor knows that if they don't build the technology, they will be outcompeted by other actors who do build the technology. No key actor thinks that they have the luxury of not building the technology even if they wanted to not build it. It's very similar to nuclear weapons in that regard. You can talk about nuclear disarmament all you want, but at the end of the day, having nuclear weapons is vital to having sovereignty. If you don't have nuclear weapons, you will always be in danger of becoming just the prison bitch of countries that do have them. AI seems to be growing toward a similar position in the calculus of states' national security.

I can think of no example in history of the entire world deciding to just forsake the development of a technology because it seemed like it could prove to be too dangerous. The same psychological logic always applies.

Aurornis 21 minutes ago

> I don't agree with Yudkowsky, but I think there's certainly a chance that he's right about AI destroying humanity. I just don't think the likelihood of that happening is as high as he thinks it is. But there certainly is a chance.

This is the rhetorical trick that LessWrongers (Yudkowsky's site) have settled on for decades: they have justified everything around the premise that there's a chance, however small, that the world will end. You can't argue that the world ending wouldn't be a bad thing, so they have their opening for the rest of their arguments, which is that we need to follow their advice to prevent the world maybe ending. They rebut any counterarguments by trying to turn it into a P(doom) debate where we're fighting over how likely this outcome is, but by the time the discussion gets there you've already been forced to accept their argument. Then they push the P(doom) argument aside and try to argue that it doesn't matter how unlikely it is, we have a moral duty to act.

tintor an hour ago

Humanity agreed, for example, that the growing ozone hole was dangerous for everyone, and worked together to ban production of the gases that damage the ozone layer. See the Montreal Protocol, an international treaty. It was highly effective. Training powerful AIs isn't different.

hax0ron3 5 minutes ago

I think that trying to stop AI development is more like trying to stop nuclear weapon proliferation than it is like fixing the ozone hole. I think the difference is that if one country works to fix the ozone hole, that doesn't make the other countries scared that they are falling behind in ozone hole fixing technology and might get conquered or reduced to subservience as a result.

Nuclear weapon proliferation seems to have plateaued recently, but I think that this appearance is partly deceptive. The main reasons it has plateaued is that: 1) building and maintaining nuclear weapons is expensive, 2) there are powerful countries that are willing to use military force to stop some other countries from developing nukes, and 3) many countries have reached nuclear latency (the ability to build nuclear weapons very quickly once the political order is given to do it) and are only avoiding actually giving the order to build nukes because they don't see a current important-enough reason to do it.

squigz 2 hours ago

> I can think of no example in history of the entire world deciding to just forsake the development of a technology because it seemed like it could prove to be too dangerous. The same psychological logic always applies.

Can't you? Haven't many (most?) countries agreed to nuclear disarmament? What about biological weapons? Even anti-personnel mines, I think?

hax0ron3 2 hours ago

Those weapons are still all being developed and would be brought out in any actually existential war where they seemed useful. The agreements would last only as long as the wars were not existential, or as long as the various countries involved believed that use of them, and the resulting retaliation in kind, would be more destructive than not using them. But one way or another, countries still develop them.

switchbak 35 minutes ago

China is rapidly building out their nuclear arsenal as we speak, and the USA is undergoing an expensive replacement of its own arsenal as well.

That kind of idea might have held water in the 90's, but that's not the world we live in any longer.

dpark 2 hours ago

> Haven't many (most?) countries agreed to nuclear disarmament?

This misses the point. He specifically said the entire world because the point is that someone will develop AGI (theoretically; I’m not making a statement about how close we are to this).

9 nations have nuclear weapons despite non-proliferation agreements and supposed disarmament. It’s not enough for most countries to agree not to build nuclear weapons if the goal is to have no nuclear weapons. Same for AGI. If it can be developed, you need all nations to agree not to develop it if you don't want it to exist. Otherwise it will simply be developed by nations that don’t agree with you.

(Also arguably the only reason most nations don’t have nuclear weapons is the threat of destruction from nations that already have them if they try.)

morningsam 2 hours ago

>The technology is pushed forward by a simple psychological logic: every key global actor knows that if they don't build the technology, they will be outcompeted by other actors who do build the technology. No key actor thinks that they have the luxury of not building the technology even if they wanted to not build it.

I don't remember who, but someone made an interesting point about this around the time GPT-4 was released: If the major nuclear powers all understand this, doesn't that make nuclear war more likely the closer any of them get to AGI/ASI? After all, if the other side getting there first guarantees the complete and total defeat of one's own side, a leader may conclude that they don't have anything to lose anymore and launch a nuclear first strike. There are a few arguments for why this would be irrational (e.g. total defeat may, in expectation, be less bad than mutual genocide), but I think it's worth keeping in mind as a possibility.

boothby an hour ago

Cold comfort: AGI will not genocide humanity until it can plausibly automate logistics from mining raw materials to building out compute and power generation.

necovek 2 hours ago

I am disappointed "Doomerism" is not an official name for the practice of putting Doom on anything and everything!

bjourne 2 hours ago

Yes, but against the angry doomers we have hordes of cheerful coomers who welcome the fruits of the labour of the AI with one open arm.

jmull 2 hours ago

People are basing their entire world view on not understanding the nature of exponential phenomena.

Exponential phenomena only begin in a medium that holds the potential for them, and they necessarily consume that medium.

That is, exponential phenomena are inherently self-limiting. The bacteria reach the edge of the petri dish. When all the nitroglycerin is broken up, the dynamite is done exploding.

That doesn't mean exponential phenomena aren't dangerous -- of course they can be. I mentioned dynamite, after all. And there are nukes.

But it's really far from "AI is improving exponentially now" to "AI will destroy everyone".

I see AI companies consuming cash at unsustainable rates. Since their motive is profit, this is necessarily limiting. Cash, meanwhile, is a proxy for actual resources, which have their own, non-exponential limitations -- employees, data centers, electricity, venture capitalists with capital, etc.

AI isn't going to keep improving exponentially -- it can't. Like every other exponential phenomenon, it will consume the medium of potential that supports it (and rather quickly).
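
To make that concrete, here's a minimal sketch (the growth rate r and carrying capacity K are purely illustrative stand-ins, not a model of AI or of its actual resource limits):

    # Toy comparison: unconstrained exponential growth vs. logistic growth,
    # where the same growth rate is throttled by a finite "medium" K.
    def simulate(r=0.5, K=1000.0, x0=1.0, steps=25):
        exp_x, log_x = x0, x0
        for t in range(1, steps + 1):
            exp_x += r * exp_x                    # ignores the medium: grows without bound
            log_x += r * log_x * (1 - log_x / K)  # growth stalls as the medium is used up
            print(f"t={t:2d}  exponential={exp_x:14.1f}  logistic={log_x:8.1f}")

    simulate()

The exponential column keeps exploding; the logistic column flattens out near K. The curve that ignores its medium is the one that doesn't exist in practice.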

saulpw 2 hours ago

Agreed. But, many said the same thing about Moore's Law or its equivalents in 1985, 1995, 2005, 2015, and yet the pace of core hardware development has been relentlessly exponential. I keep thinking we must be approaching some kind of limit (and surely we must be!) but I've learned not to bet on it.

avidiax 2 hours ago

It's often constructive to consider the edges and corners of the space of possible positions, to understand the weaknesses of the various arguments.

For this case, imagine that you're an accelerationist, and you want the AI to take over, kill everyone, and usher in a new AI-only age for the planet, and later the universe.

How disappointed are you as this person? It's bottlenecks everywhere. Communities don't want to allow datacenters to be built. You literally want to bring nuclear power plants online just to run a few DCs, and those historically take 10+ years to permit and build. There's not enough AC switchgear and transformers to send power into the DCs, even if you have the power. Chip prices are skyrocketing, and you have to sign a 3-4 year contract to get RAM now.

And meanwhile, the AI doesn't have many robot bodies. Tesla might put some feeble robots into mass production in a few years, but humans can knock those over with a stick into a puddle of water and it's over for that robot. The nuclear arsenals are all still in bunkers and submarines requiring two guys to physically turn keys, and the computers down there are so old they use 8" floppies.

Sure, there's some good progress on autonomous weapons, but a few million self-destructing AI drones built by human hands isn't going to cut it.

So as a hypothetical person hoping that AI destroys everything, you'd be rather impatient, I think, unless you think the AI can trick humanity into destroying itself relatively soon.

greenavocado 2 hours ago

> People are basing their entire world view [on things getting worse because their leadership is abandoning them or actively working against their interests]

We understand hard times and are willing to work together to solve problems, but not when leadership is actively harmful.

Fixed that for you.

jmull 2 hours ago

That's a completely separate point, is it not?

Maybe write it up and post a top-level comment if you think it's a point worth making.

AndrewKemendo 3 hours ago

Wouldn’t be a proper technology revolution without some version of labor realizing they are commodities and rejecting the collapse of the current form of labor power, so that tells me we’re actually in the transition from an old economic process to a new one.

Don't forget, the Luddites were correct about the direction that automation and labor power were going. They weren’t blindly “fighting machines”, they were fighting inequitable working conditions.

https://en.wikipedia.org/wiki/Luddite

>Periodic uprisings relating to asset prices also occurred in other contexts in the century before Luddism. Irregular rises in food prices provoked the Keelmen to riot in the port of Tyne in 1710 and tin miners to steal from granaries at Falmouth in 1727. There was a rebellion in Northumberland and Durham in 1740, and an assault on Quaker corn dealers in 1756.

eemax 2 hours ago

> The Rational Conclusion of Doomerism Is Violence

No it isn't. The most prominent "doomer" has a strong grasp of and a deep, wholehearted appreciation for the principles of liberalism and the rule of law:

https://x.com/ESYudkowsky/status/2043601524815716866

Which the author of this piece of slop appears to lack.

arduanika 2 hours ago

It is true that only Yudkowsky gets to say what the rational conclusion of his ideas are. Nobody else gets to speculate. Only the pope of rationalism, because he's the rational one here. See? It's right there in the name!

> this piece of slop

Citation needed. Or maybe we need to update the title of that children's book for internet arguments: Everyone Who Disagrees With Me Is Slop.

The Yud post you linked is not slop, either. It's not LLM-generated, nor is it insincere. But I do have to point out: He's the one who is slinging the tsunami of words here, not Alexander Campbell.

handoflixue an hour ago

I think it's rather relevant that the community itself rejects the logic you're trying to impose on it. You can straw-man any sort of conclusion onto any sort of philosophy. This will not actually help you much at all if you're trying to predict what people will actually do.

If the only people that reach your conclusion are ones that don't actually subscribe to the philosophy, then it doesn't matter, because no one is actually acting on those conclusions.

And if we want to hold people responsible because others pervert their ideas, then we have to accept that Jesus Christ was a horrific, evil person for preaching "Love thy Neighbor"; just look at the crusades that were somehow the "rational conclusion" of that philosophy!

eemax 36 minutes ago

> It is true that only Yudkowsky gets to say what the rational conclusion of his ideas are. Nobody else gets to speculate. Only the pope of rationalism, because he's the rational one here. See? It's right there in the name!

No, I am saying that Yudkowsky's views are straightforwardly compatible with bedrock principles of liberalism, and the author of the piece fails to acknowledge that compatibility or grapple with them himself. It's not about "rationalism" or who is "allowed" to speculate.

I called it slop because it says false things that have the hallmark of LLM style, e.g.

> The Sequences build the liturgy: a small caste of correct thinkers, epistemically and morally superior, whose rationality entitles them to govern what the rest of humanity is allowed to build. It’s not a safety movement. It’s a priesthood with an origin story written in fanfiction.

aaroninsf 2 hours ago

Hot take:

I assume the author wrote this with the expectation that much of the readership would gasp, and react with "the natural horror all right thinking folk would have in response to violence of any kind."

Sorry, lol, no.

The appropriate question for "all right thinking" folk is very different: if argumentation has no impact and it's obvious that it shall have none—what other avenue do you expect opponents, who take the risks seriously, to take...?

That's not a rhetorical question.

To put it bluntly: the machinery of contemporary capitalism, especially as practiced by our industry, very clearly leaves no avenue.

How many days ago was Ronan Farrow here doing an AMA on his critique of Altman—whose connection to this specific community is I assume common knowledge...?

How many of you have carried, or worked beneath, the banner, move fast and break things...?

What message does that ethos convey about the extent to which "tech" is going to respect community standards, regulation—the law?

And on the other edge: what does this ethos enshrine about how best to accomplish one's aims?

One of the bigger domestic stories this past week which has inflamed a certain side of Reddit, is the "disgruntled employee torches warehouse" one.

Consider also—and I'm deadly serious—the broader frame narrative we are all laboring within today: that the new contract of the capitalist class—including and perhaps especially those in "tech," e.g. in the Peter Thiel circles—seems very much to be, "social stability via surveillance and a police state, rather than through equity and discourse."

When code is law, the law is buggy.

When there is no recourse through the law, you get violence.

arduanika 2 hours ago

This has been decades in the making. We had premonitions of the violence that would come, for example with the Zizians. Get ready for what happens when a million blogposts worth of bad philosophy, bad analogies, and anti-institutionalist hubris are deeply indoctrinated into a vast, decentralized network of highly capable engineering minds who lack common sense and normal restraints.

They hate the framing that LLMs are just stochastic parrots, which is ironic, because Yudkowsky's many parrots are (latent, until now) stochastic terrorists.

PaulHoule 3 hours ago

... been saying this for years. If you really believed what Yudkowsky says you wouldn't just be posting on lesswrong, you would be taking direct action against a clear and present danger.

jmull 2 hours ago

No you wouldn't.

Look at what the molotov cocktail guy accomplished by "taking direct action against a clear and present danger": Nothing, besides casting himself as an extremist nut, increasing the resistance to his viewpoint in the population at large.

It's downright dumb to attempt to impose your will via unilateral violence when you aren't in a position to actually complete the goal. Note that that goes whether you're actually right or not.

hax0ron3 2 hours ago

>casting himself as an extremist nut, increasing the resistance to his viewpoint in the population at large.

I think the majority of the population at large either doesn't care about what happened or wishes that the guy had actually managed to kill Altman. Not even necessarily because Altman is involved with AI, but just because he is extremely rich. I don't imagine any increased resistance from the population at large - the population at large either doesn't mind when rich people are killed or loves it. The exceptions would be people like entertainers who develop a parasocial relationship with the public and provide direct joy to people, but AI company leaders don't fall into that category.

That said, it is true that killing Altman would almost certainly achieve nothing. See my other post in this thread.

throwaway27448 2 hours ago

Obviously, ineffective action will be counterproductive. I recommend effective action.

PaulHoule 2 hours ago

I'm not advocating for that, I'm just saying the whole thing is performative and gets taken at face value in a way that it should not be.

If you wanted to be a contrarian concerned about x-risks go try to find $1B to pay Embraer or another minor aviation vendor to make a plane to do stratospheric aerosol injection or something.

---

If you want my diagnosis, it's this: in a time of lower social inequality, cults frequently tried to steal labor and money from a broad base of people.

For instance, in the L. Ron Hubbard age, Scientology would treat you as a "public" if you had money to take; if you didn't, or after you'd been bled dry, you would be recruited as "staff". Hubbard thought it was immoral to take donations without giving something in return, so it was centered around getting people to spend on "auditing". Between Dianetics in 1950 and the current Miscavige age, income and wealth have become concentrated, and he changed that single element of the Hubbard doctrine: now it is all about recruiting money from "whales" who donate to the International Association of Scientologists (IAS).

https://tonyortega.substack.com/p/scientologys-ias-trophy-wi...

(A good backgrounder on pernicious cults is https://en.wikipedia.org/wiki/Snapping:_America%27s_Epidemic...)

In the case of the Yudkowsky thing, the mass just doesn't have a lot of money to steal after paying the rent, and turning the labor of the unskilled and ignorant (even if they think otherwise) into money is a case of the juice not being worth the squeeze. So the point is to build a Potemkin village that looks like a social movement, which creates a frame where you can get money from sources such as "SBF steals it and gives it to the movement" as well as "rich kids who inherited a lot of money but don't have a lot of sense".

adjejmxbdjdn 2 hours ago

Your statement is incorrect.

If you really believed what Yudkowsky says you would be taking action that maximizes the chances of reducing a clear and present danger.

Between Yudkowsky and the Molotov cocktail guy, which approach do you think had and is having more of an impact?

An individual can rarely, if ever, enact change through violence. The history of nearly all successful movements shows that violence often makes change harder.

Rallying people through speech is a far more successful way for an individual to enact change than violence.

virissimo 2 hours ago

Does this apply to other domains or just AI? For example, if you think gain-of-function research accidents put millions of lives at risk, is the logical next step to quit your job and become a terrorist?

kelseyfrog 3 hours ago

Disagree. Just one more blog post. I swear, one more blog post will do it.

SpicyLemonZest 2 hours ago

They are! Yudkowsky sat down with Senator Bernie Sanders last month to explain what's at stake, successfully convinced him that it's a big deal, and Sanders has now proposed a national moratorium on AI data centers (https://www.sanders.senate.gov/press-releases/news-sanders-o...) to help slow things down. That's pretty direct, and a lot more useful than random violence by random people.

AndrewKemendo 3 hours ago

That pesky basilisk to worry about though

vrganj 2 hours ago

Yeah I mean Lenin recognized that a century ago.

The only meaningful way to effect change against the oligarchy is, and always has been, violence.

This is not a novel insight.

kelseyfrog 2 hours ago

War is a mere continuation of policy by other means[1]. When policy through legislation is empirically impotent[2], calls to continue attempts at a failed strategy are indistinguishable from being told, "continue losing."

There is a real, undeniable build-up of political tension. When it fails to be released in the legislative arena, it doesn't dissipate. When we point out that "the quality of life right now is the best it's ever been," it doesn't dissipate. When we try to crush it, it doesn't dissipate. The last remaining pressure release is violence, however condemnable it may be. Perhaps we should, you know, fix participatory democracy rather than pontificating on a natural outcome of a machine we created yet refuse to fix. If fixing it continues to be more difficult than eliminating violence, we should continue to expect violence.

1. https://oll.libertyfund.org/pages/clausewitz-war-as-politics...

2. https://archive.org/details/gilens_and_page_2014_-testing_th...

arduanika 2 hours ago

> "fix participatory democracy"

Ah yes, a popular codeword for "I did not get my way".

There is no electoral majority behind the AI doomer cult. It is not a failure of "democracy" that they haven't gotten what they want. It is a failure of their activism, or just the general unpalatability of their wild ideas, or both. They don't get to throw Molotovs just because they lose.

vrganj 2 hours ago

Maybe democracy is fundamentally flawed because the demos is? How should one act in such a situation?

kelseyfrog 2 hours ago

Ah yes, "continue losing."

Go ahead and read Gilens and Page and tell me participatory democracy is working. Until then, expect more of the same impotent condemnations and a refusal to understand the social mechanics producing acts of violence.

sleepybrett an hour ago

> There is no electoral majority behind the AI doomer cult.

how can you be sure? has anyone polled it? are they too scared to poll it?

unethical_ban 2 hours ago

"Those who make peaceful revolution impossible will make violent revolution inevitable."

Wealth inequality isn't just about economic wellbeing but political power. Separately, the US legislature is almost entirely crippled, only able to pass one major bill per presidential term, while the dominant political party celebrates this and cedes all power to an executive whose intention is to tear apart the administrative state and bring about techno-feudalism.

I once again note that none of the AI leadership has even tried to address government policies to guarantee a baseline of economic wellbeing for our citizens, while they acknowledge AI will likely have massive, disruptive impacts on society and economy. Anthropic is the only one that has shown any public concern for the dangers of AI by insisting on some moral baseline of AI use in the Defense department.

tcoff91 2 hours ago

I have a different perspective on this given that I view climate change as the biggest threat we face as a species.

I feel like robotics is the only hope we have to be able to scale action against climate change. It's clear that emissions reduction is just not going to happen, and catastrophic warming is inevitable. Therefore we will have to do an extraordinary amount of labor in order to modify our environment to save civilization from sea level rise and to be able to repair damages caused by natural disasters. There just aren't enough humans to do everything that is going to need to be done.

It sure would have been nice to have 100 thousand firefighting robots battling the fires in Los Angeles last year.

Given that we need better AI in order to make these robots happen, I view AI as a critical technology that we need to maintain civilization.

derektank 2 hours ago

Wouldn’t geoengineering through stratospheric aerosol injection (likely with sulfates) be both cheaper and less technically challenging than changing the built environment? If we’re accepting massive climate changes anyway, it seems like taking the risk with solar radiation modification would be the next step.

dpark 2 hours ago

Ah, yes. Let us spray more sulfates into the air. Let’s fight global warming by poisoning all the waterways and oceans with more acid rain.

tcoff91 2 hours ago

That would require global consensus, and could ignite wars if there isn't one. It seems very likely that this could have unanticipated consequences that could be worse, but admittedly this is an area I don't really know much about.

graemep 2 hours ago

That is interesting, and I think you are right that emissions reductions will not happen any time soon (eventually, but it will take a while).

I am not convinced we need robots. A lot of it is not all that hard to do. For example, better forestry management to prevent forest fires. A lot of cities rebuild big chunks of their infrastructure over a century or so anyway. The problem is more social and political - you get worse forest management because you can blame climate change when it happens.

dpark 2 hours ago

> It sure would have been nice to have 100 thousand firefighting robots battling the fires in Los Angeles last year.

Yes, but also 100k firefighting robots is kind of a lot. How many firefighting robots should exist in the world? And how many seawall-building robots for the rising sea level? And how many other robots? At what point does the environmental cost of all these robots offset their benefits?

arduanika 2 hours ago

Upvoted because this is an interesting take, but I disagree at least somewhat. I think you should be wary whenever you've narrowed down your options to, "in order to solve the top-priority problem X, our only hope is solution Y."

I agree that some technological solution might be the key to dealing with the climate, and that maybe robots would be part of such a solution, maybe powered by similar techniques as the current wave of AI. It's not an insane scenario, but it's worth keeping your perspective open to other possible developments.

tcoff91 an hour ago

I definitely am open to other possible developments and accept that I'm likely wrong just as basically everyone is wrong when predicting the future.

irishcoffee 2 hours ago

https://www.howeandhowe.com/civil/thermite

The firefighting robots of which you speak already exist.

tcoff91 2 hours ago

Hell yeah, those look awesome. I look forward to the autonomous versions that don't require fully manual remote operation. It'd be great if coordinators could have like an RTS-style view and command these like they're StarCraft units.