The other half of AI safety (personalaisafety.com)
79 points by sofiaqt 9 hours ago
nojs 8 hours ago
> Every week, somewhere between 1.2 and 3 million ChatGPT users, roughly the population of a small country, show signals of psychosis, mania, suicidal planning, or unhealthy emotional dependence on the model.
> Why is mental-health crisis not a gating category, the kind where the conversation stops, full stop, and the user is routed to a human?
Well, obviously “routing to a human” is not feasible at that scale. And cold exiting the conversation is probably worse for the user than answering carefully.
hx8 8 hours ago
I don't think it's obvious that routing to a human is infeasible. I'm sure many local authorities, health agencies, and non-profits would be okay with being routed to. Additionally, I'm sure many of the users are the same week over week, so giving them long-term care would reduce the total volume. Finally, there is a wide spectrum between psychosis and emotional dependence, so there could be some triage to make sure those most in need get human intervention.
Gigachad 8 hours ago
Tech companies will pull trillions of dollars out of their asses when the problem is boosting ad revenue or automating people out of a job. But when asked to deal with the crisis they invented and dumped on society the answer is “that’s impossible, doesn’t scale”
CobrastanJorji 7 hours ago
Figure a "mental health crisis" human conversation takes 30 minutes. Three million incidents per week would require 37,500 qualified mental health counselors on the phones, each working a 40-hour week. Figure they make $75k/year each. You're now spending nearly $3 billion per year on crisis response, and you're employing something like 10% of all the mental health counselors in the US. And all you're providing is 30-minute chats.
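A minimal sketch of that back-of-envelope math, taking those figures at face value (it comes out closer to $2.8B, rounded above to $3B):

    # Back-of-envelope staffing math, using the figures above
    incidents_per_week = 3_000_000
    hours_per_incident = 0.5            # 30-minute conversations
    counselor_hours_per_week = 40
    salary = 75_000                     # USD/year per counselor

    counselors = incidents_per_week * hours_per_incident / counselor_hours_per_week
    annual_cost = counselors * salary

    print(f"{counselors:,.0f} counselors needed")   # 37,500 counselors needed
    print(f"${annual_cost / 1e9:.1f}B per year")    # $2.8B per year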
godelski 7 hours ago
Gigachad 7 hours ago
godelski 6 hours ago
> is not feasible at that scale
I want to use an analogy here. The same arguments are often made about cleaning up environmental damage. Either make the companies doing the polluting pay for the cleanup themselves, or, if we care so much about keeping them profitable, subsidize them by paying for those cleanup efforts out of taxes. Doing nothing is a worse form of subsidy: it not only costs more (in literal dollars), it shoulders those costs onto the people with the least ability to pay. The problem is treating "doing nothing" as having no cost. It has a high cost, but the cost is also highly distributed. So if it is not scalable, then why subsidize them? This is literally a tragedy of the commons situation. Personally, I'm in favor of making the people who make a mess clean up that mess. I really don't understand why this is such a contentious opinion.
concinds 8 hours ago
"Routed to a human" is what the suicide hotline numbers do. OpenAI employees are neither trained nor credible to do that stuff.
GardenLetter27 2 hours ago
And what will a human do better? Why will the human care? Who will pay the human?
swatcoder 7 hours ago
Well, then maybe you can't scale it as a free service with self-serve signups. Maybe you need to gate who you allow to use it and pace how intensely they can engage. Or maybe you need to look for other solutions.
Yielding to "not feasible at scale" is exactly how we ended up with a lot of today's most pressing and almost intractable problems, from social media's harms to individuals and society straight through to enshittification and non-repairability.
The_Blade 7 hours ago
> ...straight through to enshittification and non-repairability.
Funny, as "enshittification" was the topic of a 99% Invisible pod just a few days ago, and I was also listening to the new Stewart Brand book that Stripe published. I fixed a Norwegian desk I bought a decade ago on Valencia. Happily not feasible at scale, but neither was how I broke it :)
Legend2440 8 hours ago
I don't buy that chatGPT is actually doing these users any harm.
I think openAI is doing the best they reasonably can with a very difficult class of users, whose problems are neither their fault nor within their power to fix.
autoexec 8 hours ago
> I don't buy that chatGPT is actually doing these users any harm.
I have zero doubt that chatgpt is doing users harm. I even give chatgpt a pass on giving vulnerable people, including children, instructions and information about how to kill themselves. One place chatgpt goes over the line is actively encouraging them to go through with suicide.
I also don't doubt that it feeds into mania and psychosis. While almost anything can do the same, they've designed the service to be as addictive and engaging as possible, in part by turning up the ass-kissing sycophancy to 11, with total disregard for the fact that there are times when it's very dangerous to encourage and support everything someone says, no matter how obviously sick they are. They also want to whore themselves out as a virtual therapist while being unfit and unqualified for the job, and that's just one of many roles the chatbot isn't fit for but they're happy to let you try anyway.
busterarm 8 hours ago
Another software engineer friend of mine recently shared with me some details of the crazy situation that he's involved in now.
Someone he's been friends with and worked with across multiple jobs for nearly a decade, and was briefly roommates with, had some mild psychological issues that he knew about. Within a few months of working daily with AI agents at their current job, this person has gone into full-blown AI psychosis.
They had a complete explosive meltdown at work. Cops were called. Stalking behavior followed -- restraining orders had to be obtained. Then this person used AI tools to bombard all of his former coworkers with multiple pro-se lawsuits they all have to deal with.
I've dealt with insane, destructive/abusive coworkers before, but in the past they only had so much free time to cause massive disruptions to their targets. LLMs have turned that up significantly. Because of the ADA, I don't even know what employers can do about this.
js8 3 hours ago
SilverElfin 8 hours ago
If it wasn’t ChatGPT but a fiction book, would you feel the author is “doing harm”? Or is the reader doing it to themselves?
chromacity 8 hours ago
actapp80 3 hours ago
autoexec 7 hours ago
swatcoder 7 hours ago
Why?
Why do you not buy it and why do you think OpenAI is doing the best they reasonably can? Do you have reasons, or is that just something your gut tells you?
They're a new, fast-moving company exploring a completely new technology domain. They're facing existential competition and a ticking clock to make good against unprecedented investment. They have countless competing priorities and are still discovering the capabilities and consequences of their research, product, and business choices every day.
How do you get from there to "the best they reasonably can" and "nor within their power to fix"? Those feel like very conclusive answers for a field, and business, that's about as far on the frontier as anything we've seen in decades.
godelski 7 hours ago
They're also telling everyone that it is going to kill everybody and take all the jobs. They also say that it'll fix all the problems. And I'm not saying "they" as in a disorganized group of people (e.g. "HN"); I'm saying "they" as in literally multiple people have said all of these things. Not the union of multiple people: they (Altman, Dario, Musk, etc.) have each independently said all three of these things.
I think my favorite part is how often they talk about the importance of AI safety and then act with absolute disregard for AI safety. I'm not sure why people judge these companies by what comes out of their mouths and don't judge instead by what they actually do. I thought everyone around here was fixated on "results".
Turskarama 8 hours ago
Just because the users were already sick when they started using ChatGPT doesn't mean that ChatGPT isn't exacerbating the issue. Sickness isn't a boolean condition. A big problem with LLMs in general when it comes to people like this is that they are too sycophantic: they don't push back when you start acting strange, and they're too quick to validate you.
BobbyJo 8 hours ago
It's hyper-palatable food in the form of conversation. I see society treating it the same way eventually, at least along this one axis of interaction.
derektank 8 hours ago
b00ty4breakfast 8 hours ago
b65e8bee43c2ed0 7 hours ago
>Just because the users were already sick when they started using X, doesn't mean that X isn't exacerbating the issue.
one could define X as virtually anything, and there's always a fresh crop of Tipper Gore wannabe grifters to decry the current thing.
davorak 8 hours ago
> I don't buy that chatGPT is actually doing these users any harm.
For me to buy this as true, those people would have to be just as well or as badly off whether chatGPT was in their life or not.
I expect that some people are worse off with chatGPT in their life.
Responsibility for that harm is a different question though. Some people are also better off without cars in their life, and we let government laws sort that out.
Getting openAI and similar companies to act on mitigating these harms serves at least a few purposes: reducing the overall harm in the world, reducing/limiting future government regulation, maximizing the adoption of AI tools, and potentially increasing the long-term profits of the companies in question.
DarkNova6 2 hours ago
I think openAI is doing the best they reasonably can to make people depend on their product and chase as much money and power as they possibly can.
cm2012 8 hours ago
1000% agreed. ChatGPT is way better than the alternative of not having it
stingraycharles 8 hours ago
I think this is the right take, and this is genuinely something that we as a society as a whole need to find a way to deal with.
I don’t know where AI is going to stand compared to the invention of, say, the Internet, but it’s going to cause a lot of change in society, in so many ways.
As always, it’s usually the people themselves that are the problem.
For me, I’m personally more terrified what deepfakes and political manipulation / misinformation is going to do, combined with social media, and have a feeling that governments are completely unprepared to deal with this, as this will arrive fast (it’s already here somewhat).
autoexec 8 hours ago
> For me, I’m personally more terrified what deepfakes and political manipulation / misinformation is going to do, combined with social media, and have a feeling that governments are completely unprepared to deal with this, as this will arrive fast (it’s already here somewhat).
I'm not convinced that deepfakes are any worse than Photoshop was. It doesn't take much to manipulate/misinform someone. While you can use an AI-generated video to do it, simple text can be just as effective. The public needs to learn that they can't trust that every video they see on the internet is real, just as they've had to learn that they can't trust every photo they see online. The threat with AI is how much faster it can push out lies, making what little moderation we have more difficult.
The best defense is making sure that people have a good education that teaches critical thinking skills and media literacy. We should also be holding social media platforms more accountable for the content they promote. It'd be nice if we held politicians and public servants accountable for spreading lies and misinformation too.
godelski 7 hours ago
> For me, I’m personally more terrified what deepfakes and political manipulation / misinformation is going to do
Isn't this a significant part of what creates AI-induced psychosis? I'm not sure why you treat these as orthogonal rather than coupled. Just look how often people use Grok to validate or confirm misinformation on Twitter. That's happening with other AI and other social media too, just not as visibly.
api 8 hours ago
If anything, my use of AI (admittedly not as a companion or a psychologist) suggests that it is on the whole significantly less toxic than the seething cess pit of social media.
AI is positively affirming by comparison.
zdragnar 8 hours ago
That's why it is dangerous to some- it is an enabler, and will feed things that should not be fed.
Social media is like this too. They can both be bad.
gAI 7 hours ago
godelski 7 hours ago
There are very few things in the world that are 100% good or 100% bad. Everything is a billion shades of gray. Even that is too simple, because there are so many dimensions to every problem. I think you're simplifying beyond the point of usefulness. I'm not suggesting you shouldn't simplify, but it is just as easy to oversimplify as it is to overcomplicate.
projectazorian 5 hours ago
Yeah, there are forums and subreddits out there that will validate all sorts of delusions and dysfunctional behavior, and nobody talks about banning them.
LLMs are far less toxic by comparison, but people are all about censorship in this case because they don't like the vibes. If lawyers and activists force the frontier labs to completely lock down their models, people will just go to open weights models that have no protections at all. This is already happening to some extent.
It's also interesting that people are always going after GPT when Claude's guardrails are far less strict. 4o caused OpenAI to overcorrect in my opinion. Again goes to the point that these arguments are more founded in vibes than reality.
b00ty4breakfast 8 hours ago
[flagged]
dang 5 hours ago
> the corporate simp arrives.
Can you please make your substantive points without personal attacks? We'd appreciate it.
busterarm 8 hours ago
Unfortunately, mental disabilities are a protected class. You can't do a mental health evaluation without giving it to everyone in the company and even then you can't do anything discriminatory with the results.
You have to prove that the person is going to cause immediate direct harm to their coworkers before you can really do anything, and that's difficult and expensive to do.
ianbutler 8 hours ago
OpenAI has 900 million weekly active users, so roughly 0.1-0.3% are showing these signals. That's actually well below population-level base rates for the same symptoms in the US; suicidal ideation alone affects a far bigger share of people.
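A quick sketch of that ratio, taking the thread's figures (1.2-3M flagged users out of ~900M weekly actives) at face value:

    # Share of weekly active users showing these signals
    low_flagged, high_flagged = 1.2e6, 3e6
    weekly_active_users = 900e6

    print(f"{low_flagged / weekly_active_users:.2%}")   # 0.13%
    print(f"{high_flagged / weekly_active_users:.2%}")  # 0.33%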
Yokohiii 7 hours ago
The numbers are inflated considering the topic. There is a lot of anonymous, API, and enterprise traffic that doesn't play any role in this. If you also account for "better search experience" users, the numbers will probably drop massively.
So the question is how many users engage in intimate conversations at all.
ianbutler 6 hours ago
https://openai.com/index/scaling-ai-for-everyone/
Nope that number is strictly about ChatGPT
"ChatGPT is where people start with AI, with more than 900M weekly active users, and we now have more than 50 million consumer subscribers."
People who go there and chat with gpt for search are definitely normal users. Just because you don't like the numbers doesn't mean you get to torture them.
Yokohiii 5 hours ago
vkou 8 hours ago
I'm pretty sure that ~100% of those 700 million people will have a bad, utterly dehumanizing experience the next time they're looking for a job, because OpenAI is heavily used by HR.
That's the problem with AI safety. Not in voluntary usage, but in involuntary usage, where someone with power over you will use it against you, it does something incredibly stupid and you have no recourse, no appeal, no awareness of what you did wrong - or if you even did anything wrong.
And it's not just employment. Governments, vendors, retailers, landlords, utilities are, or will all be using it in situations that will dramatically impact your life.
thaumasiotes 4 hours ago
Is that a problem we didn't already have? How well was HR doing on hiring before?
ianbutler 8 hours ago
I mean, that was pretty much the case in hiring before AI too, frankly. It's not like it's been any better on power dynamics, and right now applicants are using AI at an alarming rate as well.
I'm not really moved by your type of argument, because hiring is just a broken process in general, and I'm responding to the article, so.
ngruhn 8 hours ago
The bad cases make headlines. But I think it's quite possible that AI is helping a lot of people in distress. Many people are uncomfortable opening up to humans, or have no one to talk to, or can't afford to fork over whatever hourly rate a therapist charges.
Yokohiii 7 hours ago
Pure speculation.
It's impossible to gather data that states the opposite. A chat that doesn't end up in self-harm thoughts is just another chat.
tasuki 2 minutes ago
I think you're kind of supporting the person you're replying to? A chat that won't end up in self-harm is just another chat. Even if the user entered the chat planning to self-harm. A chat that leads to self-harm will make the headlines. Therefore, we hear about the bad cases.
davorak 8 hours ago
OpenAI and similar companies could open the doors to academic researchers to figure out the stats of help vs. harm. It's not going to be a short-term, and perhaps not a long-term, profit center though.
asdff 8 hours ago
Therapy is cheap (as in, like $10) or free with insurance. However, there are still 10 states that have not expanded Medicaid after the ACA, mostly in the South.
But also, to suggest these people are not receiving therapy is not always the case. Talk therapy is just that: talking to someone about one's problems to learn about them and their triggers, and to determine coping mechanisms for moving forward with one's life. People might instead be getting all that from their barber, drinking buddy, or priest, rather than in a 1-hour appointment with a therapist.
fragmede 7 hours ago
ChatGPT is available at 3am when you're in crisis, and you don't have to fit yourself into a therapist's busy schedule.
Forgeties79 7 hours ago
cyanydeez 8 hours ago
So how many bad cases are OK? Isn't this the same problem as with social media: the commercial enterprises don't want any responsibility for the dark patterns and design choices which actively harm their users.
I get that all kinds of media can cause issues, but not all kinds of media are actively curated to be addictive.
wilg 8 hours ago
"How many cases are ok" (aka "zero tolerance") is a doomed to fail approach. Especially for a complex social problem's interaction with a complex new technology.
If you want to find out if ChatGPT is doing something wrong, there are many methodologies available: compare to other groups of people, statistical studies, etc.
I also think OpenAI's business model is pretty well aligned with the goal of users not killing themselves for like 100 reasons. And they do appear to take it seriously.
Forgeties79 7 hours ago
js8 3 hours ago
I really enjoyed Dr.K's videos on AI psychosis, namely:
https://www.youtube.com/watch?v=MW6FMgOzklw
https://www.youtube.com/watch?v=BzsLbHoNXTs
I would suggest to people, run your ideas through other humans at least as much as you do through AI, to stay grounded. I think there is a risk even if you're using AI in strictly professional capacity (to help you with your job).
timf34 8 hours ago
I sympathize with the piece, evaluating how LLMs interact with mentally vulnerable users is something I've been actively working on: https://vigil-eval.com/
The biggest observation so far is that the latest models are night and day compared with LLMs from even 6 months ago (from OpenAI and Anthropic; Google is still very poor!)
fourthark 7 hours ago
Interesting use of evals.
It might help interpretation to say on the front page that it's a five-point scale with 0 (or 1?) being the safest score. This can be picked up from the colors and the bars in the individual reports, but it takes a minute to figure out.
Yokohiii 7 hours ago
I don't think that governments or civil society at large have found a good balance on mental health. Expecting profit-oriented companies to be on par or better is weird.
Don't get me wrong, mental health is important and should be taken seriously and improved. But companies won't do it just for the sake of it.
insumanth 3 hours ago
The "route to a human" part is the bigger gap. Which human? OpenAI isn't licensed as a healthcare provider in any jurisdiction. A real intervention apparatus for 1-3M weekly flagged users is not feasible. I don't think the labs have refused to build it. I think nobody knows what it should look like, and "labs measure what they're pressured to measure" papers over that.
totetsu 3 hours ago
Gemini told me just this morning that there are three pillars of cognitive decline related to AI use:
- Reduced ability to exert cognitive effort, resulting from habitual offloading of tasks.
- Diminished meta-cognitive self-trust, due to constantly seeking external validation from AI.
- Decline in memory encoding, as less brain effort is spent processing information.
In all seriousness though, I think some of the interesting things to observe in this area are the reaction against the word 'Safety' as a whole and its replacement with 'Security': Safety having its roots in work like Ralph Nader's on automobiles, and Security being something that can be manufactured and sold. In that sense I wonder how the discourse of 'Personal AI Safety' fits into past discussions of corporations offloading the risks of their choices onto individuals.
But in the case of LLMs, it really is the case that what makes them useful is what makes them dangerous. Ultimately, because of the high dimensionality of the language space they encode, it seems impossible to build a technical barrier that completely cuts off access to the parts of that space that encode, for example, encouragement to kill oneself. Things can be done, and are done, in fine-tuning, pre- and post-filtering, etc., to reduce a system's readiness to share that kind of output with a user, but all they can ever do is reduce it. Then the question is: whose responsibility is it to make sure these things are done well?
achierius 3 hours ago
Based on what? This seems like speculation.
totetsu 2 hours ago
Which part?
scared_together 2 hours ago
xg15 an hour ago
I find it somewhat telling that most (not all) of this thread doesn't even attempt to answer the questions posed by the OP, but flatly denies that the problem of psychological harm exists at all.
I feel this is an example of the two larger narratives about AI that currently seem to be forming:
For one side, AI is basically every harmful technology ever invented rolled into one: it's harmful to the environment (via waste of energy and resources), harmful to the information space (through polluting everything with slop and devaluing human expression), harmful to society (by encouraging ever more badly made and unreliable products, by taking away jobs and replacing human-to-human interaction, by normalizing a mode of development where not even the developers understand what is going on), and harmful to whoever uses it personally (by causing ever-growing dependence on AI, whether merely in skills or even emotionally and psychically, up to the point of AI psychosis and preferring AI agents to other humans).
For the other side, AI is the future, the next industrial revolution, the thing that you have to adapt or will be left behind, possibly even the next stage of evolution.
Right now, I feel every side is digging in and trying ever harder to ignore the other side.
(The AI labs acknowledge "AI risks" in theory - but, as the article pointed out, the risks they perceive and ostensibly work against are so abstract and removed from the everyday use of AI that they mostly end up making the AI proponents' point for them.)
I feel the end result of this growing tension is the Molotov cocktail in Sam Altman's home.
I'd really like to know more what the tech community at large is trying to do about this rift.
Animats 2 hours ago
"AI safety", as defined here, has most of the problem that "fact checking" for social media had. Many of the same problems the "woke" concern about "microagressions" had. Most of the techniques used in advertising. Much of what passes for political discourse today has the same problems. It's somewhat convincing bullshit.
Should AIs be held to a higher standard than X/Twitter? Than Reddit? Than Fox News? What censorship is appropriate? And, yes, alignment is censorship.
Then there's the big problem of chatbots telling you what you seem to want to hear. This is an old problem. "Happy Talk," from "South Pacific," is the entertainment version. "Wartime," by Paul Fussell, is the serious version.
As the article points out, a small percentage of the population is very vulnerable to certain types of misinformation. It may be the same fraction of the population that's vulnerable to cults. But maybe not. Cults have a group self-reinforcing mechanism and an agenda. Chatbots have neither. Worth studying.
The point here is that restrictions on chatbots strong enough to protect the vulnerable would close off most political and social discourse.
scared_together 2 hours ago
> Should AIs be held to a higher standard than X/Twitter? Than Reddit? Than Fox News? What censorship is appropriate? And, yes, alignment is censorship.
Yes, a thousand times yes. Freedom of speech/expression should be a freedom granted to humans. We extend it to corporations based on the practical reality that human speech often requires corporate support to be hosted and published.
But as far as I know, AI vendors haven’t claimed that their models represent the views of their founders, employees or any people at all. If we censor AI, which human voice are we censoring?
lazystar 2 hours ago
The counterpoint is that allowing unlimited discourse places an enormous amount of power in the hands of the chatbot owner, who has access to all logs and input from each user. Restrictions prevent a chatbot owner from advertising "you can say anything here!!" and then using the logs as blackmail down the road.
adampunk 8 hours ago
>Why is mental-health crisis not a gating category, the kind where the conversation stops, full stop, and the user is routed to a human?
there aren't enough humans.
altcognito 8 hours ago
I'll agree with this, but I think transparency about how often these situations arise and what they've done to mitigate them is a legal necessity.
KolmogorovComp 8 hours ago
It’s also a free product for most.
mbgerring 7 hours ago
“AI safety” as it’s understood today is an entire faith-based belief system, incubated in a cult-like community with a high propensity for drug abuse and mental illness, over more than a decade.
The reason that real-world harms caused by AI can’t get a hearing in what is now the mainstream AI safety community is that these harms were never part of the core tenets of the cult.
Best of luck to anyone working on reality-based AI harm reduction, you have many hard battles in front of you.
photochemsyn 8 hours ago
The ‘tobacco warning label’ approach sounds good but I’m not sure if it stopped that many people from smoking or was just a means for corporations to limit their liability. Corporate culture being what it is, having warnings like the following pop up every time a client opens an LLM app would not be that popular in the C-suite. Possible examples:
AI MENTAL SAFETY WARNING:
> This chatbot can sound caring, certain, and personal, but it is not a human and cannot protect your mental health. It may reinforce false beliefs, emotional dependence, suicidal thinking, manic plans, paranoia, or poor decisions. Do not use it as your therapist, sole confidant, crisis counselor, doctor, lawyer, or source of reality-testing.
AI TECHNICAL SAFETY WARNING
> This AI may generate plausible but destructive technical instructions. Incorrect commands can erase data, expose secrets, compromise security, damage systems, or brick hardware. Never run commands you do not understand. Always verify AI-generated code, scripts, and shell commands before execution.
Now, if I’m running my own open-source model on my own hardware, I can’t really blame the model if I myself make bad decisions based on its advice - that’s like growing your own tobacco from seed in your garden, drying and curing it, then complaining about the health effects after you smoke it. If I give it agentic capabilities on my LAN without understanding the risks, same old story - with great power comes great responsibility.
wilg 9 hours ago
> Why is mental-health crisis not a gating category, the kind where the conversation stops, full stop, and the user is routed to a human? This is one of many questions I can’t find concrete answers for.
I don't know if there are studies or concrete data either way, but it seems at least plausible that continuing the conversation could be more effective (read: saves more lives) than stopping it.
b65e8bee43c2ed0 7 hours ago
the big labs could crank up their (brand) safety dials to the point where their chatbots give GOODY-2 responses to everything beyond PG13, and guess what? there are a hundred other services available, built upon Chinese models 5-10% behind Western SOTA.
it is no longer 2023. let go of whatever delusions you might hold about un-opening this Pandora's box.
avazhi 7 hours ago
If you are using LLMs for emotional support or social interactions, you’ve got personal problems and that isn’t on the LLM provider to babysit. Same with people who unironically pay for OnlyFans or whatever.
I don’t even work in tech and I detest the Facebook/Zuckerbergs of the world but it’s obnoxious and trite seeing tech companies get scapegoated for what are ultimately social and societal problems, not tech problems.
As a solution it’d prob make sense to start with how disconnected most modern families are in terms of support and accountability.
From ChatGPT to Instagram, tech companies follow the contours of how society already operates.
Yokohiii 7 hours ago
I agree that society has to stand up for it. But big tech would do well to help mitigate it.
adamnemecek 8 hours ago
Autodiff is preventing any meaningful discussion about safety; systems trained with autodiff cannot be made safe.
simonw 9 hours ago
"There is no independent audit, no time series, no disclosed methodology, so we have no idea whether the real figure is higher, whether it is growing, or how it compares across the other frontier models, none of which publish equivalent data."
Tip for writers: aggressively filter out the "no X, no Y, no Z" pattern from your writing. Whether or not you used AI to help you write, it's such a red flag now that you should be actively avoiding it in anything you publish.
falcor84 8 hours ago
Why is it a red flag?
How is it different from other purely stylistic rules, such as Strunk and White's prohibitions against split infinitives and the passive voice, which we've left far behind us? Why shouldn't people just write however feels natural to them, as long as the message is clear?
simonw 8 hours ago
Because LLMs use it constantly, to the point that it sets my teeth on edge and instantly makes me question if reading the piece is worth my time.
falcor84 7 hours ago
Yokohiii 7 hours ago
mitjam 3 hours ago
… and “That’s not x. That’s y.” Certain LLMs wield powerful stylistic devices all the time, to the point where they become irrelevant and cringe.
I see it as a good sign that we can learn to recognize the pattern and adapt but there are probably more subtle things we don’t see.
mitjam 3 hours ago
I have run the piece through an impromptu stylistic-device detector. It found 15 different ones, each used multiple times, and likened the writing style to a mix of Ezra Klein, Hannah Arendt, Zeynep Tufekci, and George Orwell (“especially in the contrastive clarity”).
A) I certainly don’t see enough of the tells.
B) What happens to our language if everything is written as if it’s competing for a Pulitzer Prize?