ChatGPT won't let you type until Cloudflare reads your React state (buchodi.com)
926 points by alberto-m a day ago
MyNameIsNickT a day ago
Hey! I'm Nick, and I work on Integrity at OpenAI. These checks are part of how we protect our first-party products from abuse like bots, scraping, fraud, and other attempts to misuse the platform.
A big reason we invest in this is because we want to keep free and logged-out access available for more users. My team’s goal is to help make sure the limited GPU resources are going to real users.
We also keep a very close eye on the user impact. We monitor things like page load time, time to first token and payload size, with a focus on reducing the overhead of these protections. For the majority of people, the impact is negligible, and only a very small percentage may see a slight delay from extra checks. We also continuously evaluate precision so we can minimize false positives while still making abuse meaningfully harder.
vlovich123 16 hours ago
That still doesn’t explain why you can’t even start typing until that check completes. You could hold the outbound request from being sent until the check passes. But blocking typing seems like strictly worse UX, and the problem will never show up in any metric you can track, because you have no way of measuring “how quickly would the user have submitted their request without all this other stuff in the way”.
Said another way, if done in the background the user wouldn’t even notice unless they typed and submitted their query before the check completed. In the realistic scenario this would complete before they even submit their request.
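A minimal sketch of the alternative being described (all names here are hypothetical, not OpenAI's actual code): start the check in the background at page load, keep the textarea enabled, and only await the check at send time.

```javascript
// Hypothetical sketch: typing is never blocked; only the network send
// waits on the integrity check, which starts at page load.
function createComposer(integrityCheck, sendToServer) {
  const pending = integrityCheck(); // kicked off in the background

  return {
    async submit(text) {
      const token = await pending; // usually resolved long before submit
      return sendToServer({ text, token });
    },
  };
}
```

In the realistic scenario the check resolves while the user is still typing, so the `await` is effectively free.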
mike_hearn 10 hours ago
I developed the first version of Google's equivalent of this (albeit theirs actually computes a constantly rotating key from the environment, it doesn't just hard-code it in the program!).
The reason it has to block until it's loaded is that otherwise the signal being missing doesn't imply automation. The user might have just typed before it loaded. If you know a legit user will always deliver the data, you can use the absence of it to infer something about what's happening on the client. You can obviously track metrics like "key event occurred before bot detection script did" without using it as an automation signal, just for monitoring.
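Server-side, the inference described above can be sketched like this (field names are hypothetical): only if every legitimate client is guaranteed to deliver the signal can its absence be treated as evidence of automation.

```javascript
// If the page blocks input until the detection script loads, every human
// submission carries a token, so a missing token implies automation.
// If input were not blocked, "missing" is ambiguous: fast typist or bot?
function classifySubmission(req, inputWasBlockedUntilLoaded) {
  if (req.botToken) return "has-signal";
  return inputWasBlockedUntilLoaded ? "likely-automated" : "ambiguous";
}
```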
fc417fc802 8 hours ago
susupro1 7 hours ago
toinewx 5 hours ago
matchagaucho 2 hours ago
Keyboard response feels 10x slower in ChatGPT Projects (possibly for reasons other than react state).
p-e-w 14 hours ago
Many cloud products now continuously send themselves the input you type while you are typing it, to squeeze the maximum possible amount of data from your interactions.
I don’t know whether ChatGPT is one of those products, but if it is, that behavior might be a side effect of blocking the input pipeline until verification completes. It might be that they want to get every single one of your keystrokes, but only after checking that you’re not a bot.
davidkunz 13 hours ago
mort96 10 hours ago
andai 12 hours ago
m3kw9 2 hours ago
Because of the way they have the server architecture set up and how it loads the screen. You don’t even want all the bots hitting the servers
dncornholio 5 hours ago
You cannot know what verifications they use. I could argue the disabled textbox is itself part of the verification process: humans will click on it while bots won't.
root_axis 4 hours ago
QEDCTrL 5 hours ago
Sounds like anti-distillation to me. But, know what? Meh.
mcmcmc 4 hours ago
deadbabe 8 hours ago
Remember you’re talking to a vibe coder who just stares at code being printed out by AI.
mcmcmc 4 hours ago
Imnimo 21 hours ago
It's interesting to me that OpenAI considers scraping to be a form of abuse.
DrinkyBird 7 hours ago
It’s funny because the first AI scraper I remember blocking was OpenAI’s, as it got stuck in a loop somehow and was impacting the performance of a wiki I run. All to violate every clause of the CC BY-NC-SA license of the content it was scraping :)
raincole 15 hours ago
Quite sure even literal thieves would consider thievery a form of abuse.
mcmcmc 4 hours ago
duped 6 hours ago
littlestymaar 14 hours ago
jordanb 5 hours ago
They don't want anyone to take that which they have rightfully stolen.
altmanaltman an hour ago
splatter9859 3 hours ago
axegon_ 12 hours ago
The levels of irony that shouldn't be possible...
ProofHouse 20 hours ago
The irony is thick
wiseowise 10 hours ago
Church, politicians, moralists are all the biggest hypocrites that want to teach you something.
newsoftheday 5 hours ago
sabedevops 21 hours ago
Seriously. The hypocrisy is staggering!
zer00eyz 21 hours ago
" Integrity at OpenAI .. protect ... abuse like bots, scraping, fraud "
Did you mean to use the word hypocrisy? If not, I'm happy to have said it.
I just want to note that it is well documented how good the support is for actual malware...
RobotToaster 10 hours ago
"You're trying to kidnap what I've rightfully stolen!"
gib444 10 hours ago
And have absolutely no reservations about making such an obvious statement on a public forum
Aurornis 19 hours ago
I interpreted scraping to mean in the context of this:
> we want to keep free and logged-out access available for more users
I have no doubt that many people see the free ChatGPT access as a convenient target for browser automation to get their own free ChatGPT pseudo-API.
lelanthran 11 hours ago
wolvoleo 14 hours ago
rsrsrs86 5 hours ago
This
nikitaga 20 hours ago
Scraping static content from a website at near-zero marginal cost to its server, vs scraping an expensive LLM service provided for free, are different things.
The former relies on fairly controversial ideas about copyright and fair use to qualify as abuse, whereas the latter is direct financial damage – by your own direct competitors no less.
It's fun to poke at a seeming hypocrisy of the big bad, but the similarity in this case is quite superficial.
PunchyHamster 18 hours ago
not2b 19 hours ago
cicko 13 hours ago
lm411 17 hours ago
sandeepkd 17 hours ago
alsetmusic 17 hours ago
unsungNovelty 7 hours ago
AmbroseBierce 15 hours ago
VadimPR 14 hours ago
wolvoleo 14 hours ago
lelanthran 11 hours ago
ungreased0675 8 hours ago
ori_b 6 hours ago
xmcqdpt2 8 hours ago
grishka 11 hours ago
the_sleaze_ 16 hours ago
razingeden 18 hours ago
foobiekr 5 hours ago
not_your_vase 15 hours ago
SkiFire13 14 hours ago
mcfedr 3 hours ago
bakugo 20 hours ago
heyethan 17 hours ago
gmerc 13 hours ago
swagmoney1606 18 hours ago
nozzlegear 17 hours ago
make3 15 hours ago
nickphx 9 hours ago
AtlasBarfed 19 hours ago
karlshea 19 hours ago
platybubsy 13 hours ago
andrepd 5 hours ago
nslsm 20 hours ago
miki123211 11 hours ago
It's not scraping they're concerned about, it's abusing free GPU resources to (anonymously) generate (abusive) content.
heyethan 18 hours ago
I think the distinction is less about scraping itself, and more about marginal cost.
Scraping static pages is cheap for both sides. Scraping an LLM-backed service effectively externalizes compute costs onto the provider.
Same behavior, very different economics.
crote 15 hours ago
everdrive a day ago
It's getting to the point where a user needs at minimum two browsers. One to allow all this horrendous client checking so that crucial services work, and another browser to attempt to prevent tracking users across the web.
Nick, I understand the practical realities regarding why you'd need to try to tamp down on some bot traffic, but do you see a world where users are not forced to choose between privacy and functionality?
mememememememo 21 hours ago
Local models for privacy.
You want to go to the world's best hotel? You are gonna be on their CCTV. Staying at home is crappier but private.
Unfortunately, for the first time, Moore's law isn't helping. You used to be able to give a poor person an old laptop, install Linux, and they'd be fine; they can still do that and all is good, except no LLM.
karlgkk 20 hours ago
nozzlegear 17 hours ago
0x3f a day ago
Meet me in a cafe and I will sign a JWT saying you're not a bot. You can submit this to whoever will accept it.
magicseth a day ago
jagged-chisel a day ago
tshaddox 21 hours ago
kevin_thibedeau 21 hours ago
I've been doing that for years. Cloudflare is slowly breaking more and more of the web.
subscribed 11 hours ago
This is indeed what I do. And you also should. Separate browser for banking, trusted shipping sites etc, and the normal one.
Make sure not to browse the Internet without adblock and/or similar.
lukewarm707 10 hours ago
i am increasingly moving towards a model of 'no browser'.
search for me is now a proprietary index (like exa) that filters rubbish, with a zero data retention sla. so we don't need google profiling.
the content is distilled into markdown pulled from cloudflare's browser rendering api.
i let cloudflare absorb the torrent of trackers and robot checks, i just get md from the api with nothing else. cloudflare is poacher and gamekeeper.
an alternative is groq compound which can call browsers in parallel.
for interactive sites, or local ai browsing, i sometimes run a browser in a photon os docker with vnc, which gives you the same browser window but it runs code not on your pc.
that said, little of my use is now interacting with websites; it's all agentic search and websets so i don't have to spend mental energy on it myself
lukewarm707 9 hours ago
madrox 21 hours ago
I am not Nick, but there are a few ways that world happens: the free tier goes away and what people pay more closely reflects what they use; this all becomes cheap enough that it doesn't matter; or we come up with an end-to-end method of determining that usage is triggered by a person.
Another way is to just do better isolation as a user. That's probably your best shot without hoping these companies change policies.
atoav 20 hours ago
What if I run a website and OpenAI produces bot traffic? Do they also consider it abuse when they do it?
SV_BubbleTime a day ago
Firefox multicontainers are pretty cool. But it’s an advanced process that most people wouldn’t do or do correctly.
Sabinus 21 hours ago
halJordan 21 hours ago
Imustaskforhelp a day ago
gib444 8 hours ago
> It's getting to the point where a user needs at minimum two browsers. One to allow all this horrendous client checking so that crucial services work, and another browser to attempt to prevent tracking users across the web.
Every time I try this, I end up crossing wires (i.e. using the browser that 'works' for most things more than the one that is 'broken')
cruffle_duffle 17 hours ago
There is also the browser I use to get Claude to route around people blocking its webfetch. Both Playwright and chrome-mcp.
gck1 15 hours ago
gruez a day ago
>It's getting to the point where a user needs at minimum two browsers. One to allow all this horrendous client checking so that crucial services work, and another browser to attempt to prevent tracking users across the web.
What are you talking about? It works fine with firefox with RFP and VPN enabled, which is already more paranoid than the average configuration. There are definitely sites where this configuration would get blocked, but chatgpt isn't one of them, so you're barking up the wrong tree here.
scared_together 17 hours ago
lionkor 12 hours ago
Hi Nick, first of all, very cool of you to respond here instead of letting us all sit in the dark. I think that's what makes HN special.
That said, is it not a little bit weird that you want to protect yourself from scraping and bots, when your entire company, product, revenue, and employment depend on the fact that OpenAI can bot and scrape literally every part of the internet? So your moat is non-hydrated React code in the frontend?
Schiendelman 6 hours ago
Don't beat up an engineer for decisions made by company leadership. It's really inappropriate.
diebillionaires 6 hours ago
lionkor 4 hours ago
SilasX 2 hours ago
halflife a day ago
Don’t know if it’s related to the article, but the chat UI's performance becomes absolutely horrendous in long chats.
Typing in the chat box is slow, rendering lags, and sometimes it gets stuck altogether.
I have a research chat that I have to think twice before messaging because the performance is so bad.
Running on iPhone 16 safari, and MacBook Pro m3 chrome.
DenisM 21 hours ago
In the good old days Netflix had "Dynamic HTML" code that would take a DOM element which scrolled out of the viewport and move it to the position where it was about to be scrolled in from the other end. Hence the number of DOM elements stayed constant no matter how far you scrolled, and the only thing that grew was the Y coordinate.
They did it because a lot of devices running Netflix (TVs, DVD players, etc) were underpowered and Netflix was not keen on writing separate applications. They did, however, invest into a browser engine that would have HW acceleration not just for video playback but also for moving DOM elements. Basically, sprites.
The lost art of writing efficient code...
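The recycling trick described above boils down to a windowing calculation; a generic sketch (not Netflix's actual code, and assuming fixed-height rows):

```javascript
// Given scroll position, viewport height, and a fixed row height, compute
// which rows need real DOM nodes. The node count stays roughly constant no
// matter how far you scroll; only each node's Y coordinate changes.
function visibleWindow(scrollTop, viewportHeight, rowHeight, totalRows) {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight));
  const last = Math.min(
    totalRows - 1,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) - 1
  );
  return { first, last, count: last - first + 1 };
}
```

Libraries like react-window do essentially this; a chat UI rendering every message of a long conversation on each keystroke does the opposite.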
zdragnar 21 hours ago
groundzeros2015 20 hours ago
bschwindHN 19 hours ago
Almost certainly running some sort of O(n^2) algorithm on the chat text every key press. Or maybe just insane hierarchies of HTML.
Either way, pretty wild that you can have billions of dollars at your disposal, your interface is almost purely text, and still manage to be a fuckup at displaying it without performance problems.
stacktraceyo a day ago
Same. It’s wild how bad it can get with just like a normal longer running conversation
qingcharles 18 hours ago
OpenAI sites are the only ones that do this to me. I have to keep a separate browser profile just for my OpenAI login with absolutely nothing installed on it or it'll end up being dogshit slow and unusable.
moffkalast a day ago
Yeah just had this earlier today, I had to write my response in vscode and paste it in, there were literal seconds of lag for typing each character. Typical bloated React.
scq 21 hours ago
PunchyHamster 18 hours ago
That's how eating your own dogshit works, or whatever that saying was
xg15 an hour ago
> how we protect our first-party products from abuse like bots, scraping, fraud, and other attempts to misuse the platform.
Are you applying the same standards to your own scraper bots?
sebmellen a day ago
Great to hear from a first-party source. I'm a Pro subscriber and my team spends well over two thousand dollars per month on OpenAI subscriptions. However, even when I'm logged in with my Pro account, if I'm using a VPN provider like Mullvad, I often have trouble using the chat interface or I get timeout errors.
Is this to be expected? I would presume that if I'm authenticated and paying, VPN use wouldn't be a worry. It would be nice to be able to use the tool whether or not I'm on a VPN.
JumpCrisscross 20 hours ago
> even when I'm logged in with my Pro account, if I'm using a VPN provider like Mullvad, I often have trouble using the chat interface or I get timeout errors
Heard from a founder who recently switched his company to Claude due to OpenAI's lagginess: it's absolutely an OpenAI problem, not an AI problem in general.
ghm2199 5 hours ago
Would OpenAI also consider remuneration for every site they scraped that had a robots.txt file they chose to ignore anyway? Feel free to not answer this question.
I have kind of lost count of how many content creators have told me personally that traffic is meaningfully down because of all these chatbots. The latest example is this poor but standup guy: moneyfortherestofus.com.
timeinput 4 hours ago
I'm really glad Hacker News disallows AI generated comments. The response I got from asking that question really is quite enlightening. Short answer: "no", long answer: "no -- fuck off", longer answer: "no -- fuck off -- if you want I can dig into whether or not you should fuck off harder"
lm411 16 hours ago
"we protect our first-party products from abuse like bots, scraping, fraud, and other attempts to misuse the platform"
The scary part is that you don't even see the irony in writing this.
Or, are you just okay "misusing" everyone for your own benefit?
noosphr a day ago
>These checks are part of how we protect our first-party products from abuse like bots, scraping, fraud, and other attempts to misuse the platform.
Can you share these mitigations so we can mitigate against you?
0x3f a day ago
It's just Cloudflare. Bypassing it is a whole industry.
zenethian 21 hours ago
dawnerd a day ago
Flaresolverr is one way. Isn’t perfect but bypasses a lot.
seba_dos1 a day ago
Hi! It's all perfectly understandable - after all, we use things like Anubis to protect our services from OpenAI and similar actors and keep them available to the real users for exactly the same reasons.
jesuslop an hour ago
Hi Nick, the lag is quite bad in the field, honestly. In the desktop app, in this case/datapoint. There was that "Halt and Catch Fire" episode where they spoke about a millisecond threshold of delay that separated usable from unusable. Solvent hardware and fiber connection.
driverdan 20 hours ago
Brand new account with 2 comments in this thread. How can we be sure you're not a bot deployed to defend OpenAI?
Please run Cloudflare's privacy invasive tool and share all the values it generates here so we can determine if you're a real person.
conartist6 10 hours ago
Still feels very anti-consumer.
If every company behaved like you do, the internet would be a much worse place.
In fact, OpenAI has already made the Internet a much worse place, already much, much less open and much less optimistic about its own future than it was even five years ago...
wiseowise 10 hours ago
> A big reason we invest in this is because we want to keep free and logged-out access available for more users.
Thank you for the reply, Nick. It wouldn’t be a problem to disable the tracking for authenticated users then, would it?
lloydatkinson 10 hours ago
It would because someone's KPI depends on number of tracked users lol
matsemann 8 hours ago
mehov a day ago
> because we want to keep free and logged-out access
But don't you run these checks on logged-in users too?
MyNameIsNickT a day ago
Yep, on logged-in users too. The reason is basically the same: we want scarce compute going to real people, not attackers. Being logged in is one useful signal, but it doesn’t fully prevent automation, account abuse, or other malicious traffic, so we apply protections in both cases.
lelanthran 11 hours ago
angoragoats 21 hours ago
jorvi 21 hours ago
salawat 21 hours ago
lm411 14 hours ago
"Integrity at OpenAI"
Basically an oxymoron at this point.
c0_0p_ a day ago
Can't have those bots or scrapers running amok can we...
witx 12 hours ago
> These checks are part of how we protect our first-party products from abuse like bots, scraping,
Do you guys see the irony here?
hosteur 11 hours ago
They obviously get it. They just do not care.
pdntspa 21 hours ago
Y'all just salty that DeepSeek et al are training their LLMs on yours
numlock86 11 hours ago
> [...] we protect our first-party products from abuse like [...] scraping [...]
what an odd thing to say for someone whose product is built entirely on exactly that
egorfine 12 hours ago
Paying customer since inception here.
I presume the local ChatGPT.app has even more measures to prevent automation, right? Presumably privacy-invasive ones, as is customary these days?
Is there a way I can opt out? I really, really, really don't like it.
radicality 4 hours ago
The way I use the products is something like this: my main account on my MacBook for the ChatGPT website and codex cli; then a Mac VM running via UTM with a shared writable dir for anything more ‘shady’ in terms of permissions and for playing with new AI apps, e.g. the ChatGPT/Codex standalone apps, Atlas, the Claude desktop app, etc. Seems to work decently enough. And I do totally agree that there should be a way to opt out of all these privacy-invasive measures, especially after paying $200/mo.
the_gipsy a day ago
But is the title true, is typing specifically blocked? Or does it just block submitting the text?
I ask because I have seen huge variations in load time. Sometimes I had to wait seconds until being able to type. Nowadays it seems better though.
leros 6 hours ago
Fwiw, I stopped using ChatGPT and went to a competitor because the checks slow down ChatGPT so much that the webapp becomes unusable in anything but a new short chat. CPU usage goes to 100%, you can't type, the entire tab freezes, etc. It's a miserable experience to use and I'm on a relatively new MacBook not some old computer. If you read around it's a very common problem people have been having for a while now.
tipiirai 18 hours ago
I don't trust what OpenAI says. Sam Altman gives shivers, and these kinds of blog posts make things look even worse.
myHNAccount123 a day ago
Can you fix the resizing text box issue on Safari when a new line is inserted? When your question wraps to a newline Safari locks up for a few seconds and it's really annoying. You can test by pasting text too.
cheese_van 7 hours ago
<protect our first-party products from abuse like scraping>
Abuse from scraping has long been a serious problem for many, good job!
20k 8 hours ago
>abuse like bots, scraping
10/10, I've got no notes
xtajv 8 hours ago
Earnest question: if I was feeling lazy and security-conscious at the same time, would I be better off...
(A) opening chatgpt.com in qubes (but staying logged out, i.e. never creating a chatgpt account)
-or-
(B) creating a freemium chatgpt account
?
(Obviously, the "best" answer would be something like running a local LLM from an airgapped machine in a concrete bunker :) But that's not what I'm after).
toddmorey 6 hours ago
Why are all these checks still performed on an authenticated, paid user?
vkou a day ago
> Hey! I'm Nick, and I work on Integrity at OpenAI. These checks are part of how we protect our first-party products from abuse like bots, scraping, fraud, and other attempts to misuse the platform.
How can first-party products protect themselves from abuse by OpenAI's bots and scraping?
mystraline 21 hours ago
This is a completely in-scope question.
How do we defend against your scraping, OpenAI?
I dont want any of my content scraped or seen by you all. Frankly, fuck you all for thinking my content is owned by you.
CableNinja 15 hours ago
tedsanders 12 hours ago
wilg 20 hours ago
diebillionaires 6 hours ago
As a free tier user I only get like three queries in now without model quality reduction, so I'd say your bases are covered as far as GPU costs around misuse.
AndrewKemendo an hour ago
Kudos for trying
This whole thread was like watching a swarm of ants try and take a grasshopper down
aucisson_masque 9 hours ago
Why send the Turnstile bytecode encrypted? Surely people savvy enough to abuse the system will find out how to decrypt it (see OP), and it gives the impression that you are trying to hide stuff you're not proud of.
pocksuppet 7 hours ago
Because they want to make it as hard as possible to reverse engineer. If they wanted it to be easy, they'd use <input type="checkbox" name="ishuman">I am a human
htx80nerd 4 hours ago
Thanks. I've used ChatGPT a million times and never had any input issues.
huertouisj 21 hours ago
Sometimes I paste giant texts (think summarization) into the ChatGPT (paid) webapp, and I notice that the CPU fans spin up for about 5 seconds afterwards, as if the text is "processed" client-side somehow. This is before hitting "submit" to send the prompt to the model.
I assumed it was maybe some tokenization going on client side, but now I realize maybe it's some proof of work related to prompt length?
invalidusernam3 11 hours ago
But why block the ui until then? Surely you can just not make any requests until the checks are complete?
matheusmoreira 5 hours ago
> protect our first-party products from abuse like bots, scraping
You do see the irony here?
mghackerlady 7 hours ago
No, leave it. Surely the mighty OpenAI can deal with the scraping. At least, it seems to think everyone else can
sourcecodeplz 8 hours ago
I really appreciate the free options, without even needing a login. Wish they would also keep the small free weekly allowance for Codex.
sandeepkd 4 hours ago
You do not ever trust the client side. Sometimes being simple is good enough. The maximum you can do is put rate limits on the IP address and/or user account. You just do not want someone using the product at machine speeds.
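A token bucket is the usual server-side way to enforce "no machine speeds"; a minimal illustrative sketch:

```javascript
// Token-bucket limiter keyed by IP: each key accrues `rate` tokens per
// second up to `burst`; a request spends one token. Humans stay under the
// refill rate; machine-speed callers run the bucket dry and get refused.
function createLimiter(rate, burst) {
  const buckets = new Map();
  return function allow(ip, now = Date.now() / 1000) {
    const b = buckets.get(ip) ?? { tokens: burst, last: now };
    b.tokens = Math.min(burst, b.tokens + (now - b.last) * rate);
    b.last = now;
    buckets.set(ip, b);
    if (b.tokens >= 1) { b.tokens -= 1; return true; }
    return false;
  };
}
```

The limitation, of course, is that IP-based limits fall apart against residential proxy pools, which is presumably why providers reach for client-side signals too.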
dev1ycan a day ago
"abuse like bots, scraping, fraud, and other attempts to misuse the platform"
This has to be a joke, right?
pera a day ago
I really can't tell for sure (new user posting a ridiculously hypocritical corporate message on a Sunday) but if GP actually works for OpenAI the lack of self-awareness is seriously striking
singpolyma3 21 hours ago
prmoustache 11 hours ago
> we protect our first-party products from abuse like bots, scraping, fraud, and other attempts to misuse the platform.
Isn't that how you build your service from the very start? How ironic.
gck1 15 hours ago
I always wondered why you even have logged out access. I'm glad I can use ChatGPT in incognito when I want a "clean room" response, but surely that's not the primary use case.
Is user base that never logs in really that significant?
pocksuppet 7 hours ago
This episode proves they know who you are, even when you're logged out. If they didn't know, they wouldn't let you use the service.
piskov a day ago
Tangential question: are there ChatGPT app devs on X? There are a few from the Codex team but I couldn’t find anyone from the “ordinary” ChatGPT side.
Also, if you could pass this along: it takes 5 taps to change thinking effort on iOS, and the setting is completely hidden on macOS.
If I were to guess, it seems that you were trying to lower token usage :-). Why the effort setting is only nicely available on web and Windows is beyond me.
lifis 6 hours ago
Are you disabling them for paying subscribers?
nicbou 11 hours ago
For what it's worth, I switched to Gemini because of the long ChatGPT load time. Gemini loads as fast as Google Search.
rglullis 21 hours ago
I shouldn't be giving ideas to your boss, but I bet he would be interested in making ChatGPT available only to paying customers, or free for those who get their eyes scanned by The Orb. Give 30 days of raised limits and we're all set to live in the dystopia he wants.
freeopinion 16 hours ago
It's your business and your call. But my opinion is that I wish you would quit offering free services. I'm pretty concerned about the horrible effect your free services are having on education. Yes, AI can be an incredible tool to enhance education. But the reality is that it is decimating children's will to learn anything.
I don't want to blame AI for all the world's problems. And I don't want to throw the baby out with the bath water. But I think you should think really hard about the value of gates. Smart people can build better gates than cash. But right now, cash might be better than nothing. Clearly you have already thought about how to build gates, but I don't think you have spent enough time thinking about who should be gated and why. You should think about gates that have more purpose than just maximizing your profit.
"We want to hook as many people as possible without letting in our competitors" is a pretty crummy thought to use as a public justification.
(Edited for typos.)
kelnos 16 hours ago
> A big reason we invest in this is because we want to keep free and logged-out access available for more users.
Are these checks disabled for logged-in, paid users?
subscribed 17 hours ago
> "abuse like bots, scraping"
You what, mate? Would you please use that on yourselves first? Because it comes off as a GROSS hypocrisy. State of the art hypocrisy.
>> behavioral biometric layer
But this one, especially, takes the cake.
Quite disgusting.
gmerc 13 hours ago
The company that scrapes everything until it collapses really needs to protect itself from scraping. Lol.
JumpCrisscross 20 hours ago
> we want to keep free and logged-out access available for more users
How does this comport with OpenAI's new B2B-first strategy?
> We also keep a very close eye on the user impact
Are paid or logged-in users also penalised?
andrepd a day ago
> OpenAI: These checks are part of how we protect products from abuse like bots, scraping, and other attempts to misuse the platform.
This would be fucking HILARIOUS if it wasn't so tragic.
rchaud 21 hours ago
Manifest destiny for me, border enforcement for thee.
lmz 18 hours ago
Chance-Device a day ago
It can be both
SubiculumCode 13 hours ago
In long threads in chatgpt, it grinds to a halt in both Chrome and Firefox. Please fix
tekawade 12 hours ago
Hey Nick, I find it concerning that this account was created just to comment on this thread, and never even replies to any of the real concerns.
Here's hoping this is a real person who actually created the account out of concern and a desire to share.
SilasX 4 hours ago
It has not been negligible for me, and, however you're doing this, there is significant room for improvement.
There have been times when, across about ten minutes of usage, most of which is me typing on iOS Safari, it drained 15% of my battery. There is no functional justification for this beyond poor code quality. (It was on a long conversation FWIW.)
This is when I'm logged in, with a paid (Plus) account, connected to a very old email address with a real user profile. That can't be the result of super-clever bot defense measures, because it's merely an inconvenience on desktop. And if you genuinely believe that email has been compromised, why aren't you reaching out to the account owner, since the account isn't otherwise connected to fraud by your heuristics?
However brilliant the LLM agent is, I'm seeing a lot of unforced errors in how you implement a web interface to it. If it makes you feel any better, it doesn't really register compared to all the bloat I see on other sites.
blactuary 7 hours ago
> I work on Integrity at OpenAI
Irony is truly dead. Show you have integrity by quitting your job
MisterTea 8 hours ago
> These checks are part of how we protect our first-party products from abuse like bots, scraping, fraud, and other attempts to misuse the platform.
Isn't this the same behavior used by AI companies to gather training data? Pot, meet kettle.
wackget 5 hours ago
I understand it's not your area, but can you please politely tell your colleagues that the clickbait-type teaser questions from the latest model are absolutely infuriating and are quickly leading me to abandon the platform entirely?
If you'd like, I can write a two-sentence paragraph to send to your colleagues. It contains a special phrase which most colleagues will find difficult to ignore. Would you like me to do that?
ryanmcbride 7 hours ago
Protecting your site from bots and scraping is absolutely hilarious considering how you acquired (read: stole) the data you trained your bot on dude.
Just yank that ladder up behind you.
pocksuppet 7 hours ago
> Just yank that ladder up behind you.
You would be an irresponsible entrepreneur if you didn't. Don't forget your legal obligation to maximise shareholder value.
potsandpans 14 hours ago
Chatgpt banned me after I said disparaging things about Sam Altman in a chat.
When I appealed the ban, I was told that I couldn't be told exactly why I was banned, but if I wrote a written apology and "promised to never do it again" my ban could be appealed.
I asked for an update on the ban via email every month for over a year.
Maybe you could tell me a little bit about that process?
rsrsrs86 4 hours ago
Hi Nick, do you believe what you say? You scraped the shit out of everyone
marxisttemp 9 hours ago
History will not be kind to you and your ilk. Quit your job.
0dayman 21 hours ago
Hi Nick, your software is a horrendous encroachment on users' privacy and its quality is subpar to those of us who know what we're working with. We don't use your product here.
chronc6393 12 hours ago
> Hi Nick, your software is a horrendous encroachment on users' privacy and its quality is subpar to those of us who know what we're working with. We don't use your product here.
It’s ok, OpenAI is cooked.
Feel bad for anyone who joined OAI in the past 12 months. Their RSU ain’t going to be worth much later this year. IPO is too late.
owebmaster 10 hours ago
The reason you did it is clear; why you settled for such a poor implementation is why this thread exists
jgalt212 a day ago
> we protect our first-party products from abuse like bots, scraping, fraud, and other attempts to misuse the platform
Have you just described the dilemma facing all the content sites used to train LLMs?
crest 20 hours ago
Then make sure they only target the free tier!
quotemstr a day ago
We really need ZKPs of humanity
ctoth a day ago
No, we really don't. We don't need worldcoin, we don't need papers, please. We just don't.
"Prove your humanity/age/other properties" with this mechanism quickly goes places you do not want it to go.
gzread 18 hours ago
Sure. I'll provide an API to provide mine to your bot for $1 each time.
user3939382 a day ago
Have you given any thought to what we trade when big tech elects one corporation as the gatekeeper for vast swaths of the Internet?
Razengan 13 hours ago
> we want to keep free and logged-out access available for more users.
And THANK YOU for that!
Being able to use ChatGPT and Grok without signing in is a big part of why I like those services over Gemini etc.
Hell, dummy Claude won't even let me Sign-In-with-Apple on the Mac desktop, even though it let me Sign-UP-with-Apple on the iPhone! BUT they do support Sign-In-with-Google!!? What in the heavenly hell is this dumbassery
tomalbrc 19 hours ago
Fake Account
thegreatpeter 21 hours ago
You’re doing gods work sir, thank you!
boesboes 8 hours ago
lol, hypocrites.
nickphx 21 hours ago
the irony of your statement is hilarious, disappointing, and infuriating.
lxgr a day ago
It's absurd how unusable Cloudflare is making the web when using a browser or IP address they consider "suspicious". I've lately been drowning in captchas for the crime of using Firefox. All in the interest of "bot protection", of course.
lucasfin000 a day ago
The real frustrating part is that Cloudflare's "definition" of suspicious keeps changing and expanding. VPN users, privacy-first browsers, uncommon IP ranges, they all get flagged. The people most likely to get caught by these systems are exactly the ones who care most about their privacy, and not the bots that they are apparently targeting.
gruez a day ago
>The real frustrating part is that Cloudflare's "definition" of suspicious keeps changing and expanding.
That's... exactly expected? It's a cat and mouse game. People running botnets or AI scrapers aren't diligently setting the evil bit on their packets.
Aurornis 19 hours ago
> The people most likely to get caught by these systems are exactly the ones who care most about their privacy, and not the bots that they are apparently targeting.
In my brief experience with abuse mitigation, connections coming from VPNs or unusual IP ranges were very significantly more likely to be associated with abuse.
It depends on your users. VPNs aren’t common at all, even though you hear about them a lot on Hacker News. For types of social sites where people got banned for abuse (forums) the first step to getting back on the forum was always to sign up for a VPN and try to reconnect. It got so bad that almost every new account connecting via VPN would reveal itself as a spammer, a banned member trying to return, or someone trying to sock puppet alternate accounts for some reason.
The worst offenders are Tor IP addresses. Anyone connecting from Tor was basically guaranteed to have bad intentions.
I heard from someone who dealt with a lot of e-mail abuse that the death threats, extortion, and other serious abuse almost always came from Protonmail or one of the other privacy-first providers that I can’t remember right now. He half-jokingly said they could likely block Protonmail entirely without impacting any real users.
It’s tough for people who want these things for privacy, but the sad reality is that these same privacy protections are favored by people who are trying to abuse services.
whatisthiseven a day ago
Which VPNs are people using that actually care about the user's privacy? Most of them don't, sell their home IP to buyers, sell their DNS history to others, etc. Worse, some of them could require invasive MITM cert stuff most users will just click yes through.
I have yet to see a use case for VPNs for the casual internet audience, and a tech-savvy user is better off renting through some datacenter or something, which at that point is hardly a VPN and more home-IP obfuscation. All the same downsides, and at least you get real privacy.
ymolodtsov 11 hours ago
Yes, using an incognito window is more than enough to kick off their checks.
ehnto a day ago
I recently had the insane experience of filling out 15 consecutive captchas after I had checked out and entered my payment information into the payment processor widget. I just wanted to submit the order. I was logged in to their website, and the bank even needed a one-time code for payment. If the bank is pretty sure I am human, then surely your ecomm site can figure it out.
lxgr 21 hours ago
That's my favorite combination: Shitty bot detection meeting shitty payment security systems.
At least outside the US, there's 3DS as an (admittedly often high friction) high quality cardholder verification method, but in the US, that's of course considered much too consumer-hostile, so "select 87 overpasses" it is.
amatecha a day ago
A while back I was buying tickets for a gondola for a trip in Europe and the checkout process failed during payment because their site didn't load their analytics/tracking stuff with proper error-handling, so when my ad-blocker prevented the tracking stuff, their checkout process failed to handle my CC's 2-factor auth and the checkout would fail. Had to contact my CC company and work with the gondola company to tell them what they're doing wrong so they could fix their website code. Pretty sad to know whoever built their stuff actually shipped a checkout flow (for a VERY popular tourist destination) without testing with ad-blockers enabled.
girvo a day ago
Surprising really, because I'm a Firefox + Ublock Origin die hard and I never get Cloudflare captchas. Wonder what the difference is? I have CGNAT turned off, if that matters at all (probably not).
lxgr 21 hours ago
I could definitely imagine a public IPv4 with lots of good, logged-in Cloudflare traffic acting as a positive signal for their heuristics, possibly even overriding the Firefox penalty.
danielheath a day ago
Maybe check your network isn't sending web traffic you're not aware of?
I'm running firefox and seeing the normal amount.
jychang a day ago
Most people are on a CGNAT these days, drowning in captchas is the new normal. You’re at the mercy of one of your neighbors not hosting a botnet from their home computer.
cogman10 a day ago
Every so often, usually after a Firefox update, CF will get into an "I'm convinced you're a bot" mode with me. I can get out of it by solving 20 CAPTCHAs.
g-b-r a day ago
Maybe you allow tracking and cookies?
mghackerlady 6 hours ago
Heaven forbid you not use JavaScript, then they can't <s>track you</s> keep the internet safe!
geysersam 14 hours ago
I use firefox daily and I don't encounter the problems you describe, might be worth looking if there's some other issue.
binaryturtle 21 hours ago
I'm on a slightly older Firefox and can't use many websites at all anymore because of the Cloudflare cancer.
Of course then you've got sites like gnu.org too that block you because of your slightly outdated user agent.
mghackerlady 6 hours ago
I... Don't think it does that? It shouldn't, anyway. How long has that been a thing? They've been hit pretty hard by the slop crew lately but I couldn't imagine it being so bad they require an up to date UA
onion2k a day ago
Is that because botnets spoof being Firefox? It's not really fair to blame Cloudflare if it is. That's on the bots.
doctaj a day ago
In what way would that not be fair? Their product giving false positives (unnecessary challenges for a normal browser humans commonly use) to real people is definitely their fault.
lxgr 21 hours ago
No, using a stupid authentication/verification method with lots of false positives is always on whoever deploys it.
Imagine an apartment building with a flimsy front door lock that breaks all the time, and the landlord only telling you that that can't be helped because of all the burglars.
josephcsible 21 hours ago
If it's just as easy to spoof being Chrome as it is to spoof being Firefox, then it is indeed fair to blame Cloudflare if they give Firefox users more CAPTCHAs than Chrome users.
conradkay a day ago
Not really, there's camoufox but the vast majority use modified chrome/chromium
lm411 14 hours ago
That's not Cloudflare trying to make your life hard.
It's the reality of how bad the bots have become.
dawnerd a day ago
I’ve been getting it in safari too. It’s ridiculous, frankly. My residential IP must have been flagged or something. The part that’s really annoying is that it’s trivial for bots to bypass.
lxgr 21 hours ago
> I’ve been getting it in safari too.
I'm getting it on iCloud Private Relay all the time. It honestly makes it kind of useless.
Maybe that's the point? But then again, doesn't Cloudflare run part of it!? And wasn't there some "privacy-preserving captcha replacement" that iOS devices should already be opting me in to? So many questions, nobody there to answer them, because they can get away with it.
> The part that’s really annoying is that it’s trivial for bots to bypass.
Not the ethical bots, though! My GPT-backed Openclaw staunchly refuses to go anywhere near a "I'm not a robot" button.
segmondy 19 hours ago
try using firefox and then a cellphone network for internet. sometimes i can't access a site, because i get infinite captchas. i know what a damn bus, stairwell, stop light or motorcycle looks like.
lazycouchpotato 12 hours ago
At times I'm completely locked out of a website and Cloudflare asks me to email the website owner to get the issue resolved.
.. how do they expect me to find the website owner's email if I can't access said website?
wongarsu 10 hours ago
Once upon a time we had whois lookup for exactly that use case (finding a domain's owner without visiting the site). Of course, now nearly everyone has meaningless entries from some domain privacy service
tshaddox 21 hours ago
Is anyone talking about the fact that this is a fundamental design flaw of the web? Or arguably even the entire Internet?
3form 21 hours ago
It's hard to call something a "fundamental flaw of web" if it wasn't an issue for 30 years. Unless you mean something more general that I'm missing.
lukewarm707 10 hours ago
sometimes when there is mafia you get no option but pay pizzo
hence i am just using cloudflare remote browser rendering.
amatecha a day ago
These days I just close sites that show that "checking if you're a bot" shit. If this is how the web is going to be now, I don't care, I'll just not use it. I didn't need to see that article or post that badly anyways. I'm tired of paying the price for the sociopathic, greedy actions of others. It's especially bad for anyone who uses an open source OS like Linux or *BSD (to the extent many sites just block me automatically with a 403 Forbidden simply for using OpenBSD + Firefox, completely free pass if I try the same site from a Windows or Linux computer).
jgalt212 a day ago
We use Cloudflare to protect our content, but at the same time our machines mostly run Linux / Firefox so it really is quite a frustrating relationship. It really bums me out how much of Turnstile boils down to these two questions:
is it Linux (or similar)?
is it Firefox?
If yes, to one or both, you're blocked! Clearly millions of dollars of engineering talent and petabytes of data collection should be able to come up with something more nuanced than this.
dheera a day ago
Exactly. For the most part all this bot protection is only protecting these websites against humans.
I don't do free work. I'm not going to label 50 images of crosswalks and motorcycles for free.
ronbenton a day ago
> For the most part all this bot protection is only protecting these websites against humans.
Curious how do you know this?
EGreg a day ago
Well, that's for the public internet.
I'm building Safebox and Safecloud, where this won't be the case anymore. Not only will you have a decentralized hosting network that can sideload resources (e.g. via a browser extension that looks at your "integrity" attribute on websites), but the websites will also require you to be logged in with an HMAC-signed session ID (which means they don't need to do any I/O to reject your requests, and can do so quickly)... so the whole thing comes down to having a logged-in account.
https://github.com/Safebots/Safecloud
As far as server-to-server requests, they'll be coming from a growing network of cryptographically attested TPMs (Nitro in AWS, also available in GCP, IBM, Azure, Oracle etc.) so they'll just reject based on attestations also.
In short... the cryptographically attested web of trust will mean you won't need cloudflare. What you will need, however, to prevent sybil attacks, is age verification of accounts (e.g. Telegram ID is a proxy for that if you use Telegram for authentication).
password4321 a day ago
Wow, if Seinfeld can have a soup nazi, I think it's within reason for you to be called the internet nazi.
"No s̶o̶u̶p̶ internet for you!"
Good luck!
ale42 a day ago
This was sarcasm, right?
i18nagentai an hour ago
The irony of a company that sells DDoS protection making the browsing experience worse for legitimate users. The real issue is that Cloudflare's bot detection runs JavaScript that introspects the page state — which means any site using Cloudflare is implicitly giving Cloudflare access to read the DOM of the protected application. That's a much bigger concern than the typing delay.
simonw a day ago
Presumably this is all because OpenAI offers free ChatGPT to logged-out users and doesn't want that being abused as a free API endpoint.
NotPractical a day ago
But do they do it whether you're logged in or not?
I noticed the ChatGPT app also checks Play Integrity on Android (because GrapheneOS snitches on apps when they do this), probably for the same reason. Claude's app doesn't, by the way, but it also requires a login.
Gander5739 21 hours ago
Because accounts are free, and with a little trickiness could still be abused as a free endpoint.
appreciatorBus a day ago
Yup.
Coincidentally about an hour ago, I wanted to look something up in ChatGPT and I happened to be in a browser window I don’t normally use, with no logged in accounts. I assumed it wouldn’t work, but to my surprise with no account, no cookies of any kind it took my query and gave me an answer.
gruez a day ago
>I assumed it wouldn’t work, but to my surprise with no account, no cookies of any kind it took my query and gave me an answer.
They allowed anonymous requests for months now, maybe even a year.
aziaziazi a day ago
I used to mostly use ChatGPT in an incognito tab, logged out. Until I noticed it seemed to have some context from my logged-in session, and from the logged-out one as well. It may be paranoia or prompt deduction, but that felt strange.
FergusArgyll a day ago
Yeah it works but it's a dumber model. Prob mini
bredren 20 hours ago
It is also intended to protect the usage patterns of pro subscribers.
As has been amply explained, equivalent usage priced per API token costs far more than maxing out a subscription plan.
It isn't really a massive hurdle to deal with this full SPA load check. If one is even aware it exists, they already have the skills to bypass it anyway.
I get why people would "what about" the automation inherent in what OpenAI is doing, but that is a separate matter.
Other businesses and applications can put into place their own hurdles and anti bot practices to protect the models they’ve leaned into—-and they have been.
darepublic 21 hours ago
Using 5.2 at 20 a month would also be a steal. Other shoe will drop on codex sooner or later
thisisnow 21 hours ago
It's probably the same for copilot.microsoft.com and their cloudfart usage
petcat a day ago
> These properties only exist if the ChatGPT React application has fully rendered and hydrated. A headless browser that loads the HTML but doesn't execute the JavaScript bundle won't have them. A bot framework that stubs out browser APIs but doesn't actually run React won't have them.
> This is bot detection at the application layer, not the browser layer.
I kind of just assumed that all sophisticated bot-detectors and adblock-detectors do this? Is there something revealing about the finding that ChatGPT/CloudFlare's bot detector triggers on "javascript didn't execute"?
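The "React actually ran" signal the quoted passage describes is checkable in a few lines. React 17+ attaches expando properties like `__reactFiber$<random>` to DOM nodes it has rendered, so a page that was fetched but never hydrated won't have them. The real Turnstile check is obfuscated; this is only a guess at the general shape, not the actual code:

```javascript
// Application-layer liveness check (illustrative): a node that React
// has rendered carries "__reactFiber$..." / "__reactProps$..." keys;
// raw fetched HTML, or a stubbed DOM, does not.
function looksHydrated(node) {
  return Object.getOwnPropertyNames(node).some(
    (k) => k.startsWith("__reactFiber$") || k.startsWith("__reactProps$")
  );
}
```

A headless fetcher that never executes the bundle would fail this kind of check even if it passes browser-level fingerprinting.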
iancarroll 15 hours ago
It’s pretty interesting to me that Cloudflare is collecting additional client-side data for individual customers. This is not widely done by most anti-bot solutions.
supriyo-biswas 11 hours ago
OpenAI is on an enterprise plan and (presumably) gets a customized version of Turnstile.
red_admiral 10 hours ago
"Sophisticated" may vary, but for a lot of EU media products you can just block the script that launches the paywall/consent overlay. Sometimes disabling JS does it; sometimes activating reading mode works.
Chance-Device a day ago
Perhaps the author should have made it clearer why we should care about any of this. OpenAI want you to use their real react app. That’s… ok? I skimmed the article looking for the punchline and there doesn’t seem to be one.
raincole 11 hours ago
Why does every article need a 'punchline'? It's a technical analysis. Do you expect punchlines when you read recipes or source code?
Chance-Device 10 hours ago
Where did I say “every article”? This is AI slop that’s set up like it’s some investigative expose of something scandalous and then shows us nothing interesting. A competent human writer would have reframed the whole thing or just not published it.
raincole 10 hours ago
dmos62 11 hours ago
For me the interesting parts of the article are how the author got to the decompiled checks and what the checks are. Anti-bot is an interesting space.
elwebmaster a day ago
That's because the article is AI slop.
londons_explore a day ago
I just don't understand why bot owners can't just run a complete windows 11 VM running Google Chrome complete with graphics acceleration.
You can probably run 50 of those simultaneously if you use memory page deduplication, and with a decent CPU+GPU you ought to be able to render 50 pages a second. That's 1 cent per thousand page loads on AWS. Damn cheap.
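The back-of-envelope math works out roughly as claimed under plausible assumptions; the $1.80/hour instance price below is illustrative, not a quoted AWS rate:

```javascript
// Cost per thousand page loads for the setup described above:
// ~50 concurrent VMs rendering about 50 pages/second in aggregate.
const pagesPerSecond = 50;
const instanceUsdPerHour = 1.80; // assumed hourly rate for the host

const usdPerSecond = instanceUsdPerHour / 3600;
const secondsPerThousandPages = 1000 / pagesPerSecond; // 20 seconds
const usdPerThousandPages = usdPerSecond * secondsPerThousandPages;

console.log(usdPerThousandPages.toFixed(3)); // prints "0.010", i.e. about 1 cent
```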
jaccola 21 hours ago
There are myriad providers competing to offer this, nicely packaged with all the accoutrements (IP rotation, location spoofing, language settings, prebuilt parsers, etc.) behind an easy to use API.
Honestly it is a very healthy competitive market with reasonably low switching costs which drives prices down. These circumstances make rolling your own a tough sell.
arcfour 18 hours ago
They do, but the fact that they have to do this means there are fewer bots because it's less economical to go to such lengths, compared to something much less complex (which is orders of magnitude cheaper).
huertouisj 21 hours ago
there are scraping subreddits.
if you browse them you will see that bot writers are very annoyed if they can't scrape a site with a headless browser.
you can do what you suggested, but with Linux VMs/containers. windows is too heavy, each VM will cost you 4 GB of RAM
londons_explore 13 hours ago
The reason to use windows is that anti bot tech is going to be a lot stricter if Linux is detected...
xmcp123 19 hours ago
I’m in those. xvfb and headless=false still works great
himata4113 16 hours ago
284 on 296gb of ram with deduplication enabled on a 128c with 32Q vgpu.
YetAnotherNick 4 hours ago
I am reasonably sure that these kind of fingerprints can detect if the browser is inside a VM.
kristjansson 4 hours ago
… yup?
I mean you missed the minigame of preventing Chrome from signaling that it’s being programmatically (webdriver etc) driven and tipping your hand, but … yup?
poly2it a day ago
If you know of a simple way to run a Windows 11 VM with good graphics acceleration (no GPU passthrough), please contact me.
MarioMan a day ago
I assume your concern with GPU passthrough is that each VM needs a whole GPU? You can use GPU-PV to split your GPU between VM instances. Then the main bottleneck becomes how thin you split out your VRAM.
More info here:
https://web.archive.org/web/20231107182321/https://mu0.cc/20...
hrmtst93837 13 hours ago
In theory you could run hundreds of full-fat Chrome bots if you don't care about the ops mess, but keeping Windows images stable while Cloudflare and friends keep changing the fingerprinting game turns the cheap math into a maintenance job from hell. AWS VM signals are a big red flag, so you still eat CAPTCHAs and blocks even with a full browser stack. The per-page-load number only looks cheap on paper.
technion 21 hours ago
To prompt a discussion that's purely technical: I'm interested in how this was done.
Specifically, Turnstile as far as I'm aware doesn't do anything specifically configurable or site specific. It works on sites that don't run React, and the cookie OpenAI-Sentinel-Turnstile-Token is not a CF cookie.
Did OpenAI somehow do something on their own API that uses data from Turnstile?
XYen0n 15 hours ago
Cloudflare should be able to determine whether a website uses React by analyzing data flowing through its CDN.
technion 14 hours ago
Whilst true, "validate the right state is loaded" would surely be something not done without developer input.
ripbozo a day ago
and chatgpt was then used to write this article. at least try to clean it up a bit
hx8 a day ago
Ah yes, the timeless hallmark of web blogs: a draft so messy even a language model would ask for a second pass.
tommodev 14 hours ago
Ah, this explains chatgpt (and probably copilot) performance behind corporate firewalls such as zscaler.
Between the network latency and low-end machines, there is an enormous lag between ChatGPT's response and being able to reply, especially when editing a canvas.
I've been sitting there for up to a minute plus waiting to be able to use the canvas controls or highlight text after an update.
croemer 7 hours ago
When using ChatGPT Android app with some NextDNS block lists, I get an error modal in app saying "security misconfiguration blah blah".
Clearly I'm blocking some tracker and it's upset about that. I allowlisted a sentry subdomain and since then got no more complaints.
TimLeland 5 hours ago
It seems they fixed the biggest issue I've had, where you start typing and then it erases the content once the page fully loads
bredren 20 hours ago
On a related note, ChatGPT.com changed how it handles large text pastes this past week.
It now behaves like Claude, attaching the paste as a file for upload rather than inlining it.
This changed the page UX somewhat and reduces the browser tab's memory cost a bit.
At some point, maybe still true, very long conversations ~froze/crashed ChatGPT pages.
NSPG911 21 hours ago
I was using KeepChatGPT[1] for a while back in 2023-2024, pre-Gemini-in-Google era, and I was fascinated as to how it was able to mask being a user without needing any API or help from the end user. I stopped using it after 2024 because 1) Gemini and 2) It breaks quite a lot. I did however, like how you had an option to push the AI panel to the right, if only Google even considers doing so.
qingcharles 17 hours ago
I have a little helper app I run sometimes, with a button that pushes a query into ChatGPT and gets a JSON response. You wouldn't even know OpenAI had any anti-bot tools because it doesn't get flagged at all. It just uses a webview inside WinForms.
edg5000 3 hours ago
The chat client has serious performance issues on lower end systems. Now I see why!
natdempk a day ago
Does anyone know how this is integrated on the Cloudflare side and across the app? Is this beyond standard turnstile? Is this custom/enterprise functionality? Something else?
dsparkman 8 hours ago
That explains why ChatGPT has been running like shit all weekend. In the desktop app on Mac, it could not even complete a response. On the web, it would hang before you could input anything.
tosh 17 hours ago
It used to be possible to type immediately while the page is loading and have all key presses end up in the input field.
Why run this check before the user can type?
Why not run it later like before the message gets sent to the server?
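The ordering being proposed here (keystrokes accepted immediately, only the outbound send gated on the pending check) can be sketched like this; the class and method names are illustrative, not ChatGPT's actual code:

```javascript
// Typing is never blocked; only the send path awaits the bot check.
class Composer {
  constructor(botCheckPromise) {
    this.buffer = "";
    this.botCheck = botCheckPromise; // resolves with a token when the check passes
  }
  type(text) {
    this.buffer += text; // always accepted immediately
  }
  async send(transport) {
    const token = await this.botCheck; // the only gate is here
    return transport(this.buffer, token);
  }
}
```

The trade-off, as discussed elsewhere in the thread, is that a non-blocking check can no longer treat "typed before the check loaded" as an automation signal.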
tripdout a day ago
AI-written article?
avazhi a day ago
Yep. I flag these as spam at this point.
pautasso 10 hours ago
AI goes to great lengths to ensure it's talking with humans.
Why would two AI bots want to chat with each other?
balkanist an hour ago
Hah, 100% true!
jtbayly 20 hours ago
Others here are asking if this is the cause of slow performance in a long chat.
But it seems clear to me that this is why I can't start typing right away when I first load the page and click to focus in the text field.
darepublic a day ago
I imagine to stop web automation from getting free API like use of the model
self-portrait 13 hours ago
A/B testing /dev/ kit that tokenizes four permutations of language
CorneredCoroner a day ago
> A headless browser that loads the HTML but doesn't execute the JavaScript bundle won't have them.
this is meaningless btw. A browser, headless or not, does execute JavaScript.
jaccola 21 hours ago
I disagree, a browser can have javascript execution disabled (and this is somewhat common in scraping to save time/resources).
I read it to mean: "A browser that doesn't execute the JavaScript bundle won't have [the rendered React elements]." Which is true.
maxwellg 18 hours ago
Wouldn't a browser that doesn't execute JS also not execute the browser fingerprinting code in the first place?
XYen0n 15 hours ago
If JavaScript is disabled, why use a headless browser instead of making HTTP requests directly?
girvo a day ago
A bunch of the points in this AI generated blog post were like that. Makes me feel dirty when I'm 1/3rd of the way through and I realise how off it is.
thisisnow 21 hours ago
Hah, sure, you just let random JS execute from random sites on your machine...
lightedman 7 hours ago
Preventing me from typing until you SCAN MY SYSTEM?
Fine, by extension, you agree I can scan all of your systems for whatever I desire. This works both ways.
j45 4 hours ago
This is a lot of fingerprinting.
AndreyK1984 10 hours ago
Camoufox will fix it easy peasy.
tristor 5 hours ago
This explains some of the weird performance behavior I've seen in the last 24 hours with ChatGPT, sometimes lagging my entire browser while typing. Note, I'm a paying user with a Teams account, so it's kind of annoying that this is being applied to logged in paying users as well. I might have to vibe-code my own chat webUI using the APIs.
aucisson_masque 9 hours ago
Mistral chat is also free to use without account and doesn't do that.
refulgentis a day ago
If you have AI write a blog post for ya, when you think it's set, check word count (can c+p to google docs if AI can't pull it off with built in tools), and ask it to identify repetitions if it's over 1000.
Also, you can have it spotcheck colors: light orange on a light background is unreadable. Ask it to find the L*[1] of colors and darken/lighten as necessary if the gap < 40 (that's the minimum gap for yuge header text on a background, 50 for body text on a background; these have a gap of 25)
I haven't tried this yet, but maybe have it count word count-per-header too. It's got 11 headers for 1000 words currently, which makes reading feel really staccato, and you gotta evaluate "is this a real transition or a vibetransition"
[1] L* as in L*a*b*, not L in Oklab
arcfour 18 hours ago
> They exist only if the request passed through Cloudflare's network. A bot making direct requests to the origin server or running behind a non-Cloudflare proxy will produce missing or inconsistent values.
...I don't think that's possible even if you are a bot? I would be very surprised if OAI had their origin exposed to the internet. What is a "non-Cloudflare proxy"? Is this AI slop?
It's likely just looking at the CF properties as part of a bot scoring metric (e.g. many users from this ASN or that geoip to this specific city exhibit abusive patterns).
apsurd 21 hours ago
Haven't read yet but instantly matched with my experience of the chat being unusable at times. The latency and glitch-like feel is unbearable.
seker18 9 hours ago
How can I access a cell phone?
aslihana a day ago
I mean, I can easily see them behaving defensively to avoid being abused. But on an MBP with an M5 here, my ChatGPT tab always gets stuck when I submit a prompt.
Really, really bad user experience; wondering when they will abandon this approach.
gobdovan a day ago
Imagine if they'd put as much effort into making a decent frontend experience.
heliumtera a day ago
I am shocked OpenAI collects data about its users before users have the opportunity to send the same data to OpenAI servers!
EGreg a day ago
Why does ChatGPT slow down so much when the conversations get long, while Claude does compaction?
My best guess is that ChatGPT is running something in your browser to try to determine the best things to send down to the model API, when it should have been running quantized models on its own server.
themafia a day ago
My theory is that "AI" doesn't really have any long term paying customers and the majority of the "users" are people who have cooked up some clever hack to effectively siphon computing power from these providers in an effort to crank out the lowest effort ad supported slop imaginable.
Every provider seems to have been plagued by these freeloaders to such an extent that they've had to develop extreme and onerous countermeasures just to avoid losing their shirts.
What's the word? Schadenfreude?
Josephjackjrob1 9 hours ago
cloud flare will not be around for long, its a shame as it is the GOAT lol
yapyap 20 hours ago
wow, OpenAI sure doesn't like bots for a company enabling the botification of the world wide web
baggachipz 5 hours ago
"We wouldn't want somebody scraping our data, that's ours!"
avazhi a day ago
Another AI-slop article.
Sick.
pencilcode a day ago
ai slop analysis finding CF detects non javascript capable browsers with no punchline
blinkbat a day ago
Ok... so... ?
beering a day ago
So are you able to get free inference now that you decrypted this?
superkuh a day ago
It doesn't look like it in the full sense of "free". But part of how one pays these services is by running a permissive modern browser which allows the corporation to spy on you even when you already paid in currency. In a sense, by depriving them of the ability to easily spy on you, this workaround is closer to "free".
gruez a day ago
>My best guess is -- ChatGPT is running something in your browser to try to determine the best things to send down to the model API
There's no way this is worth it unless the models are absolutely tiny, in which case any benefits from offloading to the client is marginal and probably isn't worth the engineering effort.
beering a day ago
They already see everything I’m doing because I send my prompts to them. What “workaround” are you referring to?
voxic11 a day ago
But isn't ChatGPT access free through the browser? What do you mean already paid in currency?
dgb23 8 hours ago
Why are companies like OpenAI and others that are all-in on LLMs still using ReactJS, Python and so on?
These programming languages and frameworks were made for developer convenience and got wide adoption, because it makes on-boarding easier.
This obviously comes at a cost of performance, complexity and introduces a liability into a system, because they are dependencies that come with a whole bunch of assumptions about how they are used.
Is this tradeoff even worth it anymore?
robmccoll 8 hours ago
Probably training data. The largest number of public repos are built on that stack. We recently picked React for new projects because LLMs seemed to be the most reliable when writing React code.