You can't trust the internet anymore (nicole.express)
124 points by panic 2 hours ago
WD-42 2 hours ago
This is why my friends and I are setting up a mesh network in our town.
The open internet has been going downhill for a while, but LLMs are absolutely accelerating its demise. I was in denial for the last few years but at this point I've accepted that the internet I grew up on as a kid in the late 90s to mid 2000s is dead. I am grateful for having experienced it but the time has come to move on.
The future for people that valued what the early internet provided is local, trusted networks in my opinion. It's sad that we need to retreat into exclusionary circles but there are too many people interested in making a buck on the race to the bottom.
cortesoft 2 hours ago
This seems like solving the problem at the wrong layer? The issue isn’t the actual network connection between people, it is the content. You could easily create your own forum or something and only include people you trust. You don’t need an entirely separate internet.
noosphr an hour ago
>The issue isn’t the actual network connection between people, it is the content.
Everyone serving a website is being DDoSed by AI agents right now.
A local mesh network is one way to make sure that no one with a terabit network can index you.
pphysch 5 minutes ago
EvanAnderson an hour ago
Even if it were a "network connection" issue, creating an overlay network on top of the Internet (with VPN tunnels and mesh routing, for example) would yield wildly better bandwidth and latency characteristics than a from-scratch physical mesh.
You can still make that overlay network geofenced and vetted. Heck, running it over a local ISP's last mile would probably yield wonderful latency.
We need vetted webrings on the existing Internet, not a new Internet.
sky2224 an hour ago
There's only so much you can do to detect and block content that's AI generated. At the end of the day, the content starts with the people creating it.
Jumping to an invite only network isn't the most ridiculous idea imo.
drysart an hour ago
kolinko an hour ago
willturman an hour ago
Perhaps, but it also, by default, excludes that entire class of authentication problems that are only manifested in a non-local network.
I love the idea.
It's also interesting in that a local mesh doesn't necessarily need to operate using the TCP/IP/HTTP stack that has been compromised at every layer by advertising and privacy intrusions.
kolinko an hour ago
PaulDavisThe1st 2 hours ago
I "got online" in 1985. I don't recall a single point in time that a geographically local internet was ever useful or of interest to me.
xoxxala an hour ago
I got a 300 baud modem right around the same time. There were a few local BBSs that ran meetups, scavenger hunts, warez parties and the like. I got to know a bunch of the regulars from the area. Pretty cool time.
allenu an hour ago
I think before Friendster, Myspace, then Facebook, there was a period where there were discussion forums for local communities. I think it was useful for meeting people. I remember friends in the late '90s used them frequently for chatting and some made new friends in real life that way. It was a short period, though, as more established companies came along that had a wider reach.
holoduke an hour ago
Bbs. Downloaded first shareware version of doom. Was it 4mb or something? I remember I had like 5kb/s and paid 5 cents a minute. My parents weren't happy those days. Now they are :)
iLoveOncall 2 hours ago
What about when you want to find hot singles in your area?
Jokes aside, probably 10-20% of my browsing is related to local things, up to the country scale. From finding local restaurants or businesses, to finding about relevant laws or regulations, news, etc. That's not negligible.
PaulDavisThe1st an hour ago
ethbr1 an hour ago
If you'd like, flip an email my way. We've been thinking similarly.
Email in profile (deref a few times)
shevy-java an hour ago
It is good to see there are some internet rebels left.
Perhaps AI-Skynet will not win - but they have a lot of money. I think we need to defund those big corporations that push AI onto everyone and worsen our lives.
xantronix 2 hours ago
I've been looking into building some sort of Wireguard mesh service since many of my friends are distributed all across the world. I wish you the very best in your endeavours!
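For anyone wondering what the WireGuard approach looks like concretely: a full mesh is just every peer listing every other peer. A sketch of one node's config (all addresses, hostnames, and key placeholders here are invented for illustration):

```ini
# /etc/wireguard/wg0.conf on node A (hypothetical addresses and keys)
[Interface]
Address = 10.44.0.1/24           # node A's address inside the mesh
PrivateKey = <node-A-private-key>
ListenPort = 51820

# One [Peer] section per other mesh member.
[Peer]                           # node B
PublicKey = <node-B-public-key>
Endpoint = b.example.net:51820   # omit Endpoint for NATed, roaming peers
AllowedIPs = 10.44.0.2/32
PersistentKeepalive = 25         # keeps NAT mappings alive

[Peer]                           # node C
PublicKey = <node-C-public-key>
Endpoint = c.example.net:51820
AllowedIPs = 10.44.0.3/32
```

Bring it up with `wg-quick up wg0`. Past a handful of peers, hand-maintaining N×N peer sections gets painful, which is what tools like Tailscale or Netmaker automate on top of WireGuard.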
purpleKiwi an hour ago
What would that look like in practice? I've just heard about the term and I like the description of it, especially the possibilities it opens up.
xantronix an hour ago
anigbrowl an hour ago
This will be about as impactful as printing out the best web articles you encounter and building a shed to shelve them in binders.
klysm an hour ago
What does a mesh network have to do with this?
bonesss an hour ago
HAM & pirate radio vs corporate broadcasting.
marginalia_nu 2 hours ago
Tangentially related, I have a hunch, but cannot prove, that prediction markets are the driving force behind a lot of the bad information online, since they essentially monetarily incentivize making people misjudge the state of the world.
There's been a huge uptick in this sort of brigade-like behavior around current events. I first noted it around LK-99, that failed room-temperature superconductor in 2023, but it just keeps happening.
Used to be we only saw it around elections and crypto pump and dumps, now it's cropping up in the weirdest places.
3eb7988a1663 an hour ago
That seems really high effort. I assume most events are things which are hard to influence, so at best you are hoping to tilt the wager odds into your favor. Which could backfire if you are betting on the wrong outcome.
pphysch 2 minutes ago
There's plenty of "high effort" market information manipulation going on, even before LLMs. Spread (justified, researched) FUD about a company your fund is shorting.
digiown 2 hours ago
Interesting theory. I'm inclined to disagree, however. Prediction markets essentially allow people to trade information for money, even the types historically more difficult to trade. There aren't enough people betting on things for deliberate misinformation to become worthwhile, IMO, and most people would stop betting after being in the wrong too often, unlike casinos which always let you win sometimes.
I believe the misinformation is largely by self-interested parties. Politicians as well as influencers trying to push agendas, and the engagement/attention farming for advertising revenue, which are largely indifferent to truth.
marginalia_nu 2 hours ago
It's the same as with crypto rug pulls; nobody is going to fall for that several times. There was still money to be made in that before everyone and their grandma wised up.
digiown 2 hours ago
eterm 2 hours ago
It is the failure mode of incorrect trust that has changed.
Previously you might get burned with some bad information or incorrect data or get taken in by a clever hoax once in a while.
Now you get overwhelmed by regurgitation, which itself gets fed back into the machine.
The ratio of people to bots reading has crashed to near zero.
We have burned the web.
pixl97 3 minutes ago
Hence dead internet theory has turned into dead internet reality.
lazystar 2 hours ago
I've been miserable over the last few weeks after coming to that same conclusion. It's so bad that I doubt the people who were pulling the strings can even tell what's going on anymore.
mnau 2 hours ago
Signal-to-noise ratio is getting *lower (EDIT: was "higher") than ever. I don't see a way out of this other than "human certified" digitally signed authorship (e.g. by using eIDAS in the EU). There could be a proxy to at least retain pseudo-anonymity, but trackable to a human. Tragedy of the commons strikes again.
PaulDavisThe1st 2 hours ago
"Tragedy of the commons" is a false concept that obscures greed, selfishness, and often lawlessness. Even its originator (Hardin) accepted that it does not describe actual history.
roxolotl an hour ago
The use of the word Tragedy in the name, I think, makes it easier for people to excuse themselves when they monopolize the commons. "Oh, it's a tragedy, humans are just selfish, we can't avoid it." The real tragedy is that people are comfortable excusing others' selfish, greedy behavior by saying it's innate.
armchairhacker an hour ago
There’s a lot of debate under your linked comment.
My understanding is that people tend to cooperate in smaller numbers or when reputation is persistent (the larger the group, the more reliable reputation has to be), otherwise the (uncommon) low-trust actors ruin everything.
Most humans are altruistic and trusting by default, but a large enough group will have a few sociopaths and misunderstood interactions; which creates distrust across the entire group, because people hate being taken advantage of.
PaulDavisThe1st 42 minutes ago
pino999 2 hours ago
And that human can use A.I. again. It won't help.
mnau 2 hours ago
I would argue that it can be circumvented, not that it won't help. If a human uses his/her signature for a content farm, it can be flagged as such.
varjag 2 hours ago
I suppose you meant SNR is getting lower.
3eb7988a1663 an hour ago
I was recently running into this while playing the latest Hollow Knight game. Several sloppified sites were obviously trying to map mechanics/items of the original game onto the new one. The new release is only ~six months old, so there is just not that much hard content available to reference.
My question is -why? Is it really worth the ad revenue to trick a few people looking into a few niche topics? Say you pick the top 5000 trending movies/music/games and generate fake content covering the gamut. What is the payback period?
neom an hour ago
I thought a lot last night about how we could protect HN, and I didn't come up with a good answer except maybe you'll need to have someone with a higher reputation vouch for you, aka invites. My internet community journey has mostly just been irc -> dA -> twitter -> HN. Too frequently these days I feel I might be putting emotional energy into something that isn't human on this site. It's hard to express how that makes me feel, but it's not pleasant at all. Sigh.
krapp 23 minutes ago
We can't. This forum is run by the company that used to be run by Sam Altman and it's already full of people who work in the industry that's driving AI adoption and who use and aggressively believe in AI to the point of religion. There are already bot accounts posting, and humans posting comments filtered by AI. Most Show HNs are vibe coded.
There's nothing anyone can do about it. No matter how many guidelines dang deploys, no matter how much negative social pressure we apply (and we could apply much more but doing so would just run afoul of the tone policing of the guidelines) people will use AI because they want to, and because it's a part of their identity politics, specifically to spite people who don't want to see it. They currently bother to mention when they use ChatGPT for a comment. It's just a matter of time until people don't even bother, because it's so normalized.
The Fediverse is currently good, the culture there is rabidly anti-capitalist and anti-AI. I like Mastodon. But that will eventually, inevitably get ruined as well, and we'll just have to move on to the next thing.
neom 17 minutes ago
CrzyLngPwd 15 minutes ago
There was a very brief window, of maybe hours or days, where the Internet could be trusted, and that was a long time ago.
arjie an hour ago
It is true that as the cost to construct fake content has gone to zero, we need some kind of scalable trust mechanism to access all this information. I don't yet know what this is but a Web of Trust structure always seems appealing. A lot of people are going to be excluded, but such is life, I suppose.
If I were to be honest, going to where the fish aren't is also going to help. Almost certainly there are very few LLM generated websites on the Gemini protocol.
I'm setting up a secondary archiver myself that will record simply the parts of the web that consent to it via robots.txt. Let's see how far I get.
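A "consent via robots.txt" check like the one described is straightforward with Python's standard library; here's a minimal sketch (the robots.txt contents and the `ConsentArchiver` user-agent name are invented for illustration — a real crawler would fetch each site's actual robots.txt):

```python
from urllib.robotparser import RobotFileParser

# A hypothetical site's robots.txt, fetched separately in a real archiver.
robots_txt = """\
User-agent: ConsentArchiver
Allow: /

User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

def may_archive(url: str, agent: str = "ConsentArchiver") -> bool:
    """Archive a URL only if the site's robots.txt permits this user agent."""
    return rp.can_fetch(agent, url)

print(may_archive("https://example.com/posts/1"))                   # True
print(may_archive("https://example.com/private/x", agent="Other"))  # False
```

The catch, of course, is that robots.txt expresses crawling preferences, not archival consent per se, so the mapping is a policy choice rather than a protocol guarantee.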
armchairhacker 8 minutes ago
I think if a Web of Trust becomes common, it will create a culture shift and most people won’t be excluded (compared to invite-only spaces today). If you have a public presence, are patient enough, or a friend or colleague of someone trusted, you can become trusted. With solid provenance, trust doesn’t have to be carefully guarded, because it can be revoked and the offender’s reputation can be damaged such that it’s hard to regain. Also, small sites could form webs of trust with each other, trusting and revoking other sites within the larger network in the same manner that people are vouched or revoked within each site (similar to the town -> state -> government -> world hierarchy); then you only need to gain the trust of an easy group (e.g. physically local or of a niche hobby you’re an expert in) to gain trust in far away groups who trust that entire group.
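The vouch/revoke mechanics described above are easy to prototype: treat trust as reachability in a directed graph of vouches, so revoking an edge cuts off everyone whose only path ran through it. A toy sketch (the names and the single-root policy are illustrative, not any real protocol):

```python
from collections import deque

class WebOfTrust:
    def __init__(self, roots):
        self.roots = set(roots)   # initially-trusted members
        self.vouches = {}         # voucher -> set of members they vouch for

    def vouch(self, voucher, member):
        self.vouches.setdefault(voucher, set()).add(member)

    def revoke(self, voucher, member):
        self.vouches.get(voucher, set()).discard(member)

    def trusted(self):
        """Everyone reachable from a root via vouch edges (BFS)."""
        seen, queue = set(self.roots), deque(self.roots)
        while queue:
            v = queue.popleft()
            for m in self.vouches.get(v, ()):
                if m not in seen:
                    seen.add(m)
                    queue.append(m)
        return seen

wot = WebOfTrust(roots={"alice"})
wot.vouch("alice", "bob")
wot.vouch("bob", "carol")       # carol's trust flows through bob
print(sorted(wot.trusted()))    # ['alice', 'bob', 'carol']

wot.revoke("alice", "bob")      # bob misbehaves; carol loses trust too
print(sorted(wot.trusted()))    # ['alice']
```

A real system would attach signatures to each vouch, and members with multiple independent vouch paths would survive a single revocation — which is exactly the "webs of webs" structure the comment describes.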
nwhnwh 17 minutes ago
The trust collapse: Infinite AI content is awful https://arnon.dk/the-trust-collapse-infinite-ai-content-is-a...
stavros an hour ago
You never could trust the internet. The difference is that now the problem is so widespread that it's finally spurring us into action, and hopefully a good "web of trust" or similar solution will emerge.
shevy-java an hour ago
AI is kind of like Skynet in the first Terminator movie. It now destroys our digital life. New autogenerated websites appear, handled by AI. Real websites become increasingly less likely to show up on people's daily info-feed. It is very strange compared to the 1990s; I feel we lost something here.
> The commons of the internet are probably already lost
That depends. If people don't push back against AI then yes. Skynet would have won without the rebel forces. And the rebels are there - just lurking. It needs a critical threshold of anger before they will push back against the AI-Skynet 3.0 slop.
Devasta an hour ago
The future of the internet is going to be invite-only enclaves. I sometimes wonder is anyone working on the next generation of discussion forums, or if it'll be a return to PHPBB.
marginalia_nu 40 minutes ago
lobste.rs is already kinda that I think, makes an interesting contrast to HN, which has a similar crowd but is open to anyone.
gustavus 2 hours ago
When I first started using the Internet there were 3 rules that were pounded into my head repeatedly.
1. Don't believe everything or anything you read or see on the Internet.
2. Never share personal information about yourself online.
3. Every man was a man, every woman was a man, and every teenager was an FBI agent.
I have yet to find a problem with the Internet that isn't caused by breaking one of the above rules.
My point being you couldn't ever trust the Internet before anyways.
WD-42 2 hours ago
You've always needed skepticism, of course. But it used to be if you came across an article about a super obscure video game from the early 90s (referencing the blog post here) you could be reasonably sure that it wasn't completely made up. There just wasn't the incentive to publish nonsense about super niche things because it took time and effort to do so.
Now you can collate a list of thousands of titles and simply instruct an LLM to produce garbage for each one and publish it on the internet. This is a real change, IMO.
PaulDavisThe1st 2 hours ago
You forgot Fido's Corollary:
3a. ... and nobody knows if you're a dog.
anigbrowl an hour ago
Yeah when I was 10 someone told me not to believe everything I read too. But guess what, that's kinda useless advice because consulting reference material is a necessity and there are wide variations in the quality of reference material. This sort of 'don't trust anyone' heuristic can just as easily lead to conclusions that the earth is flat, the moon landing never happened, vaccinations are the leading cause of disease etc.
underlipton an hour ago
It comes down to Google's failure. Rather than outright defeating the SEO eldritch abomination by adopting a zero-tolerance policy toward those tactics, Google made a mutually advantageous bargain with them, of course leaving out a third party: us. They could do this because they had no competition. Now, the culture of enabling bad actors is, unfortunately, set.
Google did all the innovation it needed to and ever is going to. It needed to be broken up a decade ago. We can still do it now. Though I don't know how much it will save, especially if we don't also go after Apple, and Meta, and Microsoft.
avidiax an hour ago
It would be in Google's ultimate interest to label AI-generated websites and potentially rank them lower in search results.
AI needs to be kept up to date with training data. But that same training data is now poisoned with AI hallucination. Labelling AI generated media helps reduce the amount of AI poison in the training set, and keeps the AI more useful.
It also simply undermines the quality of search, both for human users and for AI tool use.
dehrmann an hour ago
> Rather than outright defeating the SEO...
SEO is a slippery slope on both sides because a little bit is good for everyone. Google wanted pages it could easily extract meaning from, publishers wanted traffic, and users wanted relevant search results. Now there's a prisoner's dilemma where once someone starts abusing SEO, it's a race to the bottom.
underlipton 37 minutes ago
>SEO is a slippery slope on both sides because a little bit is good for everyone
I reject this emphatically. Google should never have been in the business of shaping internet content. Perhaps they should have even gone out of their way to avoid doing so. Without Google (or a better-performing competitor) acquiescing to the game, there is no SEO market.
rvz 2 hours ago
It always has been like that on the internet. Now made worse for obvious reasons.
On the internet no one knows if you're a dog, human or a moltbot.
ninjagoo 2 hours ago
The internet has gone from a high-trust society to a low-trust society, all in the span of a couple of decades.
Enshittification strikes again.
And it doesn't appear to have any means of ridding itself of the bad apples. A sad situation all around.
PessimalDecimal 2 hours ago
It might be more accurate to say that a lot of low-trust societies have become connected to the Internet which weren't nearly as online a couple of decades ago.
For example, a huge fraction of the world's spam originates from Russia, India and Bangladesh. And we know that a lot of the romance scams are perpetrated by Chinese gangs operating out of quasi-lawless parts of Myanmar. Not so much from, say, Switzerland.
blell 10 minutes ago
70% of the GDP of Laos comes from scamming people in the first world.
"A report by the Global Initiative on Transnational Organised Crime (based on United States Institute of Peace findings) estimated that revenues from 'pig-butchering' cyber scams in Laos were around US $10.9 billion, which would be equivalent to more than two-thirds (≈67–70%) of formal Lao GDP in a recent year."
https://globalinitiative.net/wp-content/uploads/2025/05/GI-T...
kgeist an hour ago
Russia has been among the top sources of spam since the early 2000s; it's not like anything changed lately. Mail-order bride scams and similar peaked around 2005. It doesn't take a lot of people to send spam, so I don't think it's correlated with the general population's online presence. I'd actually say it's quite the opposite: in 2026, Russia has never been more disconnected from the Western parts of the Internet than it is now (the Russian Internet watchdog has blocked something like 30% of foreign resources over the past few years, while Russian IPs have been routinely banned on Western sites since 2022; I can barely open anything without a VPN).
For that reason, and because of limited English proficiency, Russian netizens rarely visit foreign resources these days, except for a few platforms without a good Russian replacement like Instagram and YouTube (both banned btw, only via a VPN), where they usually stay mostly within their Russian-speaking communities. I'm not sure why any of them would be the reason the Internet as a whole has supposedly become low-trust. The OP in question is some SEO company using an LLM to churn out sites with "unique content." We already had this stuff 20 years ago, except the "unique content" was generated by scripts that replaced words with synonyms. Nothing really new here.
marginalia_nu an hour ago
expedition32 34 minutes ago
digiown 2 hours ago
The WWW has never been a high-trust place. Some smaller communities, sure, but anyone has always been able to write basically what they want on the internet, true or false, as long as it is not illegal in the country hosting it, which is close to nothing in the US.
The difference is that there historically wasn't much to be gained by annoying or misleading people on the internet, so trolling was mainly motivated by personal satisfaction. Two things have changed since then: (1) most people now use the internet as their primary information source, and (2) the cost of creating bullshit has fallen precipitously.
allenu 2 hours ago
I agree. It's not that the web was high-trust. It was more that if you landed on a niche web page, you knew whoever put it together probably had at least a little expertise (and care) since it wouldn't be worth writing about something that very few people would find and read anyway. Now that it's super cheap to produce niche content, even if very few people find a page, it's "worth it" to produce said garbage as it gives you some easy SEO for very little time investment.
The motivation for content online has changed over the last 20 years from people wanting to share things they're interested in to one where the primary goal is to collect eyeballs to make a profit in some way.
PaulDavisThe1st 2 hours ago
to be boring, the term "enshittification" was invented by one individual, recently, and has a specific meaning. it does not refer to "things just get worse" but describes a specific strategy adopted by corporations using the internet for commercial purposes.
pdonis an hour ago
> a specific strategy adopted by corporations using the internet for commercial purposes.
Isn't that what's driving the pollution of the Internet by LLMs?
PaulDavisThe1st an hour ago
LPisGood an hour ago
Words change meaning as they are used. Especially negative words that may start rather specific tend to get used more generally until the specificity is lost.
anigbrowl an hour ago
PaulDavisThe1st an hour ago
krapp 27 minutes ago
>to be boring, the term "enshittification" was invented by one individual, recently, and has a specific meaning. it does not refer to "things just get worse"
It literally started meaning that within hours of first being posted to HN and used. Sorry, that's just how language works. Enshittification got enshittified. Deal with it and move on.
expedition32 an hour ago
People talk about AI slop but I predict that in a couple of years you won't be able to tell...
And at that point does it even matter? Zuckerberg wins.
throwaway2027 2 hours ago
"You really think someone would do that? Just go on the Internet and tell lies?" https://knowyourmeme.com/memes/just-go-on-the-internet-and-t...
nicole_express 2 hours ago
A big part of my annoyance is that in the past, something like Phantasy Star Fukkokuban would not really be worth lying about; people need a reason to lie.
anigbrowl an hour ago
I'm gonna guess that it's just popular enough that being in the top 5 results on search engines yields a small net gain in ad revenue. It's possible the decision to generate the fake article was itself made by a machine.
Great piece btw
yellowapple an hour ago
There was no reason to lie about knowing the Scots language well enough to be the primary contributor by volume to Scots Wikipedia, and yet that's something that happened.
pdonis an hour ago
surgical_fire 7 minutes ago
Lies are intentional. A liar cares about the truth and attempts to hide it.
What we have here is worse; LLMs give you bullshit. A bullshitter does not care if something is true or false, it just uses rhetoric to convince you of something.
I am far from being someone nostalgic about the old internet, or the world in general back then. Things in many ways sucked back then, we just tend to forget how exactly they sucked. But honestly, a LLM-driven internet is mostly pointless. If what I am to read online is AI generated crap, why bother reading it on websites and not just reading it straight from a chatbot already?