When internal hostnames are leaked to the clown (rachelbythebay.com)
415 points by zdw 17 hours ago
notsylver 16 hours ago
I think people are misunderstanding. This isn't CT logs; it's a wildcard certificate, so it wouldn't leak the "nas" part. It's Sentry catching client-side traces and calling home with them, then picking out the hostname from the request that sent them (i.e., "nas.nothing-special.whatever.example.com") and trying to poll it for whatever reason, which ends up at a separate server that catches the wildcard domain and rejects the connection.
spondyl 16 hours ago
My first thought was perhaps they're trying to fetch a favicon for rendering against the traces in the UI?
n0w 15 hours ago
They're likely trying to retrieve source maps
hsbauauvhabzb 16 hours ago
Sounds like a great way to get sentry to fire off arbitrary requests to IPs you don’t own.
Sure hope nobody does that targeting IPs (like that blacklist in masscan) that will auto-report you to your ISP/ASN/whatever for your abusive traffic. Repeatedly.
leoc 16 hours ago
Obligatory Bruce Schneier: https://www.schneier.com/blog/archives/2008/03/the_security_...
ralferoo 8 hours ago
fc417fc802 8 hours ago
doctorpangloss 3 hours ago
people are misunderstanding because the blog post is really confusing and poorly written haha
andix 7 hours ago
Hostnames are not private information. There are too many ways they can get leaked to the outside world.
It can be useful to hide a private service behind a URL that isn't easy to guess (smaller attack surface, because a lot of attackers can't find the service). But the secret needs to be in the URL path, not the hostname.
bad: my-hidden-fileservice-007-abc123.example.com/
good: fileservice.example.com/my-hidden-service-007-abc123/
In the first example the name is leaked via DNS queries, TLS certificates and many other channels. In the second example the secret path is only transmitted over HTTPS and doesn't leak as easily.
amichal 6 hours ago
Marginally better, for sure, but in this case the path would also have been "leaked" to the Sentry instance owned by the developers of the NAS device phoning home. This can happen in zillions of ways, and it's a good reason to use relatively opaque URLs in general rather than "friendly IDs", and to be careful about putting secrets in URLs.
andix 6 hours ago
Just try it. The first example gets attacked by bots nearly immediately after issuing a TLS cert. The second one usually doesn't get detected at all.
Kwpolska 4 hours ago
Wowfunhappy 5 hours ago
Curious, does this still apply if http is used exclusively?
b1temy 16 hours ago
Is "clown GCP Host" a technical term I am unaware of, or is the author just voicing their discontent?
Seems to me that the problem is the NAS's web interface using Sentry for logging/monitoring, and part of what was logged were internal hostnames (which might be named in a way that carries sensitive info, e.g., the corp-and-other-corp-merger example they gave). So it doesn't matter that the host is inaccessible from a private network; the name itself is sensitive information.
In that case, I would personally replace the operating system of the NAS with a free/open-source one that I trust and that does not phone home. I suppose some form of ad-blocking à la Pi-hole, or some other DNS configuration that blocks Sentry calls, would work too, but I would just go with an operating system I trust.
jraph 16 hours ago
> Is "clown GCP Host" a technical term I am unaware of, or is the author just voicing their discontent?
Clown is Rachel's word for (Big Tech's) cloud.
hk1337 6 minutes ago
So, it's basically like Cloud2Butt but with a different word.
dehrmann 16 hours ago
She was (or is) at Facebook, and "clowntown" and "clowny" are words you see there.
jraph 15 hours ago
mintplant 16 hours ago
iwontberude 15 hours ago
baxtr 15 hours ago
Anyone know how she came up with the word or why she chose it?
rwmj 13 hours ago
kadoban 14 hours ago
oniony 14 hours ago
senectus1 16 hours ago
Amusingly, it's a term used by my co-workers to describe anyone that's not them.
jraph 16 hours ago
JackFr 5 hours ago
NoGravitas 6 hours ago
jrflowers 15 hours ago
1vuio0pswjnm7 3 hours ago
I remember the term "clown computing" being used to describe "cloud computing" on IRC earlier than 2016.
I use a localhost TLS forward proxy for all TCP and HTTP over the LAN.
There is no access to remote DNS, only local DNS. I use stored DNS data periodically gathered in bulk from various sources. As such, HTTP and other traffic over TCP that uses hostnames cannot reach hosts on the internet unless I allow it in local DNS or the proxy config.
For me, "WebPKI" has proven useful for blocking attempts to phone home. Attempts to phone home that try to use TLS will fail.
I also like adding a CSP response header that effectively blocks certain JavaScript.
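As a rough illustration of the CSP part (the header value is only an example of the idea, not a recipe), something like this in the proxy's responses stops a page's JavaScript from talking to anything but its own origin:

  Content-Security-Policy: default-src 'self'; script-src 'self'; connect-src 'self'

With connect-src pinned to 'self', a Sentry client bundled into a web UI may still load, but its fetch/XHR beacons to sentry.io get refused by the browser.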
It sounds like the blog author gave the NAS direct access to the internet
Every user is different, not everyone has the same preferences
simoncion an hour ago
> It sounds like the blog author gave the NAS direct access to the internet
FTFA:
Every time you load up the NAS [in your browser], you get some clown GCP host knocking on your door, presenting a SNI hostname of that thing you buried deep inside your infrastructure. Hope you didn't name it anything sensitive, like "mycorp-and-othercorp-planned-merger-storage", or something.
Around this time, you realize that the web interface for this thing has some stuff that phones home, and part of what it does is to send stack traces back to sentry.io. Yep, your browser is calling back to them, and it's telling them the hostname you use for your internal storage box. Then for some reason, they're making a TLS connection back to it, but they don't ever request anything. Curious, right?
This is when you fire up Little Snitch, block the whole domain for any app on the machine, and go on with life.
I disagree with your conclusion. The post speaks specifically about interactions with the NAS through a browser being the source of the problem and the use of an OSX application firewall program called Little Snitch to resolve the problem. [0] The author's ~fifteen years of posts demonstrate that she is a significantly accomplished and knowledgeable system administrator who has configured and debugged much trickier things than what's described in the article. It's not impossible that the source of the problem has been misidentified... but it's extremely unlikely. Having said that, one thing I do find likely is that the NAS in question is isolated from the Internet; that's just a smart thing that a savvy sysadmin would do.
[0] I find it... unlikely that the NAS in question is running OSX, so Little Snitch is almost certainly running on a client PC, rather than the NAS.
rausr 13 hours ago
> Is "clown GCP Host" a technical term I am unaware of, or is the author just voicing their discontent?
The term has been in use for quite some time; it voices sarcastic discontent with the hyperscaler platforms _and_ their users (the idea being that the platform is "someone else's computer" or - more up to date - "a landlord for your data"). I'm not sure if she coined it, but if she did then good on her!
Not everyone believes using "the cloud" is a good idea, and for those of us who have run our own infrastructure "on-premises" or co-located, "the clown" is considered suitably patronising. Just saying ;)
b1temy 12 hours ago
> the idea being that the platform is "someone else's computer"
I have a vague memory of once having a userscript or browser extension that replaced every instance of the word "cloud" with "other peoples' computers". (iirc while funny, it was not practical, and I removed it).
fwiw I agree and I do not believe using "the cloud" for everything is a good idea either, I've just never heard of the word "clown" being used in this way before now.
masto 10 hours ago
seethishat 10 hours ago
Also, sometimes, we use the term 'weenie' rather than 'clown'. They are interchangeable.
m463 4 hours ago
with clown=cloud, GCP must mean google clown platform
user_of_the_wek 9 hours ago
The circus left town, but the clowns are still here.
yabones 9 hours ago
Stuff like this is why I consider uBlock Origin to be the bare minimum security software for going on the web. The amount of 3rd party scripts running on most pages, constantly leaking data to everybody listening, is just mind boggling.
It's treating a symptom rather than a disease, but what else can we do?
behringer 5 hours ago
I also have taken to using AdGuard Home on the router. It blocks 15 or 20 percent of all my traffic. It's quite scary how bad the tracking and other nasties have become.
mike-cardwell 10 hours ago
Only way I can think of protecting against this is to put a reverse proxy in front of it, like Nginx, and inject CSP headers to prevent cross site requests. Wouldn't block the NAS server side from making external calls, but would prevent your browser doing it for them as is the case here. Also would prevent stuff like Google Analytics if they have it. If you set up a proxy, you could also give it a local hostname like nas.local or something with a cert signed by your private CA that Nginx knows about, and then point the real hostname at Nginx, which has the wildcard cert.
Bit of a pain to set this all up though. I run a number of services on my home network and I always stick Nginx in front with a restrictive CSP policy, and then open that policy up as needed. For example, I'm running Home Assistant, and I have the Steam plugin, which I assume is responsible for requests from my browser like for: https://avatars.steamstatic.com/HASH_medium.jpg, which are being blocked by my injected CSP policy
P.S. I might decide to let that Steam request through so I can see avatars in the UI. I also inject "Referrer-Policy: no-referrer", so if I do decide to do that, at least they won't see my HA hostname in their logs by default.
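A rough sketch of that kind of setup (hostnames, cert paths and the upstream address are placeholders, and the CSP shown is stricter than most UIs will tolerate out of the box):

  server {
      listen 443 ssl;
      server_name nas.local;

      ssl_certificate     /etc/nginx/certs/nas.local.crt;
      ssl_certificate_key /etc/nginx/certs/nas.local.key;

      # Browser-enforced: the UI's JS can only talk back to this origin
      add_header Content-Security-Policy "default-src 'self'" always;
      add_header Referrer-Policy "no-referrer" always;

      location / {
          proxy_pass https://192.168.1.10;            # the NAS itself
          proxy_set_header Host $host;
          proxy_hide_header Content-Security-Policy;  # so the injected policy wins
      }
  }

You then loosen connect-src/img-src case by case as things visibly break (e.g. allowing avatars.steamstatic.com for the Steam plugin).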
RamRodification 23 minutes ago
ATM machine
dd_xplore 6 hours ago
NPM is pretty painless
atmosx 15 hours ago
I bought a Synology NAS and I have already regretted it 3-4 times. Apart from the software made available by the community, there is very little one can do with this thing.
Using LE to apply SSL to services? Complicated. Non-standard paths, custom distro, everything hidden (you can't figure out where to place the SSL cert or how to restart the service, etc). Of course you will figure it out if you spend 50 hours… but why?
Don't get me started on the old rsync version, the lack of Midnight Commander and/or other utils.
I should have gone with something that runs proper Linux or BSD.
joshstrange 11 hours ago
Unless you know what you are walking into ahead of time I would not recommend Synology to someone who wants to host a bunch of stuff and also wants a NAS. I don’t touch any of the container/apps stuff on my Synology(s), they are simply file servers for my application server. For this purpose, I find Synology rock solid and I’ve been very happy with them.
That said, I’ll probably try out the UniFi NAS offerings in the near future. I believe Synology has semi-walked-back its draconian hard drive policy but I don’t trust them to not try that again later. And because I only use my Synology as a NAS I can switch to something else relatively easily, as long as I can mount it on my app server, I’m golden.
PunchyHamster 14 hours ago
You wanted a server, and you're complaining that a NAS isn't one.
Gud 11 hours ago
More like, user wanted an open operating system but chose a proprietary one.
atmosx 10 hours ago
NAS is the primary function. But yes, I want a full Linux server where I can decide what to install and which protocol to use to upload and/or download files.
criddell 8 hours ago
lurking_swe 6 hours ago
tetris11 15 hours ago
(Copied from an earlier comment of mine)
There are guides on how to mainline Synology NASes to run up-to-date Debian on them: https://forum.doozan.com/list.php
tgpc 13 hours ago
please don't do this to your synology
leave it to serve files and iscsi. it's very good at it
if you leave it alone, no extra software, it will basically be completely stable. it's really impressive
aetherspawn 10 hours ago
Second this, just use it for files, it’s great for it. 10+ years uptime if you leave it alone.
alexalx666 9 hours ago
I bought a Synology RS217 for $100 last year and it's the best tech purchase I've made in years. The software it comes with is the best web interface I've experienced in years. The simplicity, stability and attention to detail remind me of old Macs. I have a Mac mini as an application server and did not expect to use the Synology for anything but file storage / replication. However, it comes with a great torrent client that I use all the time now. We also use Synology Office instead of Google Docs now. It exceeded all my expectations, and when it dies I will immediately buy one of the new rack stations they offer.
reddalo 14 hours ago
I'm so happy I didn't buy a NAS, Synology or not. I think a proper computer running Linux gives me so much more flexibility.
butvacuum 13 hours ago
that's still a NAS.
paffdragon 13 hours ago
You can run a container on Synology and install your custom services, tools there. At least that is what I do. For custom kernel modules you still need a Synology package for something like Wireguard.
If you have OPNSense, it has an ACME plugin with Synology action. I use that to automatically renew and push a cert to the NAS.
That said, since I like to tinker, Synology feels a bit restricted, indeed. Although there is some value in a stable core system (like these immutable distros from Fedora Atomic).
Arrowmaster 10 hours ago
The extremely old kernel on Synology makes it hard or impossible to run some containers.
paffdragon 4 hours ago
tbyehl 8 hours ago
> Using LE to apply SSL to services? Complicated.
https://github.com/JessThrysoee/synology-letsencrypt
> there is very little one can do with this thing.
It has a VMM and Docker. Entware / opkg exist for it. There's very little that can't be done, but expecting to use an appliance that happens to be Linux-based as a generic Linux server is going to lead to challenges. Be it Synology, TrueNAS, or anything else.
alimoeeny 4 hours ago
I personally have been blocking Sentry and all related domains on my machines. I understand this is not generally applicable advice. For me that's the right choice.
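For anyone wanting to copy this, the crudest version is a few /etc/hosts entries (hostnames here are illustrative - Sentry's ingest endpoints live on per-org subdomains, so check what your own traffic actually hits):

  0.0.0.0 sentry.io
  0.0.0.0 browser.sentry-cdn.com
  0.0.0.0 ingest.sentry.io

Since /etc/hosts can't do wildcards, a DNS-level blocker (Pi-hole, AdGuard Home) or an application firewall rule like the Little Snitch block in the article is the more thorough way to do it.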
trjordan 7 hours ago
Having recently set up sentry, at least one of the ways they use this is to auto-configure uptime monitoring.
Once they know what hosts you run, it'll ping those hostnames periodically. If a host stays up and stable for a couple of days, you'll get an in-product alert: "Set up uptime monitoring on <hostname>?"
Whether you think this is valid, useful, acceptable, etc. is left as an exercise to the reader.
Linkd 7 hours ago
Expansion opportunities
ggm 15 hours ago
Reverse address lookup servers routinely see escaped attempts to resolve ULA and RFC 1918 addresses. If you can tie the resolver to other valid data, you know inside state.
Public services see one-way traffic (no TCP return flow possible) from almost any source IP. If you can tie that to other corroborated data, the same: you see packets from "inside" all the time.
Darknet collection during final /8 run-down captured audio in UDP.
Firewalls? ACLs? Pah. Humbug.
_gmax1 14 hours ago
"Darknet collection during final /8 run-down captured audio in UDP."
Mind elaborating on this? SIP traffic from which year?
ggm 14 hours ago
2010/2011 time frame. Google and others helped sink the traffic; it's all written up at APNIC Labs. It's how 1.1.1.0/24 got held back from general release.
advisedwang 4 hours ago
LtdJorge 14 hours ago
RTP I’d say
mixedbit 13 hours ago
I have investigated a similar situation on Heroku. Heroku assigns a random subdomain suffix to each new app, so app URLs are hard to guess and look like this: test-app-28a8490db018.herokuapp.com. I noticed that as soon as a new Heroku app is created, without making any requests to the app that could leak the URL via a DNS lookup, the app is hit by requests from automatic vulnerability scanning tools. Heroku confirmed that this is due to the new app URL being published in certificate authority logs, which are actively monitored by vulnerability scanners.
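You can watch this happen yourself: anyone, scanner or not, can pull freshly logged certificates for a domain, e.g. via crt.sh (query parameters as I remember them from crt.sh's search page):

  curl -s 'https://crt.sh/?q=%25.example.com&output=json' | jq -r '.[].name_value' | sort -u

where %25 is a URL-encoded '%' wildcard. Swap in a herokuapp.com-style domain and you get a feed of fresh targets, which is exactly what those scanning tools do.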
adolph 4 hours ago
> certificate authority logs, which are actively monitored by vulnerability scanners
That sounds like a large kick-me sign taped to every new service. Reading how certificate transparency (CT) works leads me to think that there was a missed opportunity to publish hashes to the logs instead of the actual certificate data. That way a browser performing a certificate check can verify in CT, but a spammer can't monitor CT for new domains.
nightpool 4 hours ago
Really? Is that new? My apps use wildcard domains: https://i.postimg.cc/SQ82S0Dp/image.png
mixedbit 9 minutes ago
This applies only to Heroku Fir and Cedar apps (apps that run in Heroku Private Spaces). Heroku Common Runtime apps still use a shared wildcard certificate, and their domains are not discoverable like this.
ashu1461 14 hours ago
Isn't the article over-emphasising the leakage of internal URLs a little bit?
Internal hostnames leaking is real, but in practice it's just one tiny slice of a much larger problem: names and metadata leak everywhere - logs, traces, code, monitoring tools, etc.
icedchai 3 hours ago
Is it a real problem? My internal hostnames resolve to RFC-1918 addresses and I have a firewall. If I wasn't so lazy, I'd use split DNS.
reddalo 14 hours ago
In other words: never put sensitive information in names and metadata.
dmichulke 13 hours ago
Or name them after Little Bobby Tables.
Is there some sort of injection that's a legal hostname?
jerf 8 hours ago
m3047 4 hours ago
This is exactly why I have a number of "appliances" which never get clown updates: they have addresses in a subnet I block at the segment edge, they have DNS which never answers, and there are a few entries in the "DNS firewall" [0] (RPZ) which mostly serve as canaries.
This is the problem with the notion that "in the name of securitah IoT devices should phone home for updates": nobody said "...and map my network in the name of security"
[0] Don't confuse this with Rachel's honeypot wildcarding *.nothing-special.whatever.example.com for external use.
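For anyone unfamiliar with RPZ: it's an ordinary zone that the resolver (BIND natively, Unbound via its rpz option) consults before answering. A minimal sketch with made-up names, SOA/NS boilerplate omitted:

  # named.conf
  options {
      response-policy { zone "rpz.local"; };
  };

  ; zone file for rpz.local
  sentry.io                  CNAME .   ; rewrite to NXDOMAIN
  *.sentry.io                CNAME .
  nas.internal.example.com   CNAME .   ; canary: any query for this means something leaked the name

Hits on those records show up in the resolver's RPZ logging, which is what makes the canary entries useful.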
teekert 16 hours ago
Is this a Chrome/Edge thing? Or do privacy respecting browsers also do this? If so, it's unexpected.
If Firefox also leaks this, I wonder if this is something mass-surveillance related.
(Judging from the downvotes, I misunderstood something)
nomercy400 14 hours ago
From what I understand, sentry.io is like a tracing and logging service, used by many organizations.
This helps you (=NAS developer) to centralize logs and trace a request through all your application layers (client->server->db and back), so you can identify performance bottlenecks and measure usage patterns.
This is what you can find behind the 'anonymized diagnostics' and 'telemetry' settings you are asked to enable/consent.
For a web UI it is implemented via JavaScript, which runs on the client's machine and hooks into the clicks, API calls and page content. It then sends statistics and logs back to, in this case, sentry.io. Your browser just sees JavaScript, so don't blame it. Privacy Badger might block it.
It is as nefarious as the developer of the application wants it to be. Normally you would use it to centralize logging, find performance issues, and get a basic idea of what features users actually use, so you can debug more easily. But you can also use it to track users. And don't forget, sentry.io is a cloud solution; if you put data on machines outside your control, expect it to be public. Sentry has a self-hosted solution, btw.
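Concretely, the client-side integration is usually just a few lines of JavaScript like this (the DSN is the placeholder from Sentry's docs; the point is that every error or trace the SDK sends carries the page URL, and therefore your internal hostname, to whatever host the DSN points at):

  import * as Sentry from "@sentry/browser";

  Sentry.init({
    dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // where events are shipped
    tracesSampleRate: 1.0, // send performance traces as well as errors
  });

Self-hosting just means the DSN points at your own box instead of sentry.io.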
jeroenhd 14 hours ago
My employer uses Sentry for (backend) metrics collection so I had to unblock it to do my job. I wish Sentry would have separate infra for "operating on data collected by Sentry" and "submit every mouse click to Sentry" so I could block their mass surveillance and still do my job, but I suppose that would cut into their profit margins.
My current solution is a massive hack that breaks down every now and then.
wbobeirne 10 hours ago
linhns 6 hours ago
Well somehow Rachel's website is not sending back any response now.
notpushkin 10 hours ago
zaptheimpaler 15 hours ago
Oh god this sucks. I've been setting up lots of services on my NAS pointing to my own domains recently. Can't even name the domains on my own damn server with an expectation of privacy now.
jeroenhd 14 hours ago
The (somewhat affordable) productized NASes all suffer from big tech diseases.
I think a lot of people underestimate how easy a "NAS" can be made if you take a standard PC, install some form of desktop Linux, and hit "share" on a folder. Something like TrueNAS or one of its forks may also be an option if you're into that kind of stuff.
If you want the fancy Docker management web UI stuff with as little maintenance as possible, you may still be in the NAS market, but for a lot of people NAS just means "a big hard drive all of my devices can access". From what I can tell, the best middle ground between "what the box from the store offers" and "how to build one yourself" is a (paid-for) NAS OS like HexOS, where analytics, tracking, and data sales are not used to cover for race-to-the-bottom pricing.
zaptheimpaler 14 hours ago
Actually I host everything on a Linux PC/server, but a different box runs pfSense and a local DNS resolver, so I was talking about setting up split-brain DNS there - so I don't have to manually edit the hosts file on every machine and keep it up to date with IP changes. Personally I really like Docker Compose; it's made running the little homeserver very easy.
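For reference, the split-brain part is only a couple of lines of resolver config; a sketch in Unbound syntax (which is what pfSense's DNS Resolver uses - names and addresses made up):

  server:
    local-zone: "home.example.com." transparent
    local-data: "nas.home.example.com. IN A 192.168.1.20"
    local-data: "media.home.example.com. IN A 192.168.1.21"

Clients pointed at that resolver get the private addresses, and the names never need to exist in public DNS.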
jeroenhd 13 hours ago
prmoustache 11 hours ago
I don't even understand what kind of web UI one would want.
All you really need is a bunch of disks and an operating system with an SSH server. Even the likes of Samba and NFS aren't really useful anymore.
Gigachad 25 minutes ago
jeroenhd 10 hours ago
AndyMcConachie 13 hours ago
The real trick, and the reason I don't build my own NAS, is standby power usage. How much wattage will a self-built Linux box draw when it's not being used? It's not easy to figure out, and it's not easy to build a NAS optimized for this.
Whereas Synology or other NAS manufacturers can tell me these numbers exactly and people have reviewed the hardware and tested it.
ssl-3 6 hours ago
lstodd 4 hours ago
jraph 15 hours ago
> Can't even name the domains on my own damn server with an expectation of privacy now.
You never could. A hostname or a domain is bound to leave your box; it's meant to. All it takes is sending an email with a local email client.
(Not saying it's fine - the NAS leak still sucks)
ahoka 10 hours ago
I have internal zones in my home network and requests to resolve them never leave the private network. So no, it's not meant to.
jraph 10 hours ago
zaptheimpaler 14 hours ago
I don't know much about email, but how would some random service send an email from my domain if I've never given it any auth tokens?
TheDong 12 hours ago
jraph 14 hours ago
superkuh 7 hours ago
I love that this write-up is hosted on both HTTP and HTTPS. I cannot access the HTTPS version, but the HTTP version displays just fine. Now that's reliability.
DANmode 4 hours ago
> I cannot access the HTTPS version
Curiosity begs: why not?
superkuh 8 minutes ago
I opened it on an old computer with an old linux distro with an old browser because old linux distros have reliable and working accessibility features like screen readers and good non-gpu text to speech and advanced keyboard/mouse sharing. Modern linux distros do not. Don't worry, I have javascript execution/etc turned off by default on that machine.
HocusLocus 3 hours ago
The Clown is my master
I've been chosen!
Eeeeeeeeeah!
stingraycharles 17 hours ago
I don’t understand. How could a GCP server access the private NAS?
I agree the web UI should never be monitored using Sentry. I can see why they would want it, but at the very least it should be opt-in.
minitech 17 hours ago
It couldn’t, but it tried.
copperx 16 hours ago
A for effort, F for firewall.
throwaway290 17 hours ago
It said knocking, not accessing
also
> you notice that you've started getting requests coming to your server on the "outside world" with that same hostname.
NitpickLawyer 16 hours ago
Not sure why they made the connection to sentry.io and not to CT logs. My first thought was that "*.some-subdomain." got added to the CT logs and someone is scanning *. with well-known hostnames, of which "nas" would be one. Curious if they have more insight into sentry.io leaking and where it leaks to...
jraph 16 hours ago
That hypothesis seems less likely and more complicated than the sentry one.
Scanning wildcards for well-known subdomains seems both quite specific and rather costly for unclear benefits.
flexagoon 10 hours ago
Bots regularly try to bruteforce domain paths to find things like /wp-admin, bruteforcing subdomains isn't any more complicated
jraph 10 hours ago
rawling 15 hours ago
I feel like the author would have noticed and said so if she was getting logs for more than just the one host.
A1kmm 15 hours ago
But she mentioned: 1) it isn't in DNS, only /etc/hosts, and 2) they are making a connection to it. So they'd need to get the IP address to connect to from somewhere as well.
jeroenhd 15 hours ago
From the article:
> You're able to see this because you set up a wildcard DNS entry for the whole ".nothing-special.whatever.example.com" space pointing at a machine you control just in case something leaks. And, well, something *did* leak.
They don't need the IP address itself, it sounds like they're not even connecting to the same host.
bardsore 15 hours ago
Unless she hosts her own cert authority or is using a self-signed cert, the wildcard cert she mentions is visible to the public on sites such as https://crt.sh/.
heipei 12 hours ago
imtringued 13 hours ago
Because sentry.io is a commercial application monitoring tool which has zero incentive to do any kind of application monitoring for non-paying customers. That's just costs without benefits.
You would now have to argue that a random third party is using, and therefore paying, sentry.io to monitor random subdomains for the dubious benefit of knowing that the domain exists, even though they are paying for something that is way more expensive.
It's far more likely that the NAS vendor integrated sentry.io into the web interface and sentry.io is simply trying to communicate with monitoring endpoints that are part of said integration.
From the perspective of the NAS vendor, the benefits of analytics are obvious. Since there is no central NAS server where all the logs are gathered, they would have to ask users to send the error logs manually which is unreliable. Instead of waiting for users to report errors, the NAS vendor decided to be proactive and send error logs to a central service.
rcakebread 7 hours ago
TIL Rachel uses a Mac.
audience_mem 7 hours ago
How do you know?
JSR_FDED 7 hours ago
Little Snitch?
cwillu 10 hours ago
Just getting 404 not found
that_guy_iain 15 hours ago
This is actually a really interesting way to attack a sensitive network: it lets you map the internal layout of that network. Getting access is obviously the main challenge, but once you're in there you need to know where to go and what to look for. If you've already got that knowledge when planning the attack to gain entry, then you've got the upper hand. So while it kinda seems like "OK, so they have a hostname they can't access, why do I care?", if you're doing high-end security at the sysadmin level then this is the sort of small nitpicking that it takes to be the best.
TZubiri 16 hours ago
>Hope you didn't name it anything sensitive, like "mycorp-and-othercorp-planned-merger-storage", or something.
So, no one competent is going to do this; domain names are not encrypted by HTTPS, so any sensitive info goes in the URL path.
I think being controlling of domain names is a sign of a good sysadmin, it's also a bit schizophrenic, but you gotta be a little schizophrenic to be the type of sysadmin that never gets hacked.
That said, domains not leaking is one of those "clean sheet" features that you go for for no particular reason, and it feels nice, but if you don't get it, it's not consequential at all. It's like driving at exactly 50mph, or having a green streak on GitHub. You are never going to rely on that secrecy, if only because some ISP might see it, but it's 100% achievable that no one will start pinging your internal host and polluting your hosts (if you do domain name filtering).
So what I'm saying is, I appreciate this type of effort, but it's a bit dramatic. Definitely uninstall whatever junk leaked your domain though, but it's really nothing.
Jolter 15 hours ago
Obl. nitpick: you mean paranoia, presumably. Schizophrenia is a dissociative/psychotic disorder, paranoia is the irrational belief that you’re being persecuted/watched/etc.
Btw, in this case it can’t be paranoia since the belief was not irrational - the author was being watched.
TZubiri 15 hours ago
You are right, I meant paranoid.
>Btw, in this case it can’t be paranoia since the belief was not irrational - the author was being watched.
Yes, but I mean being overly cautious in the threat model. For example, birds may be watching through my window; it's true, and I might catch a bird watching my house, but it's paranoid in the sense that it's too tight of a threat model.
jraph 15 hours ago
nottorp 10 hours ago
jraph 16 hours ago
> any sensitive info is pushed to the URL Path
This too is not ideal. It gets saved in the browser history, and if the url is sent by message (email or IM), the provider may visit it.
> Definitely uninstall whatever junk leaked your domain though, but it's really nothing.
We are used to the tracking being everywhere, but it is scandalous and should be considered as such. Not the subdomain leak part - that's just how Rachel noticed - but the non-advertised tracking from an appliance chosen to be connected privately.
TZubiri 15 hours ago
>This too is not ideal. It gets saved in the browser history, and if the url is sent by message (email or IM), the provider may visit it.
Sure. POST for extra security.
> Not the subdomain leak part, that's just how Rachel noticed, but the non advertised tracking from an appliance chosen to be connected privately.
If this were a completely local product, like say a USB stick? Sure. But this is a Network Attached Storage product, and the user explicitly chose to use network functions (domains, HTTP); it's not the same category of issue.
simoncion 3 hours ago
wasmitnetzen 10 hours ago
I've blown fairly competent colleagues' minds multiple times by showing them the existence of certificate transparency logs. They were very much under the impression that hostnames can be kept secret as a protection against external infrastructure mapping.
TZubiri 3 hours ago
Can't it? If you get a wildcard certificate?
Otherwise if you are getting a domain specific certificate, you are obviously giving your cert provider the domains, and why would you assume it would be secret?
OptionOfT 15 hours ago
TLS 1.3 has Encrypted Client Hello, which encrypts the domain name during an HTTPS connection.
TZubiri 3 hours ago
That's one of those features that's not quite standard, but risks getting into paranoid threat models, like DNS over HTTPS, residential proxies, Tor.
voidUpdate 14 hours ago
> "So, no one competent is going to do this"
What about all the people who are incompetent?
dcrazy 17 hours ago
Slightly surprised that this blog seems to have succumbed to inbound traffic.
unsnap_biceps 16 hours ago
If you're on an Apple device, disable Private Relay. It appears the blog has tar-pitted Private Relay traffic.
bhaney 15 hours ago
It's tar pitting my normal unproxied residential traffic too
computerfriend 14 hours ago
daveoc64 11 hours ago
Rachel has blogged quite a bit about blocking badly behaved RSS Clients in recent years.
I'd link you to one of the articles if I wasn't blocked too, and my VPN wasn't also blocked!
lapcat 10 hours ago
> Rachel has blogged quite a bit about blocking badly behaved RSS Clients in recent years.
Unfortunately that blocking is buggy and overzealous.
I just gave up eventually and unsubscribed from the RSS feed.
that_lurker 17 hours ago
Opens fine for me
urbandw311er 14 hours ago
“Works on my machine”
ck2 8 hours ago
that's actually a great spy trap idea, no?
create an impossible internal hostname and watch for it to come back to you
you don't even need a real TLD if I am not mistaken, use .ZZZ etc
happyopossum 2 hours ago
> you don't even need a real TLD if I am not mistaken, use .ZZZ etc
if it's not a real TLD, you won't ever see the dns requests coming to you...
fragmede 16 hours ago
This highlights a huge problem with Let's Encrypt and CT logs, which is that the Internet is a bad place, with bad people looking to take advantage of you. If you use Let's Encrypt for SSL certs (which you should), that hostname gets published to the world, and that server immediately gets pummeled by requests for all sorts of fresh-install pages, like wp-admin or phpmyadmin, from attackers.
Gigachad 20 minutes ago
Unsecured fresh install states that rely on you signing in before an attacker does were always a horrible idea. It's been a welcome change on the Linux side where Linux distros can install with your SSH key and details preloaded so password login is always disabled.
These PHP apps need to change so that you boot the app with credentials from the start, so the app is secured at all times.
ale42 14 hours ago
It's not just Let's Encrypt, right? CT is a requirement for all Certificate Authorities nowadays. You can just look at the certificate of www.google.com and see that it has been published to two CT logs (Google's and Sectigo's)
nottorp 10 hours ago
Now I get why they want to reduce certificate validity to 20 minutes. The logs will become so spammy then that the bad guys won't be able to scan all hosts in them any more...
tialaramex 13 hours ago
Technically logging certificates is not a Requirement of the trust stores, but most web browsers won't accept a certificate which isn't presented with a proof of logging, typically (but not always) baked inside the certificates.
The reason for this distinction is that failing to meet a Requirement for issued certificates would mean the trust stores might remove your CA, but several CAs today do issue unlogged certificates - and if you wanted to use those on a web server you would need to go log them and staple the proofs to your certs in the server configuration.
Most of the rules (the "Baseline Requirements" or BRs) are requirements and must be followed for all issued certificates, but the rule about logging deliberately doesn't work that way. The BRs do require that a CA can show us - if asked - everything about the certificates they issued, and these days for most CAs that's easiest accomplished by just providing links to the logs e.g. via crt.sh -- but that requirement could also be fulfilled by handing over a PDF or an Excel sheet or something.
krautsauer 16 hours ago
That may be related, but it's not what happened here. Wildcard-cert and all.
prmoustache 11 hours ago
Why would you care that your hostname on a local-only domain is published to the world if it is not reachable from outside? Publicly available hosts are already published to the world anyway through DNS.
LetsEncrypt doesn't make a difference at all.
thakoppno 16 hours ago
> the Internet is a bad place
FWIW - it’s made of people
TZubiri 16 hours ago
No, it's made by systems made by people, systems which might have grown and mutated so many times that the original purpose and ethics might be unrecognizable to the system designers. This can be decades in the case of tech like SMTP, HTTP, JS, but now it can be days in the era of Moltbots and vibecoding.
Spivak 16 hours ago
I like only getting *.domain for this reason. No expectation of hiding the domain but if they want to figure out where other things are hosted they'll have to guess.
ttoinou 16 hours ago
So how do you get this ?
rossy 16 hours ago
hsbauauvhabzb 16 hours ago
That's really not a great fix. If those hostnames leak, they leak forever. I'd be surprised if AV solutions and/or Windows aren't logging these things.
jesterson 16 hours ago
> If you use LetsEncrypt for ssl certs (which you should)
You meant you shouldn't right? Partially exactly for the reasons you stated later in the same sentence.
josh3736 16 hours ago
Let's Encrypt has nothing to do with this problem (of Certificate Transparency logs leaking domain names).
CA/B Forum policy requires every CA to publish every issued certificate in the CT logs.
So if you want a TLS certificate that's trusted by browsers, the domain name has to be published to the world, and it doesn't matter where you got your certificate, you are going to start getting requests from automated vulnerability scanners looking to exploit poorly configured or un-updated software.
Wildcards are used to work around this, since what gets published is *.example.com instead of nas.example.com, super-secret-docs.example.com, etc — but as this article shows, there are other ways that your domain name can leak.
So yes, you should use Let's Encrypt, since paying for a cert from some other CA does nothing useful.
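If you do go the wildcard route, the one wrinkle is that wildcard issuance requires the DNS-01 challenge; a certbot sketch (domain is a placeholder, and in practice you'd use the DNS plugin for your provider instead of --manual so renewal can be automated):

  certbot certonly --manual --preferred-challenges dns \
    -d 'example.com' -d '*.example.com'

What lands in the CT logs is then just example.com and *.example.com, not the individual hostnames behind it.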
tialaramex 2 hours ago
jesterson 15 hours ago
ranger_danger 17 hours ago
Pennywise found my hostname? We're cooked.
defrost 17 hours ago
You're IT, I'm IT, We're all IT.
bonesss 15 hours ago
We all use floats down here.
ahoka 10 hours ago
TeapotNotKettle 17 hours ago
Misconfigured clown - bad news indeed.
renewiltord 15 hours ago
Haha, this obtuse way of speech is such a classic FAANG move. I wonder if it’s because of internal corporate style comms. Patio11 also talks like this. Maybe because Stripe is pretty much a private FAANG.
rini17 10 hours ago
Fancy web interfaces are a road to hell. Do the simplest thing that works: plain Apache or nginx with WebDAV and basic auth (proven code, minimal attack surface). Maybe a firewall with ip_hashlimit on new connections. I have it set to 2/minute, and for a browser it's actually fine, while moronic bots make a new connection for every request. When they improve, there's always fail2ban.
That the NAS server, incl. its hostname, being public does not bother me then.
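A sketch of how small that can be with nginx (paths and the htpasswd file are placeholders; full WebDAV client support needs the third-party dav_ext module for PROPFIND/OPTIONS, so treat this as the idea rather than a drop-in config):

  location /dav/ {
      root /srv/storage;
      dav_methods PUT DELETE MKCOL COPY MOVE;
      create_full_put_path on;
      client_max_body_size 0;          # don't cap upload size
      auth_basic "storage";
      auth_basic_user_file /etc/nginx/htpasswd;
  }

A few dozen lines of battle-tested server config instead of a vendor web UI that phones home.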