A proposal to restrict sites from accessing a user's local network (github.com)

650 points by doener 2 days ago

mystifyingpoi 2 days ago

I like this at first glance. The idea of a random website probing arbitrary local IPs (or any IPs, for that matter) with HTTP requests is insane. I wouldn't care if it breaks some enterprise apps or integrations - enterprises could re-enable this "feature" via management tools, and normal users could configure it themselves; just show a popup: "this website wants to control local devices - allow/deny".

buildfocus 2 days ago

This is a misunderstanding. Local network devices are protected from random websites by CORS, and have been for many years. It's not perfect, but it's generally quite effective.

The issue is that CORS gates access only on the consent of the target server. It must return headers that opt into receiving requests from the website.

This proposal aims to tighten that, so that even if the website and the network device both actively want to communicate, the user's permission is also explicitly requested. Historically we assumed server & website agreement was sufficient, but Facebook's recent tricks where websites secretly talked to apps on your phone have broken that assumption - websites might be in cahoots with local network servers to work against your interests.

xp84 2 days ago

Doesn't CORS just restrict whether the webpage JS context gets to see the response of the target request? The request itself happens anyway, right?

So the attack vector that I can imagine is that JS on the browser can issue a specially crafted request to a vulnerable printer or whatever that triggers arbitrary code execution on that other device. That code might be sufficient to cause the printer to carry out your evil task, including making an outbound connection to the attacker's server. Of course, the webpage would not be able to discover whether it was successful, but that may not be important.
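The blind request described above can be sketched as follows; this is a hypothetical illustration (the printer URL and payload are invented, nothing here names a real device API). With `mode: 'no-cors'` the page can never read the response, but the browser still delivers the request to the LAN target.

```javascript
// Hypothetical sketch: a page firing a blind "simple" request at a
// guessed local address, despite CORS. The URL and payload are made up.
function blindProbe(url, payload) {
  const init = {
    method: 'POST',
    mode: 'no-cors', // opaque response: fire-and-forget
    headers: { 'Content-Type': 'text/plain' }, // keeps it a "simple" request
    body: payload,
  };
  // A real page would now call fetch(url, init) and ignore the result.
  return { url, init };
}
```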

jonchurch_ a day ago

tombakt 2 days ago

nbadg 2 days ago

sidewndr46 a day ago

This is also a misunderstanding. CORS only applies to the Layer 7 communication; the rest you can figure out from the timing of that.

Significant components of the browser, such as WebSockets, have no such restrictions at all.

James_K a day ago

afiori a day ago

rnicholus 2 days ago

CORS doesn’t protect you from anything. Quite the opposite: it _allows_ cross origin communication (provided you follow the spec). The same origin policy is what protects you.

spr-alex a day ago

I made a CTF challenge 3 years ago that demonstrates that local devices are not so protected. exploitv99 bypasses PNA with timing, as the other commenter points out.

https://github.com/adc/ctf-midnightsun2022quals-writeups/tre...

friendzis a day ago

> The issue is that CORS gates access only on the consent of the target server. It must return headers that opt into receiving requests from the website.

False. CORS only gates non-simple requests (via an OPTIONS preflight); simple requests are sent regardless of CORS config, with no gating whatsoever.

Aeolun 2 days ago

How would Facebook do that? They scan all likely local ranges for what could be your phone, and have a web server running on the phone? That seems more like a problem of allowing the phone app to start something like that and keep it running in the background.

londons_explore a day ago

esnard a day ago

Do you have a link talking about those Facebook's recent tricks? I think I missed that story, and would love to read an analysis about it

JimDabell a day ago

IshKebab 2 days ago

I think this can be circumvented by DNS rebinding, though your requests won't have the authentication cookies for the target, so you would still need some kind of exploit (or a completely unprotected target).

MBCook 2 days ago

hsbauauvhabzb 2 days ago

CORS prevents the site from accessing the response body. In some scenarios a website could, for example, blindly attempt to authenticate to your router and modify settings by guessing your router's brand/model and password.

ameliaquining 2 days ago

Is this kind of attack actually in scope for this proposal? The explainer doesn't mention it.

h4ck_th3_pl4n3t 19 hours ago

> Local network devices are protected from random websites by CORS

C'mon. We all know that 99% of the time, Access-Control-Allow-Origin is set to * and not to the specific IP of the web service.

Also, CORS is not in the control of the user while the proposal is. And that's a huge difference.

ars 17 hours ago

> but Facebook's recent tricks where websites secretly talked to apps on your phone have broken that assumption - websites might be in cahoots with local network servers to work against your interests.

This isn't going to help for that. The locally installed app, and the website, can both, independently, open a connection to a 3rd party. There's probably enough fingerprinting available for the 3rd party to be able to match them.

kmeisthax 2 days ago

THE MYTH OF "CONSENSUAL" REQUESTS

Client: I consent

Server: I consent

User: I DON'T!

ISN'T THERE SOMEBODY YOU FORGOT TO ASK?

cwillu 2 days ago

jm4 2 days ago

This sounds crazy to me. Why should websites ever have access to the local network? That presents an entirely new threat model for which we don’t have a solution. Is there even a use case for this for which there isn’t already a better solution?

loaph 2 days ago

I've used https://pairdrop.net/ before to share files between devices on the same LAN. It obviously wouldn't have to be a website, but it's pretty convenient since all my devices I wanted to share files on already have a browser.

A4ET8a8uTh0_v2 a day ago

necovek a day ago

Not a local network, but localhost example: due to the lousy private certificate capability APIs in web browsers, this is commonly used for signing with electronic IDs for countries issuing smartcard certificates for their citizens (common in Europe). Basically, a web page would contact a web server hosted on localhost which was integrated with PKCS library locally, providing a signing and encryption API.

One of the solutions in the market was open source up to a point (Nowina NexU), but it seems it's gone from GitHub

For local network, you can imagine similar use cases — keep something inside the local network (eg. an API to an input device; imagine it being a scanner), but enable server-side function (eg. OCR) from their web page. With ZeroConf and DHCP domain name extensions, it can be a pretty seamless option for developers to consider.

Thorrez a day ago

>Why should websites ever have access to the local network?

It's just the default. So far, browsers haven't really given different IP ranges different security.

evil.com is allowed to make requests to bank.com. Similarly, evil.com is allowed to make requests to foo.com even if foo.com's DNS resolves to 127.0.0.1.

chuckadams a day ago

EvanAnderson a day ago

> Is there even a use case for this for which there isn’t already a better solution?

I deal with a third-party hosted webapp that enables extra functionality when a webserver hosted on localhost is present. The local webserver exposes an API allowing the application to interact more closely with the host OS (think locally-attached devices and servers on the local network). If the locally-installed webserver isn't present, the hosted app hides the extra functionality.

Limiting browser access to the localhost subnet (127.0.0.0/8) would be fine with me, as a sysadmin, so long as I have the option to enable it for applications where it's desired.

Thorrez a day ago

>That presents an entirely new threat model for which we don’t have a solution.

What attack do you think doesn't have a solution? CSRF attacks? The solution is CSRF tokens, or checking the Origin header, same as how non-local-network sites protect against CSRF. DNS rebinding attacks? The solution is checking the Host header.
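Both defenses mentioned above can be sketched as one request filter. The header names are standard; the allowlists are hypothetical examples for a LAN device:

```javascript
// Sketch of the two checks described above, for a service on a LAN.
// ALLOWED_HOSTS defeats DNS rebinding (the Host header must name us);
// ALLOWED_ORIGINS defeats CSRF from public pages.
const ALLOWED_HOSTS = new Set(['printer.lan', '192.168.1.50']);
const ALLOWED_ORIGINS = new Set(['http://printer.lan']);

function isRequestAllowed(headers) {
  const host = (headers.host || '').split(':')[0]; // strip any :port
  if (!ALLOWED_HOSTS.has(host)) return false;      // rebinding attempt
  // Browsers attach Origin to cross-origin and most state-changing requests.
  if (headers.origin && !ALLOWED_ORIGINS.has(headers.origin)) return false;
  return true;
}
```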

charcircuit 2 days ago

>for which we don’t have a solution

It's called ZTA, Zero Trust Architecture. Devices shouldn't assume the LAN is secure.

udev4096 10 hours ago

esseph 11 hours ago

lucideer 2 days ago

> normal users could configure it themselves, just show a popup "this website wants to control local devices - allow/deny".

MacOS currently does this (per app, not per site) & most users just click yes without a second thought. Doing it per site might create a little more apprehension, but I imagine not much.

mastazi 2 days ago

Do we have any evidence that most users just click yes?

My parents who are non-technical click no by default to everything, sometimes they ask for my assistance when something doesn't work and often it's because they denied some permission that is essential for an app to work e.g. maybe they denied access to the microphone to an audio call app.

Unless we have statistics, I don't think we can make assumptions.

technion 2 days ago

Aeolun 2 days ago

lucideer a day ago

paxys 2 days ago

People accept permission prompts from apps because they consciously downloaded the app and generally have an idea about the developer and what the app does. If a social media app asks for permission to access your photos it's easy to understand why; same with a music streamer wanting to connect to your smart speaker.

A random website someone linked me to wanting to access my local network is a very different case. I'm absolutely not giving network or location or camera or any other sort of access to websites except in very extreme circumstances.

poincaredisk 2 days ago

lucideer a day ago

lxgr a day ago

And annoyingly, for some reason it does not remember this decision properly. Chrome asks me about local access every few weeks, it seems.

Yes, as a Chromecast user, please do give me a break from the prompts, macOS – or maybe just show them for Airplay with equal frequency and see how your users like that.

grokkedit 2 days ago

The problem is: without allowing it, web UIs like Synology's won't work, since they require your browser to connect to the local network. As it is, it's not great.

planb 2 days ago

jay_kyburz 2 days ago

mystified5016 2 days ago

I can't believe that anyone still thinks a popup permission modal offers any type of security. Windows UAC has shown quite definitively that users will always click through any modal in their way without thought or comprehension.

Besides that, approximately zero laypersons will have even the slightest clue what this permission means, the risks involved, or why they might want to prevent it. All they know is that the website they want is not working, and the website tells them to enable this or that permission. They will all blindly enable it every single time.

ameliaquining 2 days ago

knome a day ago

lxgr a day ago

A4ET8a8uTh0_v2 a day ago

xp84 2 days ago

broguinn a day ago

This Web Security lecture by Feross Aboukhadijeh has a great example of Zoom's zero-day from 2019 that allowed anyone to force you to join a zoom meeting (and even cause arbitrary code execution), using a local server:

https://www.youtube.com/watch?v=wLgcb4jZwGM&list=PL1y1iaEtjS...

It's not clear to me from Google's proposal if it also restricts access to localhost, or just your local network - it'd be great if it were both, as we clearly can't rely on third parties to lock down their local servers sufficiently!

edit: localhost won't be restricted:

"Note that local -> local is not a local network request, as well as loopback -> anything. (See "cross-origin requests" below for a discussion on potentially expanding this definition in the future.)"

Thorrez a day ago

>edit: localhost won't be restricted:

It will be restricted. This proposal isn't completely blocking all localhost and local IPs. Rather, it's preventing public sites from communicating with localhost and local IPs. E.g:

* If evil.com makes a request to a local address it'll get blocked.

* If evil.com makes a request to a localhost address it'll get blocked.

* If a local address makes a request to a localhost address it'll get blocked.

* If a local address makes a request to a local address, it'll be allowed.

* If a local address makes a request to evil.com it'll be allowed.

* If localhost makes a request to a localhost address it'll be allowed.

* If localhost makes a request to a local address, it'll be allowed.

* If localhost makes a request to evil.com it'll be allowed.
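The bullets above reduce to a single rule; a sketch, with zone names following the proposal's "public / local / loopback" address spaces: a request is gated only when it crosses into a more-private address space.

```javascript
// Rank address spaces by privacy; a request is a "local network request"
// (and gets gated) only when the target is more private than the initiator.
const RANK = { public: 0, local: 1, loopback: 2 };

function isLocalNetworkRequest(initiator, target) {
  return RANK[target] > RANK[initiator];
}
```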

broguinn 20 hours ago

socalgal2 2 days ago

I wish they (Apple/Microsoft/Google/...) would do similar things for USB and Bluetooth.

Lately, every app I install wants Bluetooth access to scan all my Bluetooth devices. I don't want that. At most, I want the app to have to declare in its manifest some specific device IDs (a short list) that the app is allowed to connect to, and have the OS limit its connections to only those devices. For example, the Bose app should only be able to see Bose devices, nothing else. The CVS (pharmacy) app should only be able to connect to CVS devices, whatever those are. All I know is the app asked for permission. I denied it.

I might even prefer if it had to register the device IDs and then the user were prompted, the same way camera/GPS access is prompted via the OS. The OS might see a device that the CVS app registered for in its manifest, and pop up "CVS app would like to connect to device ABC? Just this once / only when the app is running / always" (similar to the way iOS handles location).

By ID, I mean some prefix that a company registers for its devices: bose.xxx, so the app's manifest says it wants to connect to "bose.*" and the OS filters.

Similarly for USB and maybe local network devices: come up with an ID scheme, and have the OS prevent apps from connecting to anything not matching that ID. Effectively, don't let apps browse the network, USB, or Bluetooth.
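The prefix-registration scheme described above could look like this; a sketch where the IDs and the manifest shape are invented purely for illustration:

```javascript
// Hypothetical: an app's manifest declares device-ID patterns it may use,
// and the OS filters scan results down to matching devices only.
const manifest = { allowedDevices: ['bose.*', 'cvs.kiosk'] };

function mayConnect(manifest, deviceId) {
  return manifest.allowedDevices.some((pattern) =>
    pattern.endsWith('.*')
      ? deviceId.startsWith(pattern.slice(0, -1)) // prefix match: "bose."
      : deviceId === pattern);                    // exact match
}
```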

3eb7988a1663 2 days ago

I am still holding out hope that eventually at least Apple will offer fake permission grants to applications. Oh, app XYZ "needs" to see my contact list to proceed? Well it gets a randomized fake list, indistinguishable from the real one. Similar with GPS.

I have been told that WhatsApp does not let you name contacts without sharing your address book back to Facebook.

ordu 10 hours ago

Yeah. I'd like it too. I can't use my bank's app, because it wants some weird permissions like an access to contacts, I refuse to give them, because I see no use in it for me, and it refuses to work.

yonatan8070 9 hours ago

Also for the camera, just feed them random noise or a user-selectable image/video

nothrabannosir 2 days ago

In iOS you can share a subset of your contacts. This is functionally equivalent and works as you described for WhatsApp.

shantnutiwari a day ago

WhyNotHugo a day ago

baobun a day ago

GrapheneOS has this feature (save for faking GPS) fwiw

quickthrowman a day ago

Apps are not allowed to force you to share your contacts on iOS, report any apps that are asking you to do so as it’s a violation of the App Store TOS.

totetsu 2 days ago

Like the github 3rd party application integration. "ABC would like to see your repositories, which ones do you want to share?"

kuschku a day ago

Does that UI actually let you choose? IME it just tells me what orgs & repos will be shared, with no option to choose.

rjh29 a day ago

Safari doesn't support Web MIDI, apparently for this reason (fingerprinting), but that makes using any kind of MIDI web app impossible.

Thorrez a day ago

Are you talking about web apps, mobile apps, desktop apps, or browser extensions?

socalgal2 19 hours ago

All of them.

Thorrez 2 hours ago

_bent a day ago

Apple does this for iOS 18 via the AccessorySetupKit

bsder 19 hours ago

> Lately, every app I install, wants bluetooth access to scan all my bluetooth devices.

Blame Apple and Google and their horrid BLE APIs.

An app generally has to request "ALL THE PERMISSIONS!" to get RSSI, which most apps use as a (really stupid, bug-prone, broken) proxy for distance.

What everybody wants is "time of flight"--but for some reason that continues to be mostly unsupported.

paxys 2 days ago

It's crazy to me that this has always been the default behavior for web browsers. A public website being able to silently access your entire filesystem would be an absurd security hole. Yet all local network services are considered fair game for XHR, and security is left to the server itself. If you are a developer and run your company's webapp on your dev machine for testing (with loose or non-existent security defaults), facebook.com or google.com or literally anyone else could be accessing it right now. Heck, think of everything people deploy unauthed on their home network because they trust their router's firewall. Does every one of them have the correct CORS configuration?

3abiton a day ago

I majored in CS and I had no idea this was possible: public websites you visit have access to your local network. I need some time to process this. Besides what is suggested in the post, are there any ways to limit this abusive access?

Too a day ago

What’s even crazier is that nobody learned this lesson and new protocols are created with the same systematic vulnerabilities.

Talking about MCP agents if that’s not obvious.

thaumasiotes a day ago

> Does every one of them have the correct CORS configuration?

I would guess it's closer to 0% than 0.1%.

reassess_blind a day ago

The local server has to send Access-Control-Allow-Origin: * for this to work, right?

Are there any common local web servers or services that use that as the default? Not that it’s not concerning, just wondering.

meindnoch a day ago

No, simple requests [1] - such as a GET request, or a POST request with text/plain Content-Type - don't trigger a CORS preflight. The request is made, and the browser may block the requesting JS code from seeing the response if the necessary CORS response header is missing. But by that point the request has already been made. So if your local service has a GET endpoint like http://localhost:8080/launch_rockets, or a POST endpoint that doesn't strictly validate the body's Content-Type, then any website can trigger it.

[1] https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/COR...
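The rules from [1] boil down to a small check. A simplified sketch (it omits some details, e.g. the safelisted-header and Range rules):

```javascript
// Simplified classifier for CORS "simple" requests: these are sent
// without any OPTIONS preflight, so the target server receives them
// even if it never opted in via CORS headers.
const SIMPLE_METHODS = new Set(['GET', 'HEAD', 'POST']);
const SIMPLE_CONTENT_TYPES = new Set([
  'application/x-www-form-urlencoded', 'multipart/form-data', 'text/plain',
]);

function isSimpleRequest(method, contentType) {
  if (!SIMPLE_METHODS.has(method)) return false;
  // Content-Type is the usual reason a POST gets preflighted.
  if (contentType && !SIMPLE_CONTENT_TYPES.has(contentType.toLowerCase()))
    return false;
  return true;
}
```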

reassess_blind a day ago

pacifika 2 days ago

Internet Explorer solved this with their zoning system right?

https://learn.microsoft.com/en-us/previous-versions/troubles...

donnachangstein 2 days ago

Ironically, Chrome partially supported and utilized IE security zones on Windows, though it was not well documented.

pacifika 2 days ago

Oh yeah forgot about that, amazing.

bux93 a day ago

Although those were typically used to give ActiveX controls on the intranet unfettered access to your machine because IT put it in the group policy. Fun days.

nailer 2 days ago

Honestly I just assumed a modern equivalent existed. That it doesn’t is ridiculous. Local network should be a special permission like the camera or microphone.

sroussey 2 days ago

I guess this would help with Meta's sneaky identification-code sharing, where native apps and websites with their SDK embedded communicate serendipitously through localhost, particularly on Android.

[0] https://www.theregister.com/2025/06/03/meta_pauses_android_t...

will4274 12 hours ago

surreptitiously

skybrian 2 days ago

While this will help to block many websites that have no business making local connections at all, it's still very coarse-grained.

Most websites that need this permission only need to access one local server. Granting them access to everything violates the principle of least privilege. Most users don't know what's running on localhost or on their local network, so they won't understand the risk.

paxys 2 days ago

> Most users don't know what's running on localhost or on their local network, so they won't understand the risk.

Yes, which is why they also won't understand when the browser asks if you'd like to allow the site to visit http://localhost:3146 vs http://localhost:8089. A sensible permission message ("allow this site to access resources on your local network") is better than technical mumbo jumbo which will make them just click "yes" in confusion.

xp84 2 days ago

Either way they'll click "yes" as long as the attacker site properly primes them for it.

For instance, on the phishing site they clicked on from an email, they'll first be prompted like:

"Chase need to verify your Local Network identity to keep your account details safe. Please ensure that you click "Yes" on the following screen to confirm your identity and access account."

Yes, that's meaningless gibberish but most people would say:

• "Not sure what that means..."

• "I DO want to access my account, though."

kevincox a day ago

cpburns2009 4 hours ago

While that message has less jargon, most users still won't understand what "resources on your local network" means. They'll blindly accept it.

derefr 2 days ago

In an ideal world, the browser could act as an mDNS client, discovering local services, so that it could then show the pretty name of the relevant service in the security prompt.

In the world we live in, of course, almost nothing on your average LAN has an associated mDNS service advertisement.

mixmastamyk a day ago

skybrian 2 days ago

On a phone at least, it should be "do you want to allow website A to connect to app B."

(It's harder to do for the rest of the local network, though.)

nine_k 2 days ago

A comprehensive implementation would be a firewall: which CIDRs, which ports, etc.

I wish there were an API to build such a firewall, e.g. as part of a browser extension, but also a simple default UI allowing the user to give access to a particular machine (e.g. the router), to the LAN, to a VPN (based on the routing table), or to "private networks" in general, in the sense Windows ascribes to that. Also access to localhost, separately. The site could ask for one of these categories explicitly.

kuschku a day ago

> I wish there were an API to build such a firewall, e.g. as a part of a browser extension,

There was in Manifest V2, and it still exists in Firefox.

https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/Web...

That's the API Chrome removed with Manifest V3. You can still log all web requests, but you can't block them dynamically anymore.

skybrian a day ago

I think something like Tailscale is the way to go here.

rerdavies 2 days ago

I worry that there are problems with IPv6. Can anyone explain to me whether there actually is a way to determine if an IPv6 address is site-local? If not, the proposal is going to have problems on IPv6-only networks.

I have struggled with this issue in the past. I have an IoT application whose webserver wants to reject any requests from a non-local address. After failing to find a way to distinguish IPv6 local addresses, I ended up redirecting IPv6 requests to the local IPv4 address. And that was the end of that.

I feel I would be in a better position to raise concerns if I could confirm that my understanding is correct: that there is no practical way for an application to determine whether an IPv6 address is link- or site-local.

I did experiment with IPv6 "link local" addresses, but these seem to be something else altogether (for use by routers rather than general applications), and don't seem to work for regular application use.

There is some wiggle room provided by including .local addresses as local servers. But implementation of .local domains seems to be inconsistent across OSes at present. Raspberry Pi OS, for example, will do mDNS resolution of "some_address" but not of "someaddress.local"; Ubuntu 24.04 will resolve "someaddress.local" but not "someaddress". And neither will resolve "someaddress.local." (which I think was recommended at one point, but is now deprecated and non-functional). Which does seem like an issue worth raising.

And it frustrates the HECK out of me that nobody will allow the use of privately issued certs for local network addresses. The "no HTTPS for local addresses" thing needs to be fixed.
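For what it's worth, there are well-known prefixes one can check: fe80::/10 is link-local, and fc00::/7 is unique-local (ULA, RFC 4193), the closest IPv6 analogue to RFC 1918. A sketch for textual addresses (not a full RFC 4291 parser: no zone IDs, no IPv4-mapped forms):

```javascript
// Rough IPv6 scope check by well-known prefix.
function ipv6Scope(addr) {
  if (addr === '::1') return 'loopback';
  const first = parseInt(addr.split(':')[0], 16); // leading 16-bit group
  if (Number.isNaN(first)) return 'unknown';
  if ((first & 0xffc0) === 0xfe80) return 'link-local';   // fe80::/10
  if ((first & 0xfe00) === 0xfc00) return 'unique-local'; // fc00::/7
  return 'global-or-other';
}
```

Note that ULA presence only tells you the address is not globally routed; whether it is on *your* site still depends on your routing table, which is part of why the proposal's definition of "local" is contentious.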

gerdesj 2 days ago

IPv6 still has the concept of "routable". You just have to decide what site-local means in terms of the routing table.

In old school IPv4 you would normally assign octet two to a site and octet three to a VLAN. Oh and you start with 10.

With IPv6 you have a lot more options.

All IPv6 devices have link local addresses - that's the LAN or local VLAN - a bit like APIPA.

Then you start on .local - that's Apple and DNS and the like and nothing to do with IP addresses. That's name to address.

You can do Let's Encrypt (ACME) for "local network addresses" (I assume you mean RFC 1918 addresses: 10/8, 172.16/12, 192.168/16) - you need to look into DNS-01 and perhaps a DNS CNAME. It does require quite some effort.

There is a very good set of reasons why TLS certs are a bit of a bugger to get working effectively these days. There are solutions freely available but they are also quite hard to implement. At least they are free. I remember the days when even packet capture required opening your wallet.

You might look into acme.sh if Certbot fails to work for you. You also might need to bolt down IP addressing in general, IPv4 vs IPv6 and DNS and mDNS (and Bonjour) as concepts - you seem a little hazy on that lot.

Bonne chance, mate

globular-toast 20 hours ago

HTTPS doesn't care about IP addresses. It's all based on domain names. You can get a certificate for any domain you own. You can also set said domain to resolve to any address you like, including a "local" one.

NAT has rotted people's brains unfortunately. RFC 1918 is not really the way to tell if something is "local" or not. 25 years ago I had 4 publicly routable IPv4 addresses. All 4 of these were "local" to me despite also being publicly routable.

An IP address is local if you can resolve it and don't have to communicate via a router.

It seems too far gone, though. People seem unable to separate RFC 1918 from the concept of "local network".

donnachangstein 2 days ago

> Can anyone explain to me if there is any way to determine whether an inbound IPv6 address is "local"?

No, because it's the antithesis of IPv6 which is supposed to be globally routable. The concept isn't supposed to exist.

Not to mention Google can't even agree on the meaning of "local" - the article states they completely changed the meaning of "local" to be a redefinition of "private" halfway through brainstorming this garbage.

Creating a nonstandard, arbitrary security boundary based on CIDR subnets as an HTTP extension is completely bonkers.

As for your application, you're going about it all wrong. Just assume your application is public-facing and design your security with that in mind. Too many applications make this mistake and design saloon-door security into their "local only" application which results in overreaction such as the insanity that is the topic of discussion here.

".local" is reserved for mDNS and is in the RFC, though this is frequently and widely ignored.

ryanisnan 2 days ago

It's very useful to have this additional information in something like a network address. I agree, you shouldn't rely on it, but IPv6 hasn't clicked with me yet, and the whole "globally routable" concept is one of the reasons. I hear that, and think, no, I don't agree.

donnachangstein 2 days ago

rerdavies 2 days ago

@donnachangstein:

The device is an IoT guitar pedal that runs on a Raspberry Pi. In performance, on stage, a web UI runs on a phone or tablet over a hotspot connection to the Pi, which is NOT internet-connected (since there's no expectation of a Wi-Fi router or internet access at a public venue). Or the Pi runs on a home Wi-Fi network, with a browser-hosted UI on a laptop or desktop. Or, I suppose, over an away-from-home Wi-Fi connection at a studio or rehearsal space.

It is not reasonable to expect my users to purchase domain names and certs for their $60 guitar pedal, which are not going to work anyway, if they are playing away from their home network. Nor is ACME provisioning an option because the device may be in use but unconnected to the internet for months at a time if users are using the Pi Hotspot at home.

I can't use password authentication to get access to the Pi web server, because I can't use HTTPS to conceal the password, and browsers disable access to the JavaScript crypto APIs on non-HTTPS pages (not that I'd really trust myself to write JavaScript code to obtain auth tokens from the Pi server anyway), so doing auth over an HTTP connection doesn't really strike me as a serious option either.

Nor is it reasonable to expect my non-technical users to spend hours configuring their networks. It's an IoT device that should be just drop and play (maybe with a one-time device setup that takes place on the Pi).

There is absolutely NO way I am going to expose the server to the open internet without HTTPS and password authentication. The server provides a complex API to the client over which effects are configured and controlled. Way too much surface area to allow anyone on the internet to poke around in. So it uses IPv4 isolation, which is the best I can figure out given the circumstances. It's not like I haven't given the problem serious consideration. I just don't see a solution.

The use case is not hugely different from an IoT toothbrush. But standards organizations have chosen to leave both my (hypothetical) toothbrush and my application utterly defenseless when it comes to security. Is it any surprise that IoT toothbrushes have security problems?

How would YOU see https working on a device like that?

> ".local" is reserved for mDNS and is in the RFC, though this is frequently and widely ignored.

Yes. That was my point. It is currently widely ignored.

mixmastamyk a day ago

AStonesThrow 2 days ago

> can't even agree on the meaning of "local"

Well, who can agree on this? Local network, private network, intranet, Tailscale and VPN, Tor? IPv6 ULA, NAT/CGNAT, SOCKS, transparent proxy? What resources are "local" to me and what resources are "remote"?

This is quite a thorny and sometimes philosophical question. Web developers are working at the OSI Layer 6-7 / TCP/IP Application Layer.

https://en.wikipedia.org/wiki/OSI_model#Comparison_with_TCP/...

Now even cookies and things like CSRF were trying to differentiate "servers" and "origins" and "resources" along the lines of the DNS hierarchy. But this has been fraught with complication, because DNS was not intended to delineate such things, and can't do so cleanly 100% of the time.

Now these proposals are trying to reach even lower in the OSI model - Layer 3, Layer 2. If you're asking "what is on my LAN" or "what is a private network", that is not something that HTTPS or web services are supposed to know. Are you going to ask them to delve into your routing table or test the network interfaces? HTTPS was never supposed to know about your netmask or your next-hop router.

So this is only one reason that there is no elegant solution for the problem. And it has been foundational to the way the web was designed: "given a uniform locator, find this resource wherever it may be, whenever I request it." That was a simpler proposition when the Web was used to publish interesting and encyclopedic information, rather than deliver applications and access sensitive systems.

G_o_D 2 days ago

CORS doesn't stop POST requests, nor fetch with 'no-cors' supplied in JavaScript. It's that you can't read the response; that doesn't mean the request isn't sent by the browser.

Then again, a local app can run a server with a proxy that adds CORS headers to the proxied responses, and you can access any site via the JS fetch/XMLHttpRequest interface. Even an extension is able to modify headers to bypass CORS.

Bypassing CORS is just a matter of editing headers. What's really hard, or impossible, to bypass is CSP rules.

Now, the Facebook app itself was running such a CORS proxy server. Even without it, a normal HTTP or WebSocket server is enough to send metrics.

Chrome already has a flag to prevent localhost access, but as said, WebSockets can still be used.

Completely banning localhost is detrimental. Many users use self-hosted bookmarking apps, note apps, and password-manager-like solutions that rely on a local server.

1vuio0pswjnm7 17 hours ago

Explainer by non-Googler

Is the so-called "modern" web browser too large and complex

I never asked for stuff like "websockets"; I have to disable it, why

I still prefer a text-only browser for reading HTML; it does not run Javascript, it does not do websockets, CSS, images or a gazillion other things; it does not even autoload resources

It is relatively small, fast and reliable; very useful

It can read larger HTML files that make so-called "modern" web browsers choke

It does not support online ad services

The companies like Google that force ads on www users are known for creating problems for www users and then proposing solutions to them; why not just stop creating the problems

1vuio0pswjnm7 12 hours ago

Text-only browsers are not a "solution". That is not the point of the comment. Such simpler clients are not a problem.

The point is that gigantic, overly complex "browsers" designed for surveillance and advertising are the problem. They are not a solution.

HumanOstrich 13 hours ago

Going back to text-only browsers is not the solution.

ronsor 2 days ago

Do note that since the removal of NPAPI plugins years ago, locally-installed software that intends to be used by one or more public websites has to run an HTTP server on localhost.

It would be really annoying if this use case was made into an unreasonable hassle or killed entirely. (Alternatively, browser developers could've offered a real alternative, but it's a bit late for that now.)

michaelt 2 days ago

Doesn't most software just register a protocol handler with the OS? Then a website can hand the browser a zoommtg:// link, which the browser opens with zoom ?

Things like Jupyter Notebooks will presumably be unaffected by this, as they're not doing any cross-origin requests.

And likewise, when a command line tool wants you to log in with oauth2 and returns you to a localhost URL, it's a simple redirect not a cross-origin request, so should likewise be allowed?

kuschku 2 days ago

A common use case, whether for 3D printers, switches, routers, or NAS devices is that you've got a centrally hosted management UI that then sends requests directly to your local devices.

This allows you to use a single centrally hosted website as user interface, without the control traffic leaving your network. e.g. Plex uses this.

michaelt 2 days ago

hypercube33 2 days ago

ronsor 2 days ago

That works if you want to launch an application from a website, but it doesn't work if you want to actively communicate with an application from a website.

fn-mote 2 days ago

RagingCactus 2 days ago

I don't believe this is true, as https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/Web... exists. It does need an extension to be installed, but I think that's fair in your comparison with NPAPI.

IshKebab 2 days ago

It would be amazing if that method of communicating with a local app was killed entirely, because it's been a very very common source of security vulnerabilities.

ImPostingOnHN 2 days ago

> locally-installed software that intends to be used by one or more public websites has to run an HTTP server on localhost

if that software runs with a pull approach, instead of a push one, the server becomes unnecessary

bonus: then you won't have websites grossly probing local networks that aren't theirs (ew)

rhdunn a day ago

It's become harder to view HTML and XML files with XSLT by just opening them in a web browser (things like NUnit test-run output). To view these properly now -- to get the CSS, XSLT, images, etc. to load -- you typically have to run a web server at that file path.

Note: this is why the viewers for these tools will spin up a local web server.

With local LLMs and AI it is now common to have different servers for different tasks (LLM, TTS, ASR, etc.) running together, where they need to communicate to be able to create services like local assistants. I don't want to have to jump through the hoops of running these over SSL (including getting a self-signed cert trusted), etc. just to be able to run a local web service.

ImPostingOnHN a day ago

angst 2 hours ago

i suppose i can live with this as long as the browser will explicitly ask me to enable that permission they are talking about adding and not just silently deny it...

AdmiralAsshat 2 days ago

uBlock / uMatrix does this by default, I believe.

I often see sites like Paypal trying to probe 127.0.0.1. For my "security", I'm sure...

potholereseller 2 days ago

It appears to not have been enabled by default on my instance of uBlock; it seems a specific filter list is used to implement this [0]; that filter was un-checked; I have no idea why. The contents of that filter list are here [1]; notice that there are exceptions for certain services, so be sure to read through the exceptions before enabling it.

[0] Filter Lists -> Privacy -> Block Outsider Intrusion into Lan

[1] <https://github.com/uBlockOrigin/uAssets/blob/master/filters/...>

reyqn a day ago

This filter broke twitch for me. I had to create custom rules for twitch if I wanted to use it with this filter enabled.

apazzolini a day ago

nickcw 2 days ago

This has the potential to break rclone's OAuth mechanism, as it relies on setting the redirect URL to localhost so that when the OAuth flow is done, rclone (which is running on your computer) gets called.

I guess if the permissions dialog is sensibly worded then the user will allow it.

I think this is probably a sensible proposal but I'm sure it will break stuff people are relying on.

0xCMP 2 days ago

IIUC this should not break redirects. This only affects: (1) fetch/xmlhttprequests (2) resources linked to AND loaded on a page (e.g. images, js, css, etc.)

As noted in another comment, this doesn't work unless the responding server provides proper CORS headers allowing the content to be loaded by the browser in that context: so for any request to work, the server is either wide open (cors: *) or cooperating with the requesting code (cors: website.co). The changes prevent communication without user authorization.
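
As a toy model (function and variable names are mine, and this ignores credentialed-request rules, preflights, and other real CORS details), the gate amounts to a string comparison the browser performs before handing the response to page JS:

```python
from typing import Optional

def response_exposed_to_page(page_origin: str, acao_header: Optional[str]) -> bool:
    """Toy model of the CORS gate: the request reaches the server either way;
    the browser only exposes the response if the server opted in via the
    Access-Control-Allow-Origin response header."""
    if acao_header is None:
        return False  # no header: response stays opaque to the page
    return acao_header == "*" or acao_header == page_origin

# Wide open, cooperating, and uncooperative servers, respectively:
print(response_exposed_to_page("https://website.co", "*"))                   # True
print(response_exposed_to_page("https://website.co", "https://website.co"))  # True
print(response_exposed_to_page("https://website.co", None))                  # False
```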

bmacho a day ago

One of the very few security-inspired restrictions I can wholeheartedly agree with. I don't want random websites to be able to read my localhost. I hope it gets accepted and implemented sooner rather than later.

OTOH it would be cool if random websites were able to open up and use ports on my computer's network, or even on my LAN, when granted permission of course. Browser-based file and media sharing between my devices, or multiplayer games.

avidiax a day ago

> OTOH it would be cool if random websites were able to open up and use ports on my computer's network

That's what WebRTC does. There's no requirement that WebRTC is used to send video and audio as in a Zoom/Meet call.

That's how WebTorrent works.

https://webtorrent.io/faq

AshamedCaptain 2 days ago

I do not understand. Doesn't same-origin prevent all of these issues? Why on earth would you extend some protection to resources based on IP address ranges? It seems like the most dubious criteria of all.

maple3142 a day ago

I think the problem is that some local servers are not really designed to be as secure as a public server. For example, a local server having a stupid unauthenticated endpoint like "GET /exec?cmd=rm+-rf+/*", which is obviously exploitable, and the same-origin policy does not prevent that.
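
To make that concrete (the device address and endpoint here are invented for illustration), the trigger needs nothing but markup; the browser fires the GET as soon as the tag renders, and the unreadable response doesn't matter:

```html
<!-- Hypothetical vulnerable LAN device: the request is sent regardless
     of CORS; the page never needs to read the reply. -->
<img src="http://192.168.1.50/exec?cmd=reboot" style="display:none">
```

A POST variant is just as easy with an auto-submitting form.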

fn-mote 2 days ago

I think you're mistaken about this.

Use case 1 in the document and the discussion made it clear to me.

AshamedCaptain 2 days ago

Browsers allow launching HTTP requests to localhost in the same way they allow my-malicious-website.com to launch HTTP requests to, say, mail.google.com. They can _request_ a resource but that's about it -- everything else, even many things you would expect to be able to do with the downloaded resource, is blocked by the same-origin policy. [1] Heck, we have a million problems already where file:/// websites cannot access resources from http://localhost, and vice versa.

So what's the attack vector exactly? Why would it be able to attack a local device but not attack your Gmail account (with your browser happily sending your auth cookies) or file:///etc/passwd?

The only attack I can imagine is that _the mere fact_ of a webserver existing on your local IP is a disclosure of information for someone, but ... what's the attack scenario here again? The only thing they know is you run a webserver, and maybe they can check if you serve something at a specified location.

Does this even allow identifying the router model you use? Because I can think of a bazillion better ways to do it -- including the simple "just assume it's the default router of that specific ISP for that address".

[1] https://developer.mozilla.org/en-US/docs/Web/Security/Same-o...

In fact, [1] literally says

> [Same-origin policy] prevents a malicious website on the Internet from running JS in a browser to read data from [...] a company intranet (which is protected from direct access by the attacker by not having a public IP address) and relaying that data to the attacker.

AnthonyMouse 2 days ago

junkblocker a day ago

benob a day ago

Isn't it time for disallowing browsers to connect to anything outside same origin pages except for actual navigation?

Servers can do all the hard work of gathering content from here and there.

globular-toast 21 hours ago

Is it possible to do this today with browser extensions? I ran noscript 10 years ago and it was really tough. Kinda felt like being gaslit constantly. I could go back, only enabling sites selectively, but it's not going to work for family. Wondering if just blocking cross origin requests would be more feasible.

udev4096 10 hours ago

Deny any incoming requests using ufw or nftables. Only allow outbound requests by default

foota 2 days ago

The alternative proposal sounds much nicer, but unfortunately was paused due to concerns about devices not being able to support it.

I guess once this is added maybe the proposed device opt in mechanism could be used for applications to cooperatively support access without a permission prompt?

spr-alex a day ago

The existing PNA is easily defeated for bugs that can be triggered with standard cross origin requests. For example PNA does nothing to stop a website from exploiting some EOL devices I have with POST requests and img tags.

This is a much better approach.

profmonocle 2 days ago

Assuming that RFC1918 addresses mean "local" network is wrong. It means "private". Many large enterprises use RFC1918 for private, internal web sites.

One internal site I spend hours a day using has a 10.x.x.x IP address. The servers for that site are on the other side of the country and are many network hops away. It's a big company, our corporate network is very very large.

A better definition of "local IP" would be whether the IP is in the same subnet as the client, i.e. look up the client's own IP and subnet mask and determine if a packet to a given IP would need to be routed through the default gateway.
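
A sketch of that check with Python's ipaddress module (the addresses and masks here are made up; a browser would additionally have to pick the right interface):

```python
import ipaddress

def is_on_local_subnet(target_ip: str, interface_cidr: str) -> bool:
    """True if target_ip is inside the client's own subnet, i.e. a packet
    to it would not be routed through the default gateway."""
    interface = ipaddress.ip_interface(interface_cidr)  # client's addr + mask
    return ipaddress.ip_address(target_ip) in interface.network

# RFC 1918 but many hops away on a big corporate network: not "local".
print(is_on_local_subnet("10.200.1.7", "10.0.5.23/24"))  # False
# Same subnet as the client: "local".
print(is_on_local_subnet("10.0.5.99", "10.0.5.23/24"))   # True
```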

kccqzy 2 days ago

The article spends a lot of effort defining the words "local" and "private" here. It then says:

> Note that local -> local is not a local network request

So your use case won't be affected.

ale42 2 days ago

The computer I use at work (and not only mine, many many of them) has a public IP address. Many internal services are on 10.0.0.0/8. How is this being taken into account?

numpad0 2 days ago

lilyball 2 days ago

jaywee 2 days ago

kccqzy 2 days ago

xp84 2 days ago

Is it a gross generalization to say that if you're visiting a site whose name resolves to a private IP address, it's a part of the same organizational entity as your computer is?

The proposal here would consider that site local and thus allowed to talk to local. What are the implications? Your employer whose VPN you're on, or whose physical facility you're located in, can get some access to the LAN where you are.

In the case where you're a remote worker and the LAN is your private home, I bet that the employer already has the ability to scan your LAN anyway, since most employers who are allowing you onto their VPN do so only from computers they own, manage, and control completely.

EvanAnderson a day ago

> Is it a gross generalization to say that if you're visiting a site whose name resolves to a private IP address, it's a part of the same organizational entity as your computer is?

Yes. That's a gross generalization.

I support applications delivered via site-to-site VPN tunnels hosted by third parties. In the Customer site the application is accessed via an RFC 1918 address. It is not part of the Customer's local network, however.

Likewise, I support applications that are locally-hosted but Internet facing and appear on a non-RFC1918 IP address even though the server is local and part of the Customer's network.

Access control policy really should be orthogonal to network address. Coupling those two will inevitably lead to mismatches to work around. I would prefer some type of user-exposed (and sysadmin-exposed, centrally controllable) method for declaring the network-level access permitted by scripts (as identified by the source domain, probably).

rjmunro a day ago

Don't some internet providers do large-scale NAT (CGNAT), so customers each get a 10.x address instead of a public one? I'm not sure if this is a problem or not. It sounds like it could be.

xp84 a day ago

JdeBP 2 days ago

Many years ago, before it was dropped, IP version 6 had a concept of "site local" addresses, which (if it had applied to version 4) would have encompassed the corporate intranet addresses that you are talking about. Routed within the corporate intranet; but not routed over corporate borders.

Think of this proposal's definition of "local" (always a tricky adjective in networking, and reportedly the proposers here have bikeshedded it extensively) as encompassing both Local Area Network addresses and non-LAN "site local" addresses.

aaronmdjones a day ago

fd00::/8 (within fc00::/7) is still reserved for this purpose (site-local IPv6 addressing).

fc00::/8 (a network block for a registry of organisation-specific assignments for site-local use) is the idea that was abandoned.

Roughly speaking, the following are analogs:

169.254/16 -> fe80::/64 (within fe80::/10)

10/8, 172.16/12, 192.168/16 -> a randomly-generated network (within fd00::/8)

For example, a service I maintain that consists of several machines in a partial WireGuard mesh uses fda2:daf7:a7d4:c4fb::/64 for its peers. The recommendation is no larger than a /48, so a /64 is fine (and I only need the one network, anyway).

fc00::/7 is not globally routable.
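
For anyone wanting to mint such a network, a rough sketch of RFC 4193-style generation (the RFC derives the 40-bit Global ID from a timestamp/EUI-64 hash; plain random bits are a common shortcut, used here):

```python
import ipaddress
import secrets

def random_ula_prefix() -> ipaddress.IPv6Network:
    """Return a random unique local /48: the fd byte followed by a
    40-bit Global ID, the RFC 4193 layout within fd00::/8."""
    global_id = secrets.randbits(40)
    prefix = (0xFD << 120) | (global_id << 80)  # bits below the /48 stay zero
    return ipaddress.IPv6Network((prefix, 48))

ula = random_ula_prefix()
print(ula)  # something like fda2:daf7:a7d4::/48
print(ula.subnet_of(ipaddress.IPv6Network("fc00::/7")))  # True: not globally routable
```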

rerdavies a day ago

qwertox a day ago

Proposing this in 2025. While probably knowing about this problem since Chrome was released (2008).

Why not treat any local access as if it were an access to a microphone?

A4ET8a8uTh0_v2 a day ago

I would love for someone with more knowledge to opine on this, because, to me, it seems like it would actually be the most sane default state.

dadrian a day ago

That is literally what this proposal is suggesting.

G_o_D a day ago

Browsers should just allow per-site settings, or a global allow/deny-all, for permission to access localhost.

That way the user will be in control.

You can already write an extension that blocks access to domains based on origin.

So a user can just add facebook.com as an origin to block all facebook.* sites from sending any request to any registered URL, in this case localhost/127.0.0.1 domains.

The DNR API allows blocking based on initiatorDomains.
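
For reference, a declarativeNetRequest rule along those lines might look like this (the domains are illustrative, and whether requestDomains matches IP-literal hosts like 127.0.0.1 should be verified against current Chrome behavior):

```json
{
  "id": 1,
  "priority": 1,
  "action": { "type": "block" },
  "condition": {
    "initiatorDomains": ["facebook.com"],
    "requestDomains": ["localhost", "127.0.0.1"],
    "resourceTypes": ["xmlhttprequest", "script", "image", "websocket"]
  }
}
```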

qbane a day ago

IIRC Flash has a similar design. One Flash app can access the internet, or local network, but not both.

geekodour a day ago

Just wanted to confirm something: this only applies to HTTP, right? Browsers don't allow arbitrary TCP requests, right?

thesdev 2 days ago

Off-topic: Is the placement of the apostrophe right in the title? Should it be "a users' local network" (current version) or "a user's local network"?

IshKebab 2 days ago

It should be "from accessing a user's local network", or "from accessing users' local networks".

AndriyKunitsyn 19 hours ago

Why do you think so? How is "a users' local network" significantly different from "a children's book"?

AStonesThrow 2 days ago

Chris Siebenmann weighs in with thoughts on:

Browers[sic] can't feasibly stop web pages from talking to private (local) IP addresses (2019)

https://utcc.utoronto.ca/~cks/space/blog/web/BrowsersAndLoca...

kccqzy 2 days ago

The split-horizon DNS model mentioned in that article is, to me, insane. Your DNS responses should not change based on what network you are connected to. It breaks so many things. For one, caching breaks, because DNS caching is simplistic and responses are only cached with a TTL: there's no way to tell your OS to associate a cached DNS response with a particular network.
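
For reference, the setup being criticized typically looks like BIND 9 "views", which answer with different zone data depending on the client's source address (zone names and file names here are illustrative):

```
// Sketch of split-horizon DNS with BIND 9 views.
view "internal" {
    match-clients { 10.0.0.0/8; 192.168.0.0/16; localhost; };
    zone "corp.example.com" {
        type master;
        file "db.corp.example.com.internal";  // private addresses
    };
};

view "external" {
    match-clients { any; };
    zone "corp.example.com" {
        type master;
        file "db.corp.example.com.external";  // public addresses only
    };
};
```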

I understand why some companies want this, but doing it on the DNS level is a massive hack.

If I were the decision maker I would break that use case. (Chrome probably wouldn't though.)

parliament32 2 days ago

> Your DNS responses should not change based on what network you are connected to.

GeoDNS and similar are very broadly used by services you definitely use every day. Your DNS responses change all the time depending on what network you're connecting from.

Further: why would I want my private hosts to be resolvable outside my networks?

Of course DNS responses should change depending on what network you're on.

kccqzy 2 days ago

kuschku 2 days ago

I'm surprised you've never seen this before.

Especially for universities it's very common to have the same hostname resolve to different servers, and provide different results, depending on whether you're inside the university network or not.

Some sites may require login if you're accessing them from the internet, but are freely accessible from the intranet.

Others may provide read-write access from inside, but limited read-only access from the outside.

Similar situations with split-horizon DNS are also common in corporate intranets or for people hosting Plex servers.

Ultimately all these issues are caused by NAT and would disappear if we switched to IPv6, but that would also circumvent the OP proposal.

kccqzy 2 days ago

rs186 a day ago

Why is this a Chrome thing, not an Android thing?

I get that this could happen on any OS, and the proposal is from browser maker's perspective. But what about the other side of things, an app (not necessarily browser) talking to arbitrary localhost address?

will4274 11 hours ago

Basically any inter-process communication (IPC). https://en.wikipedia.org/wiki/Inter-process_communication . There are fancier IPC mechanisms, but none as widely supported as just sending arbitrary data over a socket. It wouldn't surprise me if e.g. this is how Chrome processes communicate with each other.
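
A minimal sketch of that pattern in Python (no HTTP, just a loopback socket; the names are mine): one thread plays the locally-installed helper, the main thread the client:

```python
import socket
import threading

def serve_once(listener: socket.socket) -> None:
    """Accept a single connection and echo back a tagged reply --
    the localhost-helper pattern in miniature."""
    conn, _addr = listener.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(b"pong:" + data)

# Bind to the loopback interface only; port 0 lets the OS pick a free port.
listener = socket.create_server(("127.0.0.1", 0))
port = listener.getsockname()[1]
helper = threading.Thread(target=serve_once, args=(listener,))
helper.start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"ping")
    reply = client.recv(1024)

helper.join()
listener.close()
print(reply)  # b'pong:ping'
```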

moktonar 2 days ago

The web is currently just “controlled code execution” on your device. This will never work if not done properly. We need a real “web 3.0” where web apps can run natively and containerized, but done correctly, where they are properly sandboxed. This will bring performance and security.

graemep a day ago

The underlying problem is that we are trying to run untrusted code safely, with very few restrictions on its capabilities.

klabb3 a day ago

Disagree. Untrusted code was thought to be a meaningful term 20-30 years ago when you ran desktop OSs with big name software like Microsoft Word and Adobe, and games. What happened in reality is that this fence had false positives (ie Meta being one of your main adversaries) and an enormous amount of false negatives (all indie or small devs that would have their apps classified as viruses).

The model we need isn’t a boolean form of trust, but rather capabilities and permissions on a per-app, per-site or per-vendor basis. We already know this, but it’s incredibly tricky to design, retrofit and explain. Mobile OSs did a lot here, even if they are nowhere near perfect. For instance, they allow apps (by default even) to have private data that isn’t accessible from other apps on the same device.

Whether the code runs in an app or on a website isn’t actually important. There is no fundamental reason for the web to be constrained except user expectations and the design of permission systems.

andromaton 15 hours ago

This used to cause malicious sites to reboot home internet routers around 2013.

zajio1am 2 days ago

This seems like a silly solution, considering we are in the middle of IPv6 transition, where local networks use public addresses.

jeroenhd 2 days ago

Even IPv6 has local devices. Determining whether that's a /64 or a /56 network may need some work, but the concept isn't all that different. Plus, you have ::1 and fe80::, of course.

rerdavies a day ago

Whatever happened to IPv6 site-local and link-local address ranges (address ranges that were specifically defined as ones that would not cross router or WAN boundaries)? They were in the original IPv6 standards, but don't seem to be implemented or supported. Or at least they aren't implemented or supported by my completely unconfigurable home cable router provided by my ISP.

fulafel a day ago

IPv6 in normal Ethernet/WLAN-like uses requires link-local addresses for neighbour discovery (the equivalent of v4's ARP) to function, so it's very likely it works. It's not meant for normal application usage, though. Site-local was phased out in favour of ULAs etc.

But if you're not using global addresses you're probably doing it wrong. Global addressing doesn't mean you're globally reachable; confusing addressing with reachability is the source of a lot of misunderstandings. You can think of it as "everyone gets their own piece of unique address space, not routed unless you want it to be".

MBCook 2 days ago

So because IPv6 exists we shouldn’t even try?

It’s insane to me that random internet sites can try to poke at my network or local system for any purpose without me knowing and approving it.

With all we do for security these days this is such a massive hole it defies belief. Ever since I first saw an enterprise thing that just expected end users to run a local utility (really an embedded web server) for their website to talk to, I've been amazed this hasn't been shut down.

mbreese 2 days ago

Even in this case, it could be useful to limit the access websites have to local servers within your subnet (/64, etc), which might be a better way to define the “local” network.

(And then corporate/enterprise managed Chrome installs could have specific subnets added to the allow list)

eternityforest a day ago

I really hope this gets implemented, and more importantly, I really hope they make it possible to access an HTTP local site from an HTTPS domain.

There are so many excellent home automation and media/entertainment use cases for something like this.

b0a04gl 2 days ago

this thing’s leaking. localhost ain’t private if random sites can hit it and get responses. devices still exposing ports like it’s 2003. prompts don’t help, people just click through till it goes away. cors not doing much, it’s just noise now. issue’s been sitting there forever, everyone patches on top but none of these local services even check who’s knocking. just answers. every time.

similar thread: https://news.ycombinator.com/item?id=44179276

elansx a day ago

The sooner this happens, the better.

Hnrobert42 2 days ago

Ironic, given that on my Mac, Chrome always asks to find other devices on my network but Firefox never does.

parliament32 2 days ago

Won't this break every local-device oauth flow?

calibas a day ago

Choose one:

Web browsers use sandboxing to keep you safe.

Web browsers can portscan your local network.

otherayden 2 days ago

This seems like such a no-brainer, I’m shocked this isn’t already something sites need explicit permission to do

grahamj a day ago

I don’t see this mentioned anywhere but Safari on iOS already does this. If you try to access a local network endpoint you’ll be asked to allow it by Safari, and the permission is per-site.

gostsamo 2 days ago

What is so hard about blocking apps on Android from listening on random ports without permission?

jeroenhd 2 days ago

The same thing that makes blocking ports on iOS and macOS so hard: there's barely any firewall on these devices by default, and the ones users may find cause more problems than users will ever think they solve.

Listening on a specific port is one of the most basic things software can possibly do. What's next, blocking apps from reading files?

Plus, this is also about blocking your phone's browser from accessing your printer, your router, or that docker container you're running without a password.

elric a day ago

That doesn't seem right. Can't speak to macOS, but on Android every application is sandboxed. Restricting its capabilities is trivial. Android apps certainly ARE blocked from reading files, except for some files in its storage and files the user grants it access to.

Adding two Android permissions would fix this entire class of exploits: "run local network service", and "access local network services" (maybe with a whitelist).

zb3 2 days ago

It's not only about android, it's about exploiting local services too..

fulafel 2 days ago

A browser can't tell if a site is on the local network. Ambiguous addresses may not be on the local network and conversely a local network may use global addresses especially with v6.

phkahler 2 days ago

This should not be possible in the first place. There is no legitimate reason for it. Having users grant "consent" is just a way to make it more OK, not to stop it.

auxiliarymoose 2 days ago

There are definitely legitimate reasons—for example, a browser-based CAD system communicating with a 3D mouse.

cwilby 2 days ago

Is it just me or am I not seeing any example that isn't pure theory?

And if it is just me, fine I'll jump in - they should also make it so that users have to approve local network access three times. I worry about the theoretical security implications that come after they only approve local network access once.

numpad0 2 days ago

Personally I had completely forgotten that anyone and anything can do this right now.

TLDR, IIUC: right now, random websites can try accessing content on local IPs. They can blind-load e.g. http://192.168.0.1/cgi-bin/login.cgi from JavaScript, iterating through a gigantic malicious list of such known useful URLs, then grep and send back whatever they want to share with advertisers, or try POSTing backdoors to the printer's update page. No, we don't need that.

Of course, OTOH, many webapps today use localhost access to pass tokens and to talk to cooperating apps, but you only need access to 127.0.0.0/8 for that, which is harder to abuse, so that range can be exempted by default.

Disabling this, as proposed, does not affect your ability to open http://192.168.0.1/login.html, as that's just another "web" site. If JS on http://myNAS.local/search-local.html wants to access http://myLaptop.local:8000/myNasDesktopAppRemotingApi, only then do you have to click some buttons to allow it.

Edit: uBlock Origin has filter for it[1]; was unchecked in mine.

1: https://news.ycombinator.com/item?id=44184799

MBCook 2 days ago

> so that range can be default exempted

I disagree. I know it’s done, but I don’t think that makes it safe or smart.

Require the user to OK it and require the server to send a header with the one _exact_ port it will access. Require that the local server _must_ use CORS and allow that server.

No website not loaded from localhost should ever be allowed to just hit random local/private IPs and ports without explicit permission.

reassess_blind a day ago

The server has to allow cross origin requests for it to return a response though, right?

Pxtl 2 days ago

Honestly I think cross-site requests were a mistake. Tracking cookies, hacks, XSS attacks, etc.

My relationship is with your site. If you want to outsource that to some other domain, do that on your servers, not in my browser.

elric a day ago

The mistake was putting CORS on the server side. It should have been part of the browser. "Facebook.com wants to access foo.example.com: y/n?"

But then we would have had to educate users, and ad peddlers would have lost revenue.

AStonesThrow 2 days ago

Cross-site requests have been built in to the design of the WWW since the beginning. The whole idea of hyperlinking from one place to another, and amalgamating media from multiple sites into a single page, is the essence of the World Wide Web that Tim Berners-Lee conceived at CERN, based on the HyperCard stacks and the Gopher and WAIS services that had preceded it.

Of course it was only later that cookies and scripting and low-trust networks were introduced.

The WWW was conceived as more of a "desktop publishing" metaphor, where pages could be formatted and multimedia presentations could be made and served to the public. It was later that the browser was harnessed as a cross-platform application delivery front-end.

Also, many sites do carefully try to guard against "linking out" or letting the user escape their walled gardens without a warning or disclaimer. As much as they may rely on third-party analytics and ad servers, most web masters want the users to remain on their site, interacting with the same site, without following an external link that would end their engagement or web session.

elric a day ago

CORS != hyperlinks. CORS is about random websites in your browser accessing other domains without your say-so. Websites doing stuff behind your back does feel antithetical to Tim Berners-Lee's ideals...

Pxtl 2 days ago

I'm aware of that but obviously there's a huge difference between the user clicking a link and navigating to a page on another domain and the site making that request on the user's behalf for a blob of JS.

owebmaster 2 days ago

I propose restricting android apps, not websites.

jeroenhd 2 days ago

Android apps need UDP port binding to function. You can't do QUIC without UDP. Of course you can (should) restrict localhost bound ports to the namespaces of individual apps, but there is no easy solution to this problem at the moment.

If you rely on users having to click "yes", then you're just making phones harder to use because everyone still using Facebook or Instagram will just click whatever buttons make the app work.

On the other hand, I have yet to come up with a good reason why arbitrary websites need to set up direct connections to devices within the local network.

There's the IPv6 argument against the proposed measures, which requires work to determine if an address is local or global, but the IPv6 space is also much more difficult to enumerate than the IPv4 space that some websites try to scan. That doesn't mean IPv4 addresses shouldn't be protected at all, either. Even with an IPv6-shaped hole, blocking local networks (both IPv4 and local IPv6) by default makes sense for websites originating from outside.

IE did something very similar to this decades ago. They also had a system for displaying details about websites' privacy policies and data sharing. It's almost disheartening to see we're trying to come up with solutions to these problems again.

bmacho a day ago

Android apps obviously shouldn't be able to just open or read a global communication channel on your device. But this applies to websites too.

Joel_Mckay a day ago

Thus ignoring local private web servers, and bypassing local network administered zone policy.

Seems like a sleazy move to draw down even more user DNS traffic data, and a worse solution than the default mitigation policy in NoScript =3

naikrovek 2 days ago

Why can browsers do the kinds of things they do at all?

Why does a web browser need USB or Bluetooth support? It doesn't.

Browsers should not be the universal platform. They’ve become the universal attack vector.

auxiliarymoose 2 days ago

With WebUSB, you can program a microcontroller without needing to install local software. With Web Bluetooth, you can wirelessly capture data from + send commands to that microcontroller.

As a developer, these standards prevent you from needing to maintain separate implementations for Windows/macOS/Linux/Android.

As a user, they let you grant and revoke sandbox permissions in a granular way, including fully removing the web app from your computer.

Browsers provide a great cross-platform sandbox and make it much easier to develop secure software across all platforms.

WebUSB and Web Bluetooth are opt-in when the site requests a connection/permission, as opposed to unlimited access by default for native apps. And if you don't want to use them, you can choose a browser that doesn't implement those standards.

What other platform (outside of web browsers) is a good alternative for securely developing cross-platform software that interacts with hardware?

naikrovek 2 days ago

I’m ok with needing non-browser software for those things.

> Browsers provide a great cross-platform sandbox and make it much easier to develop secure software across all platforms.

Sure, until advertising companies find ways around and through those sandboxes, because browser authors want browsers to be capable of ever more in the name of a cross-platform solution. The more a browser can do, the more surface area the sandbox has. (An advertising company makes the most popular browser, by the way.)

> What other platform (outside of web browsers) is a good alternative for securely developing cross-platform software that interacts with hardware?

There isn’t one, other than maybe video game engines, but it doesn’t matter. OS vendors need to work to make cross-platform software possible; it’s their fault we need a cross-platform solution at all. Every OS is a construct, and they were constructed to be different for arbitrary reasons.

A good app-permission model in the browser is much more likely to happen, but I don’t see that really happening, either. “Too inconvenient for users [and our own in-house advertisers/malware authors]” will be the reason.

MacOS handles permissions pretty well, but it could do better. If something wants local network permission, the user gets prompted. If the user says no, those network requests fail. Same with filesystem access. Linux will never have anything like this, nor will Windows, but it’s what security looks like, probably.

Ultimately users will say yes to those prompts: as soon as users can say "no" on all platforms, sites will simply gate functionality behind those permissions, because their authors want that data so badly.

The only thing that is really going to stop behavior like this is law, and that is NEVER going to happen in the US.

So, short of laws, browsers themselves must stop doing stupid crap like allowing local network access from sites that aren’t on the local network, and nonsense stuff like WebUSB. We need to give up on the idea that anyone can be safe on a platform when we want that platform to be able to do anything. Browsers must have boundaries.

Operating systems should be the police, probably, and not browsers. Web stuff is already slow as hell, and browsers should be less capable, not more capable for both security reasons and speed reasons.

xyst 2 days ago

Advertising firms hate this.

hulitu 2 days ago

> A proposal to restrict sites from accessing a users' local network

A proposal to treat web browsers as malware? Why would a web browser connect to a socket/the internet?

loumf 2 days ago

The proposal is directed at the websites running in the browser (using JS, embedded images, or whatever), not at the code that implements the browser itself.
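
The gating decision such a proposal implies can be sketched as an ordering of address spaces: a request crossing from a more-public space to a less-public one triggers the restriction. This is a simplification with hypothetical names, not the spec's actual algorithm:

```python
from enum import IntEnum
import ipaddress

class AddressSpace(IntEnum):
    # Lower value = more private
    LOOPBACK = 0
    LOCAL = 1
    PUBLIC = 2

def space_of(addr: str) -> AddressSpace:
    ip = ipaddress.ip_address(addr)
    if ip.is_loopback:
        return AddressSpace.LOOPBACK
    if ip.is_private or ip.is_link_local:
        return AddressSpace.LOCAL
    return AddressSpace.PUBLIC

def needs_permission(initiator: str, target: str) -> bool:
    # Only requests that reach *inward* (toward a more private space)
    # would require the user's explicit consent.
    return space_of(target) < space_of(initiator)

print(needs_permission("93.184.216.34", "192.168.1.1"))  # True: public site probing the LAN
print(needs_permission("192.168.1.2", "192.168.1.1"))    # False: same address space
```

Note this gates the page's *origin* server address against the *request target*, regardless of whether the request is a fetch, an embedded image, or a script tag, which matches the comment above: it is the site's traffic being policed, not the browser's.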

hello_computer 2 days ago

just the fact that this comes from google is a hard pass for me. they sell so many adwords scams that they clearly do not give a damn about security. “security” from google is just another one of their trojan horses.

fn-mote 2 days ago

Don't post shallow dismissals. The same company runs Project Zero [1], which has a major positive security impact.

[1]: https://googleprojectzero.blogspot.com/

hello_computer 2 days ago

project zero is ZERO compared to the millions of little old ladies around the world getting scammed through adwords. only security big g cares about is its own. they have the tools to laser-in on and punish the subtlest of wrongthink on youtube, yet it’s just too tall of an order to focus the same laser on tech support scammers…

themikesanto 2 days ago

Google loves wreaking havoc on web standards. Is there really anything anyone can do about it at this point? The number of us using alternative browsers is a drop in the bucket compared to Chrome's market share.

charcircuit 2 days ago

Google open-sources its implementations, which any other browser is free to use.

bethekidyouwant a day ago

Make this malicious website and show me that it works. I have doubts.

gnarbarian 2 days ago

I don't like the implications of this. Say you want to host a game that has a LAN play component; that would be disallowed.

neuroelectron 2 days ago

The CIA isn't going to like this. I bet that Google monopoly case suddenly reaches a new resolution.

zelon88 2 days ago

I understand the idea behind it and am still kinda chewing on the scope of it all. It will probably break some enterprise applications and cause some help desk and group policy/profile headaches.

It would be nice to know when a site is probing the local network. But by the same token, here is Google once again putting barriers on self sufficiency and using them to promote their PaaS goals.

They'll gladly narc on your self-hosted application doing what it's supposed to do, but what about the 23 separate calls to Google CDNs, ads, fonts, etc. that every website has your browser make?

I tend to believe this particular functionality is no longer of any use to Google, which is why they want to deprecate it to raise the barrier to entry for others.

iforgotpassword 2 days ago

Idk, I like the idea of my browser warning me when a random website I visit tries to talk to my network. If there's a legitimate reason, I can still click yes. This is orthogonal to any ads and data collection.

Henchman21 2 days ago

I have this today from macOS. To me it feels more appropriate to have the OS attempt to secure running applications.

happyopossum 2 days ago

Henchman21 2 days ago

I agree that any newly proposed standards for the web coming from Google should be met with a skeptical eye — they aren’t good stewards IMO and are usually self-serving.

I’d be interested in hearing what the folks at Ladybird think of this proposal.

jenny91 a day ago

On a quick look, isn't this a bit antithetical to the concept of the internet as a decentralized, non-hierarchical system? You have to route through the public internet to interoperate with the rest of the network?