OpenBSD: PF queues break the 4 Gbps barrier (undeadly.org)
146 points by defrost 6 hours ago
ralferoo 5 hours ago
In the days when even cheap consumer hardware ships with 2.5G ports, this number seems weirdly low. Does this mean that basically nobody is currently using OpenBSD in the datacentre or anywhere that might be expecting to handle 10G or higher per port, or is it just filtering that's an issue?
I'm not surprised that the issue exists, as even 10 years ago these speeds were uncommon outside of the datacentre; I'm just surprised that nobody has felt a pressing enough need to fix this at some point in the past few years.
Someone 5 hours ago
The article is about allowing bandwidth limits in bits/second that are larger than 2³²-1, not about how fast pf can filter packets.
I guess few people with faster ports felt the need to limit bandwidth for a service to something that’s that large.
FTA:
“OpenBSD's PF packet filter has long supported HFSC traffic shaping with the queue rules in pf.conf(5). However, an internal 32-bit limitation in the HFSC service curve structure (struct hfsc_sc) meant that bandwidth values were silently capped at approximately 4.29 Gbps, the maximum value of a u_int.
With 10G, 25G, and 100G network interfaces now commonplace, OpenBSD devs making huge progress unlocking the kernel for SMP, and adding drivers for cards supporting some of these speeds, this limitation started to get in the way. Configuring bandwidth 10G on a queue would silently wrap around, producing incorrect and unpredictable scheduling behaviour.
A new patch widens the bandwidth fields in the kernel's HFSC scheduler from 32-bit to 64-bit integers, removing this bottleneck entirely.”
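A minimal sketch of that failure mode, assuming nothing about the actual struct hfsc_sc layout (the variable names here are purely illustrative): a bits-per-second value configured as 10G gets truncated modulo 2^32 when stored in a 32-bit field, whereas a 64-bit field holds it intact.

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* 10G in pf.conf means 10 * 10^9 bits per second */
        uint64_t configured = 10ULL * 1000 * 1000 * 1000;

        /* Old behaviour: bandwidth kept in a 32-bit field, so anything
         * above 2^32 - 1 (~4.29 Gbps) wrapped silently. */
        uint32_t stored32 = (uint32_t)configured;

        /* New behaviour: a 64-bit field holds the full value. */
        uint64_t stored64 = configured;

        printf("configured:   %llu bps\n", (unsigned long long)configured);
        printf("32-bit field: %u bps (~%.2f Gbps)\n", stored32, stored32 / 1e9);
        printf("64-bit field: %llu bps (~%.2f Gbps)\n",
               (unsigned long long)stored64, stored64 / 1e9);
        return 0;
    }

With 10G configured, the wrapped 32-bit value comes out to roughly 1.41 Gbps, which is why the result was incorrect scheduling rather than an outright rejection of the config.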
nine_k 4 hours ago
> silently wrap around, producing incorrect and unpredictable
Now I'm more scared to use OpenBSD than I was a minute before.
I strongly prefer software that fails loudly and explicitly.
kaashif 3 hours ago
traceroute66 5 hours ago
> Does this mean that basically nobody is currently using OpenBSD in the datacentre or anywhere
Half the problem is lack of proper drivers. I love OpenBSD but all the fibre stuff is just a bit half-baked.
For a long time OpenBSD didn't even have DOM (light-level monitoring etc.) exposed in its 1g fibre drivers. Stuff like that automatically kills off OpenBSD as a choice for datacentres where DOM stats are a non-negotiable hard requirement as they are so critical to troubleshooting.
OpenBSD finally introduced DOM stats for SFPs somewhere around 2020–2021, but it doesn't always work; it depends on whether you have the right magic combination of SFP and card manufacturer. On FreeBSD, meanwhile, it Just Works (TM).
And then overall, for higher-speed optics, FreeBSD simply remains light-years ahead (forgive the pun!). For example, Deciso make nice little router boxes with 10G SFP+ on them; FreeBSD has the drivers out of the box, OpenBSD doesn't. And that's only an SFP+ example; it's basically tumbleweed rolling through a desert once you start venturing up to QSFP etc. ...
CursedSilicon 3 hours ago
How much work is it to port drivers between FreeBSD and OpenBSD?
SoftTalker 2 hours ago
traceroute66 2 hours ago
atmosx 2 hours ago
PF itself is not tailored towards ISPs and/or big orgs. IPFW (FreeBSD) is more powerful and flexible.
OpenBSD shines as a secure all-in-one router SOHO solution. And it’s great because you get all the software you need in the base system. PF is intuitive and easy to work with, even for non network gurus.
asmnzxklopqw an hour ago
OpenBSD was a great OS back in the late 90s and even early 2000s. In some cases it was competing neck and neck with Linux. Since then, well, Linux grew a lot and OpenBSD not so much. There are multiple causes for this; I'll only go through a few: Linux has more support from the big companies; the huge difference in userbase numbers; Linux is more welcoming to new users. And the difference is only growing.
dim13 29 minutes ago
"OpenBSD does not want to attract GNU newbies." misc@
And that, IMHO, is a good thing.
ffk 3 hours ago
A lot of the time once you get into multi-gig+ territory the answer isn't "make the kernel faster," it's "stop doing it in the kernel."
You end up pushing the hot path out to userland where you can actually scale across cores (DPDK/netmap/XDP style approaches), batch packets, and then DMA straight to and from the NIC. The kernel becomes more of a control plane than the data plane.
PF/ALTQ is very much in the traditional in-kernel, per-packet model, so it hits those limits sooner.
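For a concrete flavour of that model, here's a minimal sketch of a userland receive loop using netmap's nm_open()/nm_nextpkt() helpers (the interface name "netmap:ix0" is an assumption for the example, and a real DPDK/XDP-style pipeline would add batching, multiple rings, and per-core workers):

    /* Sketch of a kernel-bypass receive loop via netmap. */
    #include <stdio.h>
    #include <poll.h>
    #define NETMAP_WITH_LIBS
    #include <net/netmap_user.h>

    int main(void) {
        /* Attach to the NIC's rings, bypassing the normal kernel stack. */
        struct nm_desc *d = nm_open("netmap:ix0", NULL, 0, NULL);
        if (d == NULL) {
            perror("nm_open");
            return 1;
        }

        struct pollfd pfd = { .fd = d->fd, .events = POLLIN };
        struct nm_pkthdr hdr;

        for (;;) {
            poll(&pfd, 1, -1);                   /* wait for packets */
            const u_char *buf;
            while ((buf = nm_nextpkt(d, &hdr)) != NULL) {
                /* Filtering / shaping decision would go here; the kernel
                 * acts only as the control plane. */
                (void)buf;
            }
        }
        /* not reached */
        nm_close(d);
    }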
toast0 3 hours ago
The big things to avoid are crossing the user/kernel divide and communication across cores.
Staying in the kernel is approximately the same as bypassing the kernel (caveats apply); for a packet filtering / smoothing use case, I don't think kernel bypass is needed. You probably want to tune NIC hashing so that inbound traffic for a given shaping queue arrives in the same NIC rx queue; but you probably want that in a kernel bypass case as well. Userspace is certainly nicer during development, as it's easier to push changes, but in 2026, it feels like traffic shaping has pretty static requirements and letting the kernel do all the work feels reasonable to me.
Otoh, OpenBSD is pretty far behind the curve on SMP and all that (I think their PF now has some SMP support, but maybe it's still in development; I'd bet there's lots of room to reduce cross-core communication as well, but I haven't examined it). You can't pin userspace threads to CPUs, I doubt their kernel data structures are built to reduce communication, etc. Kernel bypass won't help as much as you'd hope, if it's even available (which it might not be), because you can't arrange the userspace side to limit cross-core communication.
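To illustrate the hashing point above in software (a toy stand-in for what NIC RSS does in hardware with a Toeplitz hash; the hash function and queue count are made up for the example): hashing the flow's 5-tuple and reducing it modulo the number of rx queues keeps every packet of a given flow on the same queue, and therefore on the same core and the same shaping state.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Toy flow-to-rx-queue mapping; real NICs do this in hardware. */
    struct flow {
        uint32_t src_ip, dst_ip;
        uint16_t src_port, dst_port;
        uint8_t  proto;
    };

    /* FNV-1a; any stable hash over the 5-tuple works for the sketch. */
    static uint32_t fnv1a(uint32_t h, const void *data, size_t len) {
        const uint8_t *p = data;
        while (len--) { h ^= *p++; h *= 16777619u; }
        return h;
    }

    static unsigned flow_queue(const struct flow *f, unsigned nqueues) {
        uint32_t h = 2166136261u;
        h = fnv1a(h, &f->src_ip,   sizeof f->src_ip);
        h = fnv1a(h, &f->dst_ip,   sizeof f->dst_ip);
        h = fnv1a(h, &f->src_port, sizeof f->src_port);
        h = fnv1a(h, &f->dst_port, sizeof f->dst_port);
        h = fnv1a(h, &f->proto,    sizeof f->proto);
        return h % nqueues;
    }

    int main(void) {
        struct flow f = { 0x0a000001, 0x0a000002, 40000, 443, 6 };
        /* Every packet of this flow lands on the same rx queue. */
        printf("flow maps to rx queue %u of 8\n", flow_queue(&f, 8));
        return 0;
    }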
rpcope1 an hour ago
cperciva 3 hours ago
> pushing the hot path out to userland where you can actually scale across cores
What sort of kernel do you have which can't scale across cores?
Melatonic 2 hours ago
Isn't OpenBSD mainly used for security testing, or do I have it wrong? I'd be surprised if it was used in production datacenter networking hardware at all. It seems like most people would use one of the proprietary implementations (which would likely include drivers written specifically for that hardware) or something like FreeBSD.
SoftTalker 2 hours ago
It's widely used as a router, that's one of its primary uses. But not sure to what scale, likely at small orgs not at major ISPs.
But, OpenBSD is a project by and for its developers. They use it and develop it to do what they want; they don't really care what anyone else does or doesn't do with it.
lstodd an hour ago
You don't need 4 Gbps pf queues or even fiber on every single machine in a datacenter. So be surprised: it is widely used for its simplicity and reliability, not to mention its security compared to those proprietary implementations you speak of (may they rot in hell).
toast0 5 hours ago
> Does this mean that basically nobody is currently using OpenBSD in the datacentre or anywhere that might be expecting to handle 10G or higher per port, or is it just filtering that's an issue?
This looks like it only affects bandwidth limiting. I suspect it's pretty niche to use OpenBSD as a traffic shaper at 10G+, and if you did, I'd imagine most of the queue limits would tend toward significantly less than 4G.
IcePic 5 hours ago
One thing could also be that by the time you have 10GE uplinks, shaping is not as important.
When we had 512 kbit links, prioritizing VoIP would be a thing, and for asymmetric links like 128/512 kbit it was prudent to prioritize small packets (ssh) and TCP ACKs on the outgoing link or the downloads would suffer. But when you have 5/10/25GE, not being able to stick an ACK packet in the queue is perhaps not the main issue.
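As a toy model of that old trick (purely illustrative; a real shaper such as HFSC does this per-queue with service curves rather than a size check): put short packets and bare ACKs in a higher-priority class and always drain it first, so they never sit behind a bulk upload.

    #include <stdio.h>

    /* Two-level strict priority: small packets (ACKs, ssh keystrokes)
     * jump ahead of bulk traffic on a slow uplink. */
    enum { PRIO_HIGH, PRIO_LOW };

    static int classify(int pkt_len) {
        return pkt_len <= 128 ? PRIO_HIGH : PRIO_LOW;  /* size heuristic */
    }

    int main(void) {
        int pkts[] = { 1500, 40, 1500, 52, 1500 };     /* bytes on the wire */
        int n = sizeof pkts / sizeof pkts[0];

        /* Drain the high-priority class first, then the bulk class. */
        for (int prio = PRIO_HIGH; prio <= PRIO_LOW; prio++)
            for (int i = 0; i < n; i++)
                if (classify(pkts[i]) == prio)
                    printf("send %4d-byte packet (class %d)\n", pkts[i], prio);
        return 0;
    }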
hrmtst93837 2 hours ago
At 10G and up, shaping still matters. Once you mix backups, CCTV, voice, and customer circuits on the same uplink, a brief saturation event can dump enough queueing delay into the path that the link looks fine on paper while the stuff people actually notice starts glitching, and latency budgets are tight. Fat pipes don't remove the need for control, they just make the billing mistakes more expensive.
Melatonic 2 hours ago
citrin_ru 5 hours ago
AFAIK performance is not a priority for the OpenBSD project - security is (and other related qualities, like code that is easy to understand and maintain). FreeBSD (at least when I followed it several years ago) had better performance both for ipfw and for its own PF fork (which is not fully compatible with the OpenBSD one).
traceroute66 4 hours ago
> AFAIK performance is not a priority for OpenBSD project - security is
TBF that was the case historically, but they have absolutely been putting in an effort into performance in their more recent releases.
Lots of stuff that used to be simply horrific on OpenBSD, such as multi-peer BGP full-table refreshes, is SIGNIFICANTLY better in the last couple of years.
Clearly still not as good as FreeBSD, but compared to what it was...
haunter 3 hours ago
My local fiber finally offers a 4 Gbps connection but I'm not even sure what to use it for lol. I have 2 Gbps and that's more than enough already.
shpingbing 2 hours ago
I finally talked myself into going to 3 Gbps (and I'm working on getting the internal network to 10). Internal transfers to the NAS will be much faster, and downloading AI models should go from ~8 minutes to less than 3 minutes. Is it necessary? Not exactly. But it's super nice.
rayiner 5 hours ago
Can pf actually shape at speeds above 4 gbps?
gigatexal 3 hours ago
It’s still single threaded. PF in FreeBSD is multithreaded. For home WANs I’d be using OpenBSD. For anything else, FreeBSD.
bell-cot 6 hours ago
"Values up to 999G are supported, more than enough for interfaces today and the future." - Article
"When we set the upper limit of PC-DOS at 640K, we thought nobody would ever need that much memory." - Bill Gates
throw0101d 6 hours ago
> "Values up to 999G are supported, more than enough for interfaces today and the future." - Article
Especially given that IEEE 802.3dj is working on 1.6T / 1600G, and is expected to publish the final spec in Summer/Autumn 2026:
* https://en.wikipedia.org/wiki/Terabit_Ethernet
Currently these interfaces are only on switches, but there are already NICs at 800G (P1800GO, Thor Ultra, ConnectX-8/9), so if you LACP/LAGG two together your bond is at 1600G.
arsome 5 hours ago
If you're moving those kind of speeds you're probably not doing packet filtering in software.
throw0101d 5 hours ago
himata4113 5 hours ago
mulmen an hour ago
bitfilped 5 hours ago
Yes, we're already running 800G networks, so this phrasing seems really silly to me.
WhyNotHugo 6 hours ago
Honestly, I'm really curious about this number. 10 bits gets you 1024, so why 999G specifically?
abound 6 hours ago
Looking at the patch itself (linked in the article), the description has this:
> We now support configuring bandwidth up to ~1 Tbps (overflow in m2sm at m > 2^40).
So I think that's it: 2^40 is ~1.099 trillion.
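A quick sanity check of that arithmetic (nothing here is from the patch itself, just the numbers):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint64_t overflow = 1ULL << 40;              /* where m2sm is said to overflow */
        uint64_t cap      = 999ULL * 1000000000ULL;  /* the 999G validation cap */

        printf("2^40     = %llu bps (~%.3f Tbps)\n",
               (unsigned long long)overflow, overflow / 1e12);
        printf("999G cap = %llu bps (~%.3f Tbps)\n",
               (unsigned long long)cap, cap / 1e12);
        return 0;
    }

So 999G is presumably just the largest three-digit value with the G suffix that still sits safely below the ~1.1 Tbps overflow point.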
elevation 6 hours ago
Looks like an arbitrary validation cap. By the time we're maxing out the 64-bit underlying representation we probably won't be using Ethernet any more.
palmotea 6 hours ago
bell-cot 5 hours ago
chokan 4 hours ago