Big GPUs don't need big PCs (jeffgeerling.com)

188 points by mikece 14 hours ago

pjmlp 9 minutes ago

Of course, just go to any computer store: most gamer setups on affordable budgets go with the combo "beefy GPU + an i5" instead of an i7 or i9 Intel CPU.

Waterluvian 4 hours ago

At what point do the OEMs begin to realize they don’t have to follow the current mindset of attaching a GPU to a PC and instead sell what looks like a GPU with a PC built into it?

pjmlp 10 minutes ago

So basically going back to the old days of Amiga and Atari, in a certain sense, when PCs could only display text.

nightshift1 3 hours ago

Exactly. With the Intel-Nvidia partnership signed this September, I expect to see some high-performance single-board computers being released very soon. I don't think the ATX form factor will survive another 30 years.

numpad0 12 hours ago

Not sure what was unexpected about the multi GPU part.

It's very well known that most LLM frameworks, including llama.cpp, split models by layers, which have a sequential dependency, so multi-GPU setups run effectively serialized (only one GPU busy at a time) unless there are n_gpu users/tasks running in parallel. It's also known that some GPUs are faster at "prompt processing" and some at "token generation", so combining Radeon and NVIDIA sometimes does something useful. Reportedly the inter-layer transfer sizes are in the kilobyte range, so PCIe x1 is plenty, or something along those lines.
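
A toy sketch (assuming two CUDA devices; not any framework's actual code) of why that layer split serializes the GPUs:

```python
# With the model split by layers across GPUs, each chunk needs the previous
# chunk's output, so a single request keeps only one GPU busy at a time;
# the hand-off itself is tiny (one hidden state per token during decode).
import torch
import torch.nn as nn

def forward_layer_split(hidden: torch.Tensor, chunks: list[nn.Module]) -> torch.Tensor:
    # chunks[i] is assumed to already live on device cuda:i
    for i, chunk in enumerate(chunks):
        hidden = hidden.to(f"cuda:{i}")  # small inter-layer transfer
        hidden = chunk(hidden)           # every other GPU sits idle during this step
    return hidden

if torch.cuda.device_count() >= 2:
    dim = 5120
    chunks = [nn.Sequential(nn.Linear(dim, dim)).to(f"cuda:{i}") for i in range(2)]
    out = forward_layer_split(torch.randn(1, 1, dim, device="cuda:0"), chunks)
```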

It takes an appropriate backend with "tensor parallel" mode support, which splits the neural network parallel to the direction of data flow; that mode also obviously benefits substantially from a good interconnect between GPUs, like PCIe x16 or NVLink/Infinity Fabric bridge cables, and/or inter-GPU DMA over PCIe (called GPU P2P or GPUDirect or some lingo like that).
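
A rough sketch of that tensor-parallel idea; the two-GPU column split and shapes are illustrative only, and the gather step is what makes interconnect bandwidth matter:

```python
# Toy column-parallel linear layer: each GPU holds a slice of the weight
# matrix, the partial matmuls run on both devices, and the results are
# gathered back together. The gather is the step that wants a fast link
# (x16 / NVLink / P2P DMA).
import torch

def column_parallel_linear(x: torch.Tensor, shards: list[torch.Tensor]) -> torch.Tensor:
    # shards[i] is a [dim, dim // n_gpus] slice of the weight, living on cuda:i
    partials = [x.to(w.device) @ w for w in shards]               # per-GPU partial results
    return torch.cat([p.to("cuda:0") for p in partials], dim=-1)  # gather: the costly part

if torch.cuda.device_count() >= 2:
    dim = 4096
    w = torch.randn(dim, dim)
    shards = [w[:, i * dim // 2:(i + 1) * dim // 2].to(f"cuda:{i}") for i in range(2)]
    y = column_parallel_linear(torch.randn(1, dim, device="cuda:0"), shards)
```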

Absent those, I've read somewhere that people can sometimes watch the GPU utilization spike walk across the GPUs one by one in nvtop-style tools.

Looking for a way to break up LLM tasks so that there are multiple tasks to run concurrently would be interesting, maybe by creating one "manager" and a few "delegated engineer" personalities. Or simulating multiple different domains of the brain, such as the speech center, visual cortex, language center, etc., communicating in tokens might be an interesting way to work around this problem.

syntaxing 5 hours ago

There are some technical implementations that make it more efficient, like EXO [1]. Jeff Geerling recently did a review of a 4x Mac Studio cluster with RDMA support, and you can see that EXO has a noticeable advantage [2].

[1] https://github.com/exo-explore/exo [2] https://www.youtube.com/watch?v=x4_RsUxRjKU

zozbot234 11 hours ago

> Looking for a way to break up tasks for LLMs so that there will be multiple tasks to run concurrently would be interesting, maybe like creating one "manager" and few "delegated engineers" personalities.

This is pretty much what "agents" are for. The manager model constructs prompts and contexts that the delegated models can work on in parallel, returning results when they're done.
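
A hand-wavy sketch of that fan-out pattern; `generate()` here is a stub, not any real backend's API, standing in for whatever local inference endpoint serves each GPU:

```python
# The "manager" writes one sub-prompt per worker and the workers run
# concurrently, one per GPU/endpoint.
from concurrent.futures import ThreadPoolExecutor

def generate(prompt: str, endpoint: str) -> str:
    # placeholder: swap in a real call to your local inference server,
    # e.g. an OpenAI-compatible /v1/completions endpoint per GPU
    return f"[{endpoint}] response to: {prompt}"

def run_agents(task: str, endpoints: list[str]) -> list[str]:
    # manager step: break the task up so each worker has something to chew on
    subtasks = [f"Subtask {i + 1} of: {task}" for i in range(len(endpoints))]
    with ThreadPoolExecutor(max_workers=len(endpoints)) as pool:
        return list(pool.map(generate, subtasks, endpoints))  # workers in parallel

print(run_agents("summarize this repo", ["http://gpu0:8080", "http://gpu1:8080"]))
```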

nodja 10 hours ago

> Reportedly the inter-layer transfer sizes are in kilobyte ranges and PCIe x1 is plenty or something.

Not an expert, but napkin math tells me that more often than not this will be on the order of megabytes—not kilobytes—since it scales with sequence length.

Example: Qwen3 30B has a hidden state size of 5120; even quantized to 8 bits, that's 5120 bytes per token, so it passes the MB boundary at just a little over 200 tokens. Still not much of an issue when a single PCIe lane is ~2 GB/s.

I think device to device latency is more of an issue here, but I don't know enough to assert that with confidence.
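
Spelling out that napkin math (the prompt length is an assumption, and the decode case anticipates the reply below about the KV cache):

```python
# Prefill moves the whole prompt's hidden states across each layer-split
# boundary; decode moves just one token's hidden state per step.
hidden_dim = 5120        # Qwen3 30B hidden size, per the comment above
bytes_per_value = 1      # 8-bit activations
prompt_tokens = 2048     # assumed prompt length

prefill_bytes = prompt_tokens * hidden_dim * bytes_per_value  # ~10.5 MB per hand-off
decode_bytes = 1 * hidden_dim * bytes_per_value               # ~5 KB per generated token

pcie_x1 = 2e9  # ~2 GB/s for one lane, as in the comment
print(prefill_bytes / pcie_x1)  # ~5 ms per hand-off during prefill
print(decode_bytes / pcie_x1)   # ~2.6 us per hand-off during decode
```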

remexre 8 hours ago

For each token generated, you only send one token’s worth between layers; the previous tokens are in the KV cache.

yjftsjthsd-h 13 hours ago

I've been kicking this around in my head for a while. If I want to run LLMs locally, a decent GPU is really the only important thing. At that point, the question becomes, roughly: what is the cheapest computer to tack on the side of the GPU? Of course, that assumes that everything does in fact work; unlike OP I am barely in a position to understand e.g. BAR problems, let alone try to fix them, so what I actually did was build a cheap-ish x86 box with a half-decent GPU and called it a day :) But it still is stuck in my brain: there must be a more efficient way to do this, especially if all you need is just enough computer to shuffle data to and from the GPU and serve that over a network connection.

binsquare 12 hours ago

I run a crowd-sourced website to collect data on the best and cheapest hardware setups for local LLMs here: https://inferbench.com/

Source code: https://github.com/BinSquare/inferbench

kilpikaarna 8 hours ago

Nice! Though for older hardware it would be nice if the price reflected the current second-hand market (harder to get data for, I know). E.g. the Nvidia RTX 3070 ranks as the second-best GPU in tok/s/$ even at its MSRP of $499, but you can get one for half that now.

jsight 6 hours ago

It seems like verification might need to be improved a bit? I looked at Mistral-Large-123B. Someone is claiming 12 tokens/sec on a single RTX 3090 at FP16.

Perhaps some filter could cut out submissions that don't really make sense?

seanmcdirmid 12 hours ago

And you don’t want to go the M4 Max/M3 Ultra route? It works well enough for most mid-sized LLMs.

zeusk 12 hours ago

Get the DGX Spark computers? They’re exactly what you’re trying to build.

Gracana 5 hours ago

They’re very slow.

tcdent 12 hours ago

We're not yet to the point where a single PCIe device will get you anything meaningful; IMO 128 GB of RAM available to the GPU is essential.

So while you don't need a ton of compute on the CPU, you do need the ability to address multiple PCIe lanes. A relatively low-spec AMD EPYC processor is fine if the motherboard exposes enough lanes.

p1necone 9 hours ago

I'm holding out for someone to ship a GPU with DIMM slots on it.

skhameneh 12 hours ago

There is plenty that can run within 32/64/96 GB of VRAM. IMO models like Phi-4 are underrated for many simple tasks. Some quantized Gemma 3 variants are quite good as well.

There are larger/better models as well, but those tend to really push the limits of 96 GB.

FWIW, once you start pushing into 128 GB+, the ~500 GB models really start to become attractive, because at that point you're probably wanting just a bit more out of everything.
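
A rough back-of-the-envelope for what fits, assuming weights ≈ parameter count × bytes per parameter; the model sizes and quantization levels below are illustrative, not measurements:

```python
# KV cache and activations need headroom on top of the weight footprint.
def weights_gb(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for name, params, bits in [("Phi-4 (14B, 8-bit)", 14, 8),
                           ("Gemma 3 27B (4-bit)", 27, 4),
                           ("70B-class (4-bit)", 70, 4)]:
    print(f"{name}: ~{weights_gb(params, bits):.0f} GB of weights")
# -> roughly 14 GB, 14 GB, and 35 GB respectively: all comfortably inside
#    96 GB, while ~500 GB-class models obviously are not.
```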

dist-epoch 12 hours ago

This problem was already solved 10 years ago - crypto mining motherboards, which have a large number of PCIe slots, a CPU socket, one memory slot, and not much else.

> Asus made a crypto-mining motherboard that supports up to 20 GPUs

https://www.theverge.com/2018/5/30/17408610/asus-crypto-mini...

For LLMs you'll probably want a different setup, with some memory too, and some M.2 storage.

jsheard 12 hours ago

Those only gave each GPU a single PCIe lane though, since crypto mining barely needed to move any data around. If your application doesn't fit that mould then you'll need a much, much more expensive platform.

skhameneh 12 hours ago

In theory, it’s only sufficient for pipeline parallel due to limited lanes and interconnect bandwidth.

Generally, scalability on consumer GPUs falls off somewhere between 4 and 8 GPUs. Those running more are typically using a higher quantity of smaller GPUs for cost effectiveness.

zozbot234 11 hours ago

M.2 is mostly just a different form factor for PCIe anyway.

Eisenstein 10 hours ago

There is a whole section in here on how to spec out a cheap rig and what to look for:

* https://jabberjabberjabber.github.io/Local-AI-Guide/

omneity 4 hours ago

I wish for a hardware + software solution to enable direct PCIe interconnect using lanes independent from the chipset/CPU. A PCIe mesh of sorts.

With the right software support from, say, PyTorch, this could suddenly turn old GPUs and underpowered PCs like in TFA into very attractive and competitive solutions for training and inference.

snuxoll 4 hours ago

PCIe already allows DMA between peers on the bus, but, as you pointed out, the traces for the lanes have to terminate somewhere. However, it doesn't have to be the CPU (which is, of course, the PCIe root in modern systems) handling the traffic - a PCIe switch may be used to facilitate DMA between devices attached to it, if it supports routing DMA traffic directly.
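
A quick way to see whether a given box exposes that peer-to-peer path from PyTorch (a sketch assuming two CUDA devices; whether the copy crosses a switch or the root complex depends on the topology):

```python
# Check whether the driver/topology allows peer access between two GPUs,
# then do a direct device-to-device copy. `nvidia-smi topo -m` shows what
# link (PIX/PXB/PHB/NVLink) actually sits between each pair.
import torch

if torch.cuda.device_count() >= 2:
    print("P2P 0 -> 1:", torch.cuda.can_device_access_peer(0, 1))
    x = torch.randn(1024, 1024, device="cuda:0")
    y = x.to("cuda:1")  # may use a peer copy instead of bouncing through host RAM
```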

ComputerGuru 3 hours ago

That’s what happened in TFA.

3eb7988a1663 13 hours ago

Datapoints like this really make me reconsider my daily driver. I should be running one of those $300 mini PCs at <20W. With ~flat CPU performance gains, it would be fine for the next 10 years. I can just remote into my beefy workstation when I actually need to do real work. Browsing the web, watching videos, even playing some games are easily within their wheelhouse.

samuelknight 11 hours ago

Switching from my 8-core Ryzen mini-PC to an 8-core Ryzen desktop makes my unit tests run way faster. TDP limits can tip you off to very different performance envelopes in otherwise similar-spec CPUs.

adrian_b 7 hours ago

A full-size desktop computer will always be much faster for any workload that fully utilizes the CPU.

However, a full-size desktop computer seldom makes sense as a personal computer, i.e. as the computer that interfaces to a human via display, keyboard and graphic pointer.

For most of the activities done directly by a human, i.e. reading & editing documents, browsing the Internet, watching movies and so on, a mini-PC is powerful enough. The only exception is playing games designed for big GPUs, but there are many computer users who are not gamers.

In most cases the optimal setup is to use a mini-PC as your personal computer and a full-size desktop as a server on which you can launch any time-consuming tasks, e.g. compilation of big software projects, EDA/CAD simulations, testing suites etc.

The desktop used as a server can use Wake-on-LAN to stay powered off when not needed and wake up whenever it must run some task remotely.
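
For reference, a minimal Wake-on-LAN sender for that setup; the MAC address below is a placeholder:

```python
# The magic packet is 6 bytes of 0xFF followed by the target MAC repeated
# 16 times, sent as a UDP broadcast (port 9 by convention).
import socket

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    payload = bytes.fromhex("ff" * 6 + mac.replace(":", "") * 16)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, (broadcast, port))

wake("aa:bb:cc:dd:ee:ff")  # then ssh in and kick off the long-running job
```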

loeg 8 hours ago

Even if you could cool the full TDP in a micro PC, in a full size desktop you might be able to use a massive AIO radiator with fans running at very slow, very quiet speeds instead of jet turbine howl in the micro case. The quiet and ease of working in a bigger space are mostly a good tradeoff for a slightly larger form factor under a desk.

ekropotin 13 hours ago

As an experiment, I decided to try using a Proxmox VM, with an eGPU and the USB bus passed through to it, as my main PC for browsing and working on hobby projects.

It's just 1 vCPU with 4 GB of RAM, and you know what? It's more than enough for these needs. I think hardware manufacturers have falsely convinced us that every professional needs a beefy laptop to be productive.

reactordev 12 hours ago

I went with a Beelink for this purpose. Works great.

Keeps the desk nice and tidy while “the beasts” roar in a soundproofed closet.

jasonwatkinspdx 7 hours ago

For just basic windows desktop stuff, a $200 NUC has been good enough for like 15 years now.

jonahbenton 14 hours ago

So glad someone did this. I have been running big GPUs on eGPUs connected to spare laptops and thinking: why not Pis?

Wowfunhappy 13 hours ago

I really would have liked to see gaming performance, although I realize it might be difficult to find an AAA game that supports ARM. (Forcing the Pi to emulate x86 with FEX doesn't seem entirely fair.)

3eb7988a1663 13 hours ago

You might have to thread the needle to find a game which does not bottleneck on the CPU.

kgeist 10 hours ago

What about constrained decoding (with JSON schemas)? I noticed my vLLM instance is using 1 CPU at 100%.

kristjansson 12 hours ago

Really, why have the PCI/CPU artifice at all? Apple and Nvidia have the right idea: put the MPP on the same die/package as the CPU.

bigyabai 12 hours ago

> put the MPP on the same die/package as the CPU.

That would help in latency-constrained workloads, but I don't think it would make much of a difference for AI or most HPC applications.

jauntywundrkind 9 hours ago

PCIe 3.0 is the nice, easy, convenient generation where 1 lane = 1 GB/s. Given the overhead, that's pretty close to 10Gb Ethernet speeds (with lower latency, though).

I do wonder how long the cards are going to need host systems at all. We've already seen GPUs with M.2 SSDs attached! The Radeon Pro SSG dates back to 2016! You still need a way to get the model onto the card in the first place and to get work in and out, but a 1GbE port and a small RISC-V chip (which Nvidia already uses for management cores) could suffice. Maybe even an RPi on the card. https://www.techpowerup.com/224434/amd-announces-the-radeon-...

Given the gobs of memory these cards have, they probably don't even need storage; they just need big pipes. Intel had 100GbE on their Xeon & Xeon Phi cores (10x what we saw here!) in 2016! GPUs that just plug into the switch and talk across 400GbE or Ultra Ethernet or switched CXL, and that run semi-independently, feel so sensible, so not outlandish. https://www.servethehome.com/next-generation-interconnect-in...

It's far off for now, but flash makers are also looking at radically many-channel flash, which can provide absurdly high GB/s: High Bandwidth Flash. And potentially integrating some extremely parallel tensor cores on each channel. Switching from DRAM to flash for AI processing could be a colossal win for fitting large models cost-effectively (and perhaps power-efficiently) while still having ridiculous gobs of bandwidth, with the possible added win of doing processing & filtering extremely near the data too. https://www.tomshardware.com/tech-industry/sandisk-and-sk-hy...

lostmsu 12 hours ago

Now compare batched training performance. Or batched inference.

Of course prefill is going to be GPU bound. You only send a few thousand bytes to it, and don't really ask it to return much. But after prefill is done, unless you use batched mode, you aren't really using your GPU for anything more than its VRAM bandwidth.
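
The napkin math behind "only using VRAM bandwidth", with illustrative numbers (a ~30 GB set of weights and a ~1 TB/s card are assumptions):

```python
# With batch size 1, every generated token streams all active weights through
# the GPU once, so tokens/s is roughly bandwidth / weight bytes.
model_gb = 30            # e.g. a ~30B model at 8-bit
bandwidth_gb_s = 1000    # ~1 TB/s-class card

single_stream = bandwidth_gb_s / model_gb  # ~33 tok/s ceiling, compute mostly idle
batched_32 = 32 * single_stream            # batching reuses each weight read, up to the compute limit
print(single_stream, batched_32)
```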

Avlin67 6 hours ago

Tired of Jeff Geerling everywhere...