Ggml.ai joins Hugging Face to ensure the long-term progress of Local AI (github.com)
597 points by lairv 8 hours ago
simonw 5 hours ago
It's hard to overstate the impact Georgi Gerganov and llama.cpp have had on the local model space. He pretty much kicked off the revolution in March 2023, making LLaMA work on consumer laptops.
Here's that README from March 10th 2023 https://github.com/ggml-org/llama.cpp/blob/775328064e69db1eb...
> The main goal is to run the model using 4-bit quantization on a MacBook. [...] This was hacked in an evening - I have no idea if it works correctly.
Hugging Face have been a great open source steward of Transformers, I'm optimistic the same will be true for GGML.
I wrote a bit about this here: https://simonwillison.net/2026/Feb/20/ggmlai-joins-hugging-f...
mythz 8 hours ago
I consider HuggingFace more "Open AI" than OpenAI - one of the few quiet heroes (along with Chinese OSS) helping bring on-premise AI to the masses.
I'm old enough to remember when traffic was expensive, so I've no idea how they've managed to offer free hosting for so many models. Hopefully it's backed by a sustainable business model, as the ecosystem would be meaningfully worse without them.
We still need good value hardware to run Kimi/GLM in-house, but at least we've got the weights and distribution sorted.
data-ottawa 7 hours ago
Can we toss in the work unsloth does too as an unsung hero?
They provide excellent documentation and they’re often very quick to get high quality quants up in major formats. They’re a very trustworthy brand.
disiplus 7 hours ago
Yeah, they're the good guys. I suspect the open source work is mostly advertising for them to sell consulting and services to enterprises. Otherwise, the work they do wouldn't make sense to offer for free.
arcanemachiner 4 hours ago
cubie 7 hours ago
I'm a big fan of their work as well, good shout.
Tepix 6 hours ago
It's insane how much traffic HF must be pushing out the door. I routinely download models that are hundreds of gigabytes in size from them. A fantastic service to the sovereign AI community.
razster 3 hours ago
My fear is that these large "AI" companies will lobby to have these open source options removed or banned - it's a growing concern. I can't say enough about how much I enjoy using what HF provides; I religiously browse their site for new and exciting models to try.
culi 3 hours ago
vardalab 4 hours ago
Yup, I have downloaded probably a terabyte in the last week, especially with the Step 3.5 model being released and Minimax quants. I wonder what my ISP thinks. I hope they don't cut me off. They gave me a fast lane, they better let me use it, lol
fc417fc802 2 hours ago
Onavo 3 hours ago
Bandwidth is not that expensive. The Big 3 clouds just want to milk customers via egress. Look at Hetzner or Cloudflare R2 if you want to get an idea of commodity bandwidth costs.
zozbot234 7 hours ago
> We still need good value hardware to run Kimi/GLM in-house
If you stream weights in from SSD storage and freely use swap to extend your KV cache it will be really slow (multiple seconds per token!) but run on basically anything. And that's still really good for stuff that can be computed overnight, perhaps even by batching many requests simultaneously. It gets progressively better as you add more compute, of course.
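For example, with the llama-cpp-python bindings (just one way of doing this; the model filename below is made up), memory-mapping the weights lets them page in from SSD on demand instead of having to fit in RAM:

    # Untested sketch: CPU-only, weights mmap'd from disk via llama-cpp-python.
    from llama_cpp import Llama

    llm = Llama(
        model_path="big-moe-model-q2_k.gguf",  # hypothetical local GGUF file
        n_gpu_layers=0,    # keep everything on the CPU / system RAM
        use_mmap=True,     # weights are paged in from SSD as needed
        n_ctx=4096,
    )

    # Overnight-batch style usage; expect seconds per token on weak hardware.
    out = llm("Summarize the following report:\n...", max_tokens=256)
    print(out["choices"][0]["text"])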
Aurornis 4 hours ago
> it will be really slow (multiple seconds per token!)
This is fun for proving that it can be done, but that's 100X slower than hosted models and 1000X slower than GPT-Codex-Spark.
That's like going from real time conversation to e-mailing someone who only checks their inbox twice a day if you're lucky.
HPsquared 7 hours ago
At a certain point the energy starts to cost more than renting some GPUs.
vardalab 4 hours ago
fc417fc802 2 hours ago
sowbug 7 hours ago
Why doesn't HF support BitTorrent? I know about hf-torrent and hf_transfer, but those aren't nearly as accessible as a link in the web UI.
embedding-shape 6 hours ago
> Why doesn't HF support BitTorrent?
Harder to track downloads then. Only when clients hit the tracker would they be able to get download stats, and forget about private repositories or the "gated" ones that Meta/Facebook does for their "open" models.
Still, if vanity metrics weren't so important, it'd be a great option. I've even thought of creating my own torrent mirror of HF to provide as a public service, as eventually access to models will be restricted, and it would be nice to be a bit better prepared for that moment.
sowbug 6 hours ago
taminka 4 hours ago
jimbob45 3 hours ago
homarp 4 hours ago
Fin_Code 6 hours ago
I still don't know why they are not running on torrents. It's the perfect use case.
heliumtera 6 hours ago
How can you be the man in the middle in a truly P2P environment?
freedomben 6 hours ago
That would shut out most people working for big corp, which is probably a huge percentage of the user base. It's dumb, but that's just the way corp IT is (no torrenting allowed).
zozbot234 6 hours ago
HanClinto 8 hours ago
I'm regularly amazed that HuggingFace is able to make money. It does so much good for the world.
How solid is its business model? Is it long-term viable? Will they ever "sell out"?
microsoftedging 6 hours ago
FT had a solid piece a few weeks back: "Why AI start-up Hugging Face turned down a $500mn Nvidia deal"
https://giftarticle.ft.com/giftarticle/actions/redeem/9b4eca...
jackbravo 6 hours ago
sounds very interesting, but even though it says giftarticle.ft, I got blocked by a paywall.
nerevarthelame 6 hours ago
culi 3 hours ago
bityard 5 hours ago
Their business model is essentially the same as GitHub. Host lots of stuff for free and build a community around it, sell the upscaled/private version to businesses. They are already profitable.
HanClinto 5 hours ago
This is what SourceForge did too, and they still had the DevShare adware thing, didn't they?
GitHub is great -- huge fan. To some degree they "sold out" to Microsoft and things could have gone more south, but thankfully Microsoft has ruled them with a very kind hand, and overall I'm extremely happy with the way they've handled it.
I guess I always retain a bit of skepticism with such things, and the long-term viability and goodness of such things never feels totally sure.
dmezzetti 8 hours ago
They have paid hosting - https://huggingface.co/enterprise and paid accounts. Also consulting services. Seems like a pretty good foundation to me.
julien_c 6 hours ago
and a lot of traction on paid (private in particular) storage these days; sneak peek at new landing page: https://huggingface.co/storage
heliumtera 6 hours ago
>Will they ever "sell out"?
Oh no, never. Don't worry, the usual investors are very well known for fighting for user autonomy (AMD, Nvidia, Intel, IBM, Qualcomm).
They are all very pro-consumer, and all the backers are certainly here for your enjoyment only.
zozbot234 6 hours ago
These are all big hardware firms, which makes a lot of sense as a classic 'commoditize the complement' play. Not exactly pro-consumer, but not quite anti-consumer either!
5o1ecist 4 hours ago
smallerize 3 hours ago
I_am_tiberius 8 hours ago
I once tried Hugging Face because I wanted to work through some tutorial. They wanted my credit card details during registration, as far as I remember. After a month they invoiced me some amount of money and I had no idea what it was for. To be honest, I don't understand what exactly they do and what services I was paying for, but I cancelled my account and never touched it again. For me it was a totally opaque process.
shafyy 8 hours ago
Their pricing seems pretty transparent: https://huggingface.co/pricing
mnewme 8 hours ago
Huggingface is the silent GOAT of the AI space, such a great community and platform
lairv 8 hours ago
Truly amazing that they've managed to build an open and profitable platform without shady practices
al_borland 8 hours ago
It’s such a sad state of affairs when shady practices are so normal that finding a company without them is noteworthy.
0xbadcafebee 5 hours ago
> The community will continue to operate fully autonomously and make technical and architectural decisions as usual. Hugging Face is providing the project with long-term sustainable resources, improving the chances of the project to grow and thrive. The project will continue to be 100% open-source and community driven as it is now.
I want this to be true, but business interests win out in the end. Llama.cpp is now the de-facto standard for local inference; more and more projects depend on it. If a company controls it, that means that company controls the local LLM ecosystem. And yeah, Hugging Face seems nice now... so did Google originally. If we all don't want to be locked in, we either need a llama.cpp competitor (with a universal abstraction), or it should be controlled by an independent nonprofit.
zozbot234 5 hours ago
Llama.cpp is an open source project that anyone can fork as needed, so any "control" over it really only extends to facilitating development of certain features.
0xbadcafebee 34 minutes ago
In practice, nobody does this, because you then have to keep the fork up to date with upstream plus your changes, and this is an endless amount of work.
jgrahamc 6 hours ago
This is great news. I've been sponsoring ggml/llama.cpp/Georgi since 2023 via GitHub. Glad to see this outcome. I hope you don't mind, Georgi, but I'm going to cancel my sponsorship now that you and the code have found a home!
moralestapia 5 minutes ago
I hope Georgi gets a big fat check out of this, he deserves it 100%.
forty 36 minutes ago
Looks like someone tried to type "Gmail" while drunk...
rkomorn 31 minutes ago
Looks like Gargamel of Smurfs fame to me.
beoberha 8 hours ago
Seems like a great fit - kinda surprised it didn’t happen sooner. I think we are deep in the valley of local AI, but I’d be willing to bet it breaks out in the next 2-3 years. Here’s hoping!
breisa 2 hours ago
I mean, they already supported the project quite a bit. @ngxson, and maybe others from Hugging Face, are big contributors to llama.cpp.
tkp-415 7 hours ago
Can anyone point me in the direction of getting a model to run locally and efficiently inside something like a Docker container on a system without much computing power (e.g. a MacBook M1 with 8GB of memory)?
Is my only option to invest in a system with more computing power? These local models look great, especially something like https://huggingface.co/AlicanKiraz0/Cybersecurity-BaronLLM_O... for assisting in penetration testing.
I've experimented with a variety of configurations on my local system, but in the end it turns into a makeshift heater.
0xbadcafebee 4 hours ago
8GB is not enough to do complex reasoning, but you could do very small simple things. Models like Whisper, SmolVLM, Qwen2.5-0.5B, Phi-3-mini, Granite-4.0-micro, Mistral-7B, Gemma3, Llama-3.2 all work on very little memory. Tiny models can do a lot if you tune/train them. They also need to be used differently: system prompt preloaded with information, few-shot examples, reasoning guidance, single-task purpose, strict output guidelines. See https://github.com/acon96/home-llm for an example. For each small model, check if Unsloth has a tuned version of it; it reduces your memory footprint and makes inference faster.
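As a rough illustration of that pattern (tiny instruct model, task-specific system prompt, a few-shot example, strict output format) using the transformers pipeline API on recent versions - the model here is just one of the small ones listed above:

    # Untested sketch of the "small model, narrow task" approach.
    from transformers import pipeline

    pipe = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

    messages = [
        # Preload the task and strict output rules in the system prompt.
        {"role": "system", "content": "You classify support emails. Reply with exactly one word: billing, technical, or other."},
        # One few-shot example to guide the model.
        {"role": "user", "content": "My invoice is wrong."},
        {"role": "assistant", "content": "billing"},
        {"role": "user", "content": "The app crashes on startup."},
    ]

    result = pipe(messages, max_new_tokens=5)
    print(result[0]["generated_text"][-1]["content"])  # expected: "technical"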
For your Mac, you can use Ollama, or MLX (Mac ARM specific, requires different engine and different model disk format, but is faster). Ramalama may help fix bugs or ease the process w/MLX. Use either Docker Desktop or Colima for the VM + Docker.
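And a minimal sketch of the MLX path, assuming the mlx-lm package and one of the pre-converted 4-bit models from the mlx-community org (exact API details vary a bit by mlx-lm version):

    # Untested sketch: MLX inference on Apple Silicon via mlx-lm.
    from mlx_lm import load, generate

    model, tokenizer = load("mlx-community/Llama-3.2-1B-Instruct-4bit")
    print(generate(model, tokenizer, prompt="Explain KV caching in one sentence.", max_tokens=100))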
For today's coding & reasoning models, you need a minimum of 32GB VRAM combined (graphics + system), the more in GPU the better. Copying memory between CPU and GPU is too slow, so the model needs to "live" in GPU space. If it can't fit all in GPU space, your CPU has to work hard, and you get a space heater. That Mac M1 will do 5-10 tokens/s with 8GB (and CPU on full blast), or 50 tokens/s with 32GB RAM (CPU idling). And now you know why there's a RAM shortage.
mft_ 7 hours ago
There’s no way around needing a powerful-enough system to run the model. So you either choose a model that can fit on what you have —i.e. via a small model, or a quantised slightly larger model— or you access more powerful hardware, either by buying it or renting it. (IME you don’t need Docker. For an easy start just install LM Studio and have a play.)
I picked up a second-hand 64GB M1 Max MacBook Pro a while back for not too much money for such experimentation. It’s sufficiently fast at running any LLM models that it can fit in memory, but the gap between those models and Claude is considerable. However, this might be a path for you? It can also run all manner of diffusion models, but there the performance suffers (vs. an older discrete GPU) and you’re waiting sometimes many minutes for an edit or an image.
ryandrake 6 hours ago
I wasn't able to have very satisfying success until I bit the bullet and threw a GPU at the problem. Found an actually reasonably priced A4000 Ada generation 20GB GPU on eBay and never looked back. I still can't run the insanely large models, but 20GB should hold me over for a while, and I didn't have to upgrade my 10 year old Ivy Bridge vintage homelab.
sigbottle 7 hours ago
Are Mac kernels optimized compared to CUDA kernels? I know the unified-memory approach is inherently slower, but I thought a ton of optimizations happen at the kernel level too (CUDA itself is a moat).
liuliu 2 hours ago
bigyabai 3 hours ago
zozbot234 7 hours ago
The general rule of thumb is that you should feel free to quantize even as low as 2 bits average if this helps you run a model with more active parameters. Quantized models are not perfect at all, but they're preferable to the models with fewer, bigger parameters. With 8GB usable, you could run models with up to 32B active at heavy quantization.
xrd 7 hours ago
I think a better bet is to ask on reddit.
https://www.reddit.com/r/LocalLLM/
Every time I ask the same thing here, people point me there.
yjftsjthsd-h 4 hours ago
With only 8 GB of memory, you're going to be running a really small quant, and it's going to be slow and lower quality. But yes, it should be doable. In the worst case, find a tiny gguf and run it on CPU with llamafile.
ontouchstart 5 hours ago
This is the easiest set up on a Mac. You need at least 16gb on a MacBook:
HanClinto 6 hours ago
Maybe check out Docker Model Runner -- it's built on llama.cpp (in a good way -- not like Ollama) and I think it handles most of what you're looking for?
https://www.docker.com/blog/run-llms-locally/
As far as how to find good models to run locally, I found this site recently, and I liked the data it provides:
Hamuko 5 hours ago
I tried to run some models on my M1 Max (32 GB) Mac Studio and it was a pretty miserable experience. Slow performance and awful results.
kristianp 2 hours ago
> Towards seamless “single-click” integration with the transformers library
That's interesting. I thought they would be somewhat redundant. They do similar things after all, except training.
cyanydeez 20 minutes ago
Is there a local webui that integrates with Hugging Face?
Ollama and webui seem to be rapidly losing their charm. Ollama now includes cloud APIs, which makes no sense for a local tool.
fancy_pantser 2 hours ago
Was Georgi ever approached by Meta? I wonder what they offered (I'm glad they didn't succeed, just morbid curiosity).
karmasimida 2 hours ago
Does local AI have a future? The models are getting ridiculously big, storage hardware is being hoarded by a few companies for the next 2 years, and Nvidia has stopped making consumer GPUs for this year.
It seems to me there is no chance local ML gets beyond toy status compared to the closed source offerings in the short term.
rhdunn 2 hours ago
Mistral have small variants (3B, 8B, 14B, etc.), as do others like IBM Granite and Qwen. Then there are finetunes based on these models, depending on your workflow/requirements.
dust42 an hour ago
I'm actually doing a good part of my dev now with Qwen3-Coder-Next on an M1 64GB with Qwen Code CLI (a fork of Gemini CLI). I very much like
a) having an idea of how many tokens I use,
b) being independent of VC-financed token machines, and
c) being able to use it on a plane/train.
Also, I never have to wait in a queue, nor will I be told to wait for a few hours. And I get many answers in a second. I don't do full vibe coding with a dozen agents though. I read all the code it produces and guide it where necessary.
Last but not least, at some point the VC-funded party will be over, and when that happens you'd better know how to be highly efficient with AI token use.
mattfrommars 3 hours ago
I don’t know if this warrants a separate thread here but I have to ask…
How can I realistically get involved in the AI development space? I feel left out with what's going on, living in a bubble where my employer forces AI on me (GitHub Copilot) and expects me to make use of it. What is a realistic roadmap to slowly get into AI development, whatever that means?
My background is full stack development in Java and React, albeit development is slow.
I’ve only messed with AI on very application side, created a local chat bot for demo purposes to understand what RAG is about to running models locally. But all of this is very superficial and I feel I’m not in the deep with what AI is about. I get I’m too ‘late’ to be on the side of building the next frontier model and makes no sense, what else can I do?
I know Python, next step is maybe do ‘LLM from scratch”? Or I pick up Google machine learning crash course certificate? Or do recently released Nvidia Certification?
I’m open for suggestions
fc417fc802 2 hours ago
I'm not entirely clear what your goals are, but roughly: just figure out an application that holds your interest and build a model for it from scratch. Probably don't start with an LLM though. Same as for anything else, really. If you're interested in computer graphics, decide on a small-scale project and go build it from scratch. Etc.
breisa 2 hours ago
Maybe look into model finetuning/distillation. Unsloth [1] has great guides and provides everything you need to get started on Google Colab for free. [1] https://unsloth.ai/
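A condensed sketch of what those guides walk you through (QLoRA-style finetuning via Unsloth; the model name and settings below are illustrative, not prescriptive):

    # Untested sketch based on the Unsloth Colab notebooks.
    from unsloth import FastLanguageModel

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="unsloth/Llama-3.2-1B-Instruct",  # any small base/instruct model
        max_seq_length=2048,
        load_in_4bit=True,   # 4-bit base weights keep this within a free Colab GPU
    )

    # Attach LoRA adapters; only these small matrices get trained.
    model = FastLanguageModel.get_peft_model(
        model,
        r=16,
        lora_alpha=16,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                        "gate_proj", "up_proj", "down_proj"],
    )

    # From here the notebooks hand the model and a dataset to a TRL SFTTrainer
    # for the actual training loop.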
the__alchemist 8 hours ago
Does anyone have a good comparison of HuggingFace/Candle to Burn? I am testing them concurrently, and Burn seems to have an easier-to-use API. (And can use Candle as a backend, which is confusing) When I ask on Reddit or Discord channels, people overwhelmingly recommend Burn, but provide no concrete reasons beyond "Candle is more for inference while Burn is training and inference". This doesn't track, as I've done training on Candle. So, if you've used both: Thoughts?
csunoser 6 hours ago
I have used both (albeit 2 years ago, and things change really fast). At the time, Candle didn't have 2D conv backprop with strides properly implemented. And getting Burn running with the libtch backend was just a lot simpler.
I did use candle for wasm based inference for teaching purposes - that was reasonably painless and pretty nice.
jimmydoe 8 hours ago
Amazing. I like the openness of both projects and am really excited for them.
Hopefully this does not mean consolidation because resources dried up, but a true fusion of the best of both.
androiddrew 7 hours ago
One of the few acquisitions I do support
lukebechtel an hour ago
Thank you Georgi <3
sheepscreek 5 hours ago
Curious about the financials behind this deal. Did they close above what they raised? What’s in it for HuggingFace?
stephantul 6 hours ago
Georgi is such a legend. Glad to see this happening
segmondy 7 hours ago
Great news! I have always worried about ggml and their long-term prospects, and wished for them to be rewarded for their effort.
dhruv3006 7 hours ago
Hugging Face is actually something that's driving good in the world. Good to see this collab.
superkuh 6 hours ago
I'm glad the llama.cpp and the ggml backing are getting consistent reliable economic support. I'm glad that ggerganov is getting rewarded for making such excellent tools.
I am somewhat anxious about "integration with the Hugging Face transformers library" and possible python ecosystem entanglements that might cause. I know llama.cpp and ggml already have plenty of python tooling but it's not strictly required unless you're quantizing models yourself or other such things.
dmezzetti 8 hours ago
This is really great news. I've been one of the strongest supporters of local AI, dedicating thousands of hours towards building a framework to enable it. I'm looking forward to seeing what comes of it!
logicallee 7 hours ago
>I've been one of the strongest supporters of local AI, dedicating thousands of hours towards building a framework to enable it.
Sounds like you're very serious about supporting local AI. I have a query for you (and anyone else who feels like donating) about whether you'd be willing to donate some memory/bandwidth resources p2p to hosting an offline model:
We have a local model we would like to distribute but don't have a good CDN.
As a user/supporter question: would you be willing to donate some spare memory/bandwidth via a simple dedicated browser tab you keep open on your desktop? It plays silent audio (so it doesn't get put in the background and unloaded), allocates 100 MB-1 GB of RAM, and acts as a WebRTC peer serving checksummed models.[1] (Then our server only has to check that you still have the file from time to time, by sending you some salt and a byte range to hash; your tab proves it still has the data by returning the right hash.) This doesn't require any trust, and the receiving user will also hash the download and report if there's a mismatch.
Our server federates the p2p connections, so when someone downloads, they do so from a trusted peer (one who has contributed and passed the audits) like you. We considered building a binary for people to run, but we figured people couldn't trust our binaries, or that someone would target our build process somehow; we are paranoid about trust, whereas a web model is inherently untrusted and therefore safer. Why do all this?
The purpose of this would be to host an offline model: we successfully ported a 1 GB model from C++ and Python to WASM and WebGPU (you can see Claude doing so here, we livestreamed some of it[2]), but the model weights at 1 GB are too much for us to host.
Please let us know whether this is something you would contribute a background tab to hosting on your desktop. It wouldn't impact you much and you could set how much memory to dedicate to it, but you would have the good feeling of knowing that you're helping people run a trusted offline model if they want - from their very own browser, no download required. The model we ported is fast enough for anyone to run on their own machines. Let me know if this is something you'd be willing to keep a tab open for.
[1] filesharing over webrtc works like this: https://taonexus.com/p2pfilesharing/ you can try it in 2 browser tabs.
[2] https://www.youtube.com/watch?v=tbAkySCXyp0 and some other videos
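For anyone wondering what the salt-plus-chunk audit above amounts to, here is a rough sketch in plain Python (illustration only; in our setup the peer side would run in the browser tab):

    # Untested sketch of a challenge-response storage audit.
    import hashlib, os

    CHUNK = 1 << 20  # 1 MiB

    def make_challenge(file_bytes: bytes):
        # Server side: pick a random salt and byte range, remember the expected hash.
        salt = os.urandom(16)
        offset = int.from_bytes(os.urandom(4), "big") % max(1, len(file_bytes) - CHUNK)
        expected = hashlib.sha256(salt + file_bytes[offset:offset + CHUNK]).hexdigest()
        return salt, offset, expected

    def answer_challenge(file_bytes: bytes, salt: bytes, offset: int) -> str:
        # Peer side: prove the data is still held by hashing salt + the requested chunk.
        return hashlib.sha256(salt + file_bytes[offset:offset + CHUNK]).hexdigest()

    # The peer passes the audit iff answer_challenge(...) == expected.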
echoangle 6 hours ago
Maybe stupid question but why not just put it in a torrent?
liuliu 5 hours ago
logicallee 6 hours ago
HanClinto 6 hours ago
Hosting model weights for projects like this is, I think, something you could do by uploading to a Hugging Face space?
What services would you need that Hugging Face doesn't provide?
liuliu 6 hours ago
> We have a local model we would like to distribute but don't have a good CDN.
That is not true. I am serving models off Cloudflare R2. It is 1 petabyte per month in egress use and I basically pay peanuts (~$200 everything included).
logicallee 6 hours ago
geooff_ 8 hours ago
As someone who's been in the "AI" space for a while, it's strange how Hugging Face went from one of the biggest names to not being part of the discussion at all.
r_lee 8 hours ago
I think that's because there's less local AI usage now, since there are all kinds of image models from the big labs, so there's really no rush of people self-hosting Stable Diffusion etc. anymore.
The space moved from consumer to enterprise pretty fast due to models getting bigger.
zozbot234 8 hours ago
Today's free models are not really bigger when you account for the use of MoE (with ever increasing sparsity, meaning a smaller fraction of active parameters), and better ways of managing KV caching. You can do useful things with very little RAM/VRAM, it just gets slower and slower the more you try to squeeze it where it doesn't quite belong. But that's not a problem if you're willing to wait for every answer.
r_lee 4 hours ago
segmondy 7 hours ago
Part of what discussion? Anyone in the AI space knows and uses HF, but the public doesn't care, and why should they? It's just an advanced site where nerds download AI stuff. HF is super valuable with their transformers library, their code, tutorials, smol models, etc., but how does that translate to investor dollars?
LatencyKills 8 hours ago
It isn't necessary to be part of the discussion if you are truly adding value (which HF continues to do). It's nice to see a company doing what it does best without constantly driving the hype train.
option 7 hours ago
Isn't HF banned in China? Also, how are many Chinese labs on Twitter all the time?
In either case - huge thanks to them for keeping AI open!
dragonwriter 6 hours ago
> Isn't HF banned in China?
I think, for some definition of “banned”, that’s the case. It doesn’t stop the Chinese labs from having organization accounts on HF and distributing models there. ModelScope is apparently the HF-equivalent for reaching Chinese users.
disiplus 7 hours ago
I think in the West we think everything is blocked. But for example, if you book an eSIM, when you visit you already get direct access to Western services because they route it to some other server. Hong Kong is totally different: they basically use WhatsApp and Google Maps, and everything worked when I was there.
embedding-shape 6 hours ago
But also yes, parent is right: HF is more or less inaccessible, and ModelScope is frequently cited as the mirror to use (although many Chinese labs seem to treat HF as the mirror and ModelScope as the "real" origin).
woadwarrior01 7 hours ago
HF is indeed banned in China. The Chinese equivalent of HF is ModelScope[1].
periodjet 6 hours ago
Prediction: Amazon will end up buying HuggingFace. Screenshot this.
ukblewis 5 hours ago
Honestly I'm shocked to be the only one I see of this opinion: HuggingFace's `accelerate`, `transformers` and `datasets` have been some of the worst open source Python libraries I have ever had to use. They break backwards compatibility constantly, even on APIs that are not underscore/dunder named, even on minor version releases, without documenting it; they refuse PRs fixing their lack of `overloads` type annotations, which breaks type checking on their libraries; and they just generally seem to have spaghetti code. I am not excited that another team is joining them and consolidating more engineering might in the hands of these people.
ukblewis 5 hours ago
And clearly I say all of this in my name and not my employers name
ukblewis 5 hours ago
And I said all of that despite us continuing to use their platform and libraries extensively… We just don’t have a choice due to their dominance of open source ML
rvz 8 hours ago
This acquisition is almost the same as the acquisition of Bun by Anthropic.
Both are $0-revenue "companies", but both have created software that is essential to the wider ecosystem and carries mindshare value: Bun for JavaScript and ggml for AI models.
But of course the VCs needed an exit sooner or later. That was inevitable.
andsoitis 7 hours ago
I believe ggml.ai was funded by angel investors, not VC.