Running Gemma 4 locally with LM Studio's new headless CLI and Claude Code (ai.georgeliu.com)

382 points by vbtechguy a day ago

d4rkp4ttern 8 hours ago

You can use llama.cpp's server directly to serve local LLMs and use them in Claude Code or other CLI agents. I've collected full setup instructions for Gemma 4 and other recent open-weight LLMs here, tested on my M1 Max 64 GB MacBook:

https://pchalasani.github.io/claude-code-tools/integrations/...

The 26B-A4B variant is the most interesting to run on such hardware, and I get nearly double the token-gen speed (40 tok/s) compared to Qwen3.5 35B-A3B. However, the tau2-bench results[1] for this Gemma 4 variant lag far behind the Qwen variant (68% vs 81%), so I don't expect the former to do well on tool-heavy agentic tasks:

[1] https://news.ycombinator.com/item?id=47616761
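Roughly, the setup looks like this (model id, quant, and port here are placeholders; see the linked instructions for the exact flags):

```shell
# Serve a quantized model locally with llama.cpp (model id/quant is an assumption)
llama-server -hf unsloth/gemma-4-26B-A4B-it-GGUF:Q4_K_M --port 8080

# Point Claude Code at the local endpoint instead of api.anthropic.com
export ANTHROPIC_BASE_URL=http://localhost:8080
export ANTHROPIC_AUTH_TOKEN=local-dummy-key
claude
```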

peder 6 hours ago

Did you have any Anthropic-vs-OpenAI API spec issues with Claude Code? I have been using mlx_vlm and vMLX, and I get 400 Bad Request errors from Claude Code. Presumably you're not seeing those issues with llama-server?

d4rkp4ttern 4 hours ago

Correct, no issues: for a few months now, llama.cpp's server has exposed an Anthropic Messages API at /v1/messages, in addition to the OpenAI-compatible API at /v1/chat/completions. Claude Code uses the former.
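A quick sanity check against the local endpoint looks like this (port and model name are whatever your server uses; this just shows the request shape):

```shell
# Anthropic-style Messages request against a local llama-server
curl -s http://localhost:8080/v1/messages \
  -H 'content-type: application/json' \
  -d '{"model": "gemma", "max_tokens": 64,
       "messages": [{"role": "user", "content": "Say hi"}]}'
```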

selectodude 5 hours ago

I’ve jumped over to oMLX. A ton of rough edges but I think it’s the future.

seifbenayed1992 12 hours ago

Local models are finally starting to feel pleasant instead of just "possible." The headless LM Studio flow is especially nice because it makes local inference usable from real tools instead of as a demo.

Related note from someone building in this space: I've been working on cloclo (https://www.npmjs.com/package/cloclo), an open-source coding agent CLI, and this is exactly the direction I'm excited about. It natively supports LM Studio, Ollama, vLLM, Jan, and llama.cpp as providers alongside cloud models, so you can swap between local and hosted backends without changing how you work.

Feels like we're getting closer to a good default setup where local models are private/cheap enough to use daily, and cloud models are still there when you need the extra capability.

SeriousM 10 hours ago

How does cloclo differ from pi-mono?

seifbenayed1992 2 hours ago

pi-mono is a great toolkit — coding agent CLI, unified LLM API, web UI, Slack bot, vLLM pods.

cloclo is a runtime for agent toolkits. You plug it into your own agents and it gives them multi-agent orchestration (AICL protocol), 13 providers, skill registry, native browser/docs/phone tools, memory, and an NDJSON bridge. Zero native deps.

hackerman70000 11 hours ago

The real story here isn't Gemma 4 specifically, it's that the harness and the model are now fully decoupled. Claude Code, OpenCode, Pi, and Codex all work with any backend. The coding agent is becoming a commodity layer and the competition is moving to model quality and cost. Good for users, bad for anyone whose moat was the harness.

satvikpendem 5 hours ago

Sounds like the exact opposite: models are being commoditized, while the harness and tooling around a model is what actually gets significant gains, especially with RL around specific models.

For example, this article was posted recently: "Improving 15 LLMs at Coding in One Afternoon. Only the Harness Changed" [0].

[0] https://news.ycombinator.com/item?id=46988596

bckr 4 hours ago

I think it’s ALL getting commoditized. The winners here are engineers (who are on board with the agentic surge) and, hopefully, users who get more and better software.

Havoc 4 hours ago

You could always point Claude Code and OpenCode at a local HTTP endpoint.

trvz a day ago

  ollama launch claude --model gemma4:26b

gcampos 19 hours ago

You need to increase the context window size or the tool-calling feature won't work.

mil22 18 hours ago

For those wondering how to do this:

  OLLAMA_CONTEXT_LENGTH=64000 ollama serve
or if you're using the app, open the Ollama app's Settings dialog and adjust there.

Codex also works:

  ollama launch codex --model gemma4:26b

datadrivenangel a day ago

It's amazing how simple this is, and it just works if you have ollama and claude installed!

pshirshov a day ago

For some reason, that doesn't work for me; claude gets stuck in some loop and never returns. Nemotron, GLM and Qwen 3.5 work just fine; Gemma doesn't.

trvz a day ago

Since that defaults to the q4 variant, try the q8 one:

  ollama launch claude --model gemma4:26b-a4b-it-q8_0

martinald a day ago

Just FYI, MoE doesn't really save (V)RAM. You still need all the weights loaded in memory; it just means you consult fewer of them per forward pass. So it improves tok/s but not VRAM usage.
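Back-of-envelope for a 26B-total / 4B-active MoE at ~4-bit quantization (0.5 bytes/param; numbers are rough, just to show why memory tracks total parameters while speed tracks active ones):

```shell
awk 'BEGIN {
  total = 26; active = 4; bytes_per_param = 0.5   # billions of params, q4-ish
  # All weights must be resident, so memory footprint follows TOTAL params:
  printf "resident weights: %.1f GB\n", total * bytes_per_param
  # Per-token compute follows ACTIVE params, which is where the tok/s win comes from:
  printf "weights touched per token: %.1f GB\n", active * bytes_per_param
}'
```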

functional_dev 4 hours ago

This confused me at first as well: inactive experts skip compute, but the weights are still loaded, so memory doesn't shrink at all.

I found this visualisation helpful - https://vectree.io/c/sparse-activation-patterns-and-memory-e...

IceWreck a day ago

It does if you use an inference engine where you can offload some of the experts from VRAM to CPU RAM. That means I can fit a 35-billion-param MoE in, say, a 12 GB GPU plus 16 GB of system RAM.
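With llama.cpp, that split is a couple of flags (flag names as in recent builds, so double-check against `llama-server --help`; the model id is a placeholder):

```shell
# Keep attention/shared layers on the GPU, push MoE expert tensors to system RAM.
llama-server -hf unsloth/gemma-4-26B-A4B-it-GGUF:Q4_K_M \
  --n-gpu-layers 99 \
  --n-cpu-moe 20   # number of layers whose expert tensors stay on the CPU
```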

Yukonv a day ago

With that you're taking a significant performance penalty and become severely I/O-bottlenecked. I've been able to stream Qwen3.5-397B-A17B from my M5 Max (12 GB/s SSD read) using the Flash MoE technique at the brisk pace of 10 tokens per second. As tokens are generated, different experts need to be consulted, resulting in a lot of I/O churn. So while feasible, it's only good for batch jobs, not interactive usage.

charcircuit a day ago

You never need to have all weights in memory. You can swap them in from RAM, disk, the network, etc. MoE reduces the amount of data that needs to be swapped in for the next forward pass.

martinald a day ago

Yes, you're right technically, but in reality you'd be swapping the (vast?) majority of them in and out per inference request, which would create an enormous bottleneck for the author's use case.

edinetdb 18 hours ago

Claude Code has become my primary interface for iterating on data pipeline work — specifically, normalizing government regulatory filings (XBRL across three different accounting standards) and exposing them via REST and MCP.

The MCP piece is where the workflow gets interesting. Instead of building a client that calls endpoints, you describe tools declaratively and the model decides when to invoke them. For financial data this is surprisingly effective — a query like "compare this company's leverage trend to sector peers over 10 years" gets decomposed automatically into the right sequence of tool calls without you hardcoding that logic.

One thing I haven't seen discussed much: tool latency sensitivity is much higher in conversational MCP use than in batch pipelines. A 2s tool response feels fine in a script but breaks conversational flow. We ended up caching frequently accessed tables in-memory (~26MB) to get sub-100ms responses. Have you noticed similar thresholds where latency starts affecting the quality of the model's reasoning chain?
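The caching itself is nothing fancy; the idea in miniature, as an in-process memo table (bash sketch; `slow_query` is a stand-in for whatever actually backs the tool):

```shell
declare -A cache   # in-process memo table (bash 4+)

lookup() {
  local key=$1
  if [[ -z ${cache[$key]+set} ]]; then
    cache[$key]=$(slow_query "$key")   # only hit the slow path on a miss
  fi
  printf '%s\n' "${cache[$key]}"
}
```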

tatrions 16 hours ago

Interesting question. I've seen the threshold land around 300-500ms per tool call in practice. Below that, multi-step chains feel fluid. Above it the compounding gets you -- a 20-step chain at 2s/call is 40s wall time minimum, and I've noticed models tend to generate more filler reasoning between slow tool calls that just bloats context without adding value.

Your caching approach sounds right. The other thing that made a big difference for me was reducing round trips -- bundling related data into a single tool response (table + schema + metadata in one call vs three separate calls) helped more than speeding up individual calls.

mjlee 8 hours ago

I find MCP beneficial too, but do be aware of token usage. With a naive implementation MCP can use significantly more input tokens (and context) than equivalent skills would. With a handful of third party MCPs I’ve seen tens of thousands of tokens used before I’ve started anything.

Here’s an article from Anthropic explaining why, but it is 5 months old so perhaps it's irrelevant ancient history at this point.

https://www.anthropic.com/engineering/code-execution-with-mc...

drob518 4 hours ago

Seems like this might be a great way to do web software testing. We’ve had Selenium and Puppeteer for a long time but they are a bit brittle with respect to the web design. Change something about the design and there’s a high likelihood that a test will break. Seems like this might be able to be smarter about adapting to changes. That’s also a great use for a smaller model like this.

robot_jesus an hour ago

Yeah. I think that's an interesting use case. Especially if I can kick it off or schedule it when I'm not actively working. Inference speed (especially with tool calling involved) won't be great on my machines, but if I schedule nightly usability tests of dev sites while I sleep, that could be really cool.

drob518 20 minutes ago

You’re right about inference speed being a concern. I was assuming it’s a small model but even then, one of the browser automation frameworks is going to be faster.

vbtechguy a day ago

Here is how I set up Gemma 4 26B for local inference on macOS that can be used with Claude Code.

canyon289 a day ago

This is a nice writeup!

ttul 5 hours ago

I could see a future in which the major AI labs run a local LLM to offload much of the computational effort currently undertaken in the cloud, leaving the heavy lifting to cloud-hosted models and the easier stuff to local inference.

dominotw 5 hours ago

Wouldn't that be counter to their whole business model?

ttul 4 hours ago

I don't think so. Acquiring hardware for inference is a chokepoint on growth. If they can offload some inference to the customer's machine, that allows them to use more of their online capacity to generate money.

jonplackett a day ago

So wait what is the interaction between Gemma and Claude?

unsnap_biceps a day ago

LM Studio offers an Anthropic-compatible local endpoint, so you can point Claude Code at it and it'll use your local model for its requests. However, I've had a lot of problems with LM Studio and Claude Code losing its place: it'll think for a while, come up with a plan, start to do it, and then just halt in the middle. I'll ask it to continue and it'll do a small change and get stuck again.

Using Ollama's API doesn't have the same issue, so I've stuck with Ollama for local development work.

keerthiko a day ago

Claude Code is fairly notoriously token-inefficient as far as coding agents/harnesses go (I come from aider, pre-CC). It's only viable because the Max subscriptions give you an approximately unlimited token budget, which resets every few hours even if you hit the limit. But this also only works because cloud models have massive context windows (1M tokens on Opus right now), which is a bit difficult to make happen locally with the VRAM needed.

And if you somehow managed to open up a big enough VRAM playground, the open weights models are not quite as good at wrangling such large context windows (even opus is hardly capable) without basically getting confused about what they were doing before they finish parsing it.

mbesto a day ago

I don't get why I would use Claude Code when OpenCode, Cursor, Zed, etc. all exist, are "free" and work with virtually any LLM. Seems like a weird use case unless I'm missing something.

Imanari 6 hours ago

How well do the Gemma 4 models perform on agentic coding? What are your impressions?

asymmetric a day ago

Is a framework desktop with >48GB of RAM a good machine to try this out?

pshirshov 20 hours ago

Only for chat sessions, not for agentic coding. It's just too slow to be practical (10 minutes to answer a simple question about a 2k LoC project - and that's with a 5070 addon card).

ac29 13 hours ago

This article is about a MoE model with only 4B active parameters, it shouldn't take 10 minutes to answer a question about a small project.

I measured a 4bit quant of this model at 1300t/s prefill and ~60t/s decode on Ryzen 395+.
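For reference, llama.cpp ships llama-bench for exactly this kind of prefill/decode measurement (the model path is a placeholder):

```shell
# 512-token prefill and 128-token generation benchmark
llama-bench -m gemma-4-26B-A4B-it-Q4_K_M.gguf -p 512 -n 128
```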

nl 16 hours ago

Doesn't the framework desktop have a Ryzen 395 AI? That's a unified memory architecture like the Macs.

janalsncm 12 hours ago

Qwen3-coder has been better for coding in my experience and has similar sizes. Either way, after a bunch of frustration with the quality and price of CC lately I’m happy there are local options.

AbuAssar 11 hours ago

oMLX gives better performance than Ollama on Apple Silicon.

Someone1234 a day ago

Using Claude Code seems like a popular frontend currently, I wonder how long until Anthropic releases an update to make it a little to a lot less turn-key? They've been very clear that they aren't exactly champions of this stuff being used outside of very specific ways.

nerdix a day ago

I don't think there is any incentive to do so right now because the open models aren't as good. The vast majority of businesses are going to just pay the extra cost for access to a frontier model. The model is what gives them a competitive advantage, not the harness. The harness is a lot easier to replicate than Opus.

There are benefits too. Some developers might learn to use Claude Code outside of work with cheaper models and then advocate for using Claude Code at work (where their companies will just buy access from Anthropic, Bedrock, etc). Similar to how free ESXi licenses for personal use helped infrastructure folks gain skills with that product which created a healthy supply of labor and VMware evangelists that were eager to spread the gospel. Anthropic can't just give away access to Claude models because of cost so there is use in allowing alternative ways for developers to learn how to use Claude Code and develop a workflow with it.

deskamess 20 hours ago

Are the Claude Code (desktop) models very different from what Bedrock has? I thought you could hook up VSCode (not Claude Desktop) to Bedrock Anthropic models. Are there features in Claude Desktop that are not in VSCode/cli?

chvid a day ago

Is it not about the same as using OpenCode?

And is running a local model with Claude Code actually usable for any practical work compared to the hosted Anthropic models?

falcor84 a day ago

Well, if they did, it would probably be shooting themselves in the foot, seeing that the Claude Code source is out there now, and people are waiting for an excuse to "clean-room" reimplement and fork it

moomin a day ago

Right now it suits them down to the ground. You pay for the product and you don’t cost their servers anything.

phainopepla2 a day ago

You don't pay anything to use Claude Code as a front end to non-Anthropic models

alfiedotwtf 12 hours ago

Yet Codex specifically aims to be compatible with all backends! It's been pretty solid up until Gemma 4, but it totally fails with an unknown-tool error (I'm guessing a template issue).

wyre a day ago

I think CC is popular because they are catering to the common denominator programmer and are going to continue to do that, not because CC is particularly turn-key.

jedisct1 4 hours ago

Running Gemma 4 with llama.cpp and Swival:

  $ llama-server --reasoning auto --fit on -hf unsloth/gemma-4-26B-A4B-it-GGUF:UD-Q4_K_XL --temp 1.0 --top-p 0.95 --top-k 64

  $ uvx swival --provider llamacpp

Done.

aetherspawn 20 hours ago

Can you use the smaller Gemma 4B model as speculative decoding for the larger 31B model?

Why/why not?
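In llama.cpp terms I imagine it would look something like this (flag names from recent builds, model paths are placeholders; I believe the draft and target models also need to share a tokenizer/vocab):

```shell
llama-server -m gemma-4-26b-a4b-q4.gguf \
  --model-draft gemma-4-4b-q4.gguf \
  --draft-max 16 --draft-min 4
```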

tiku 6 hours ago

I hate that my M5 with 24 GB has so much trouble with these models. Not getting any good speeds, even with simple models.

inzlab 19 hours ago

Awesome. The lighter the hardware that can run big software, the better.

NamlchakKhandro 19 hours ago

I don't know why people bother with Claude code.

It's so jank, and there are far superior CLI coding harnesses out there.

loveparade 19 hours ago

What do you recommend? I've tried both pi and opencode and both are better than claude imo, but I wonder if there are others.

tarruda 19 hours ago

Codex is the best out-of-box experience, especially due to its built-in sandboxing. The only drawback is that its edit tool requires the LLM to output a diff, which only GPTs are trained to do correctly.

dimgl 19 hours ago

Vagueposting on Hacker News?

z0mghii 19 hours ago

Can you elaborate what is jank about it?

threethirtytwo 11 hours ago

It has visual artifacts during inference.

smcleod 9 hours ago

Did you try the MLX model instead? In general, MLX tends to provide much better performance than GGUF/llama.cpp on macOS.