OpenCode – Open source AI coding agent (opencode.ai)

1146 points by rbanffy a day ago

logicprog 21 hours ago

OpenCode was the first open source agent I used, and my main workhorse after experimenting briefly with Claude Code and realizing the potential of agentic coding. Because of that, and because it's a popular open source alternative, I want to be able to recommend it and be enthusiastic about it. The problem for me is that the development practices of the people working on it are suboptimal at best: they release at an extremely high cadence, without taking the time to test or fix things (or even to build a proper changelog for each release), and they add, remove, refine, change, fix, and break features constantly at that accelerated pace.

More than that, it's an extremely large and complex TypeScript code base — probably larger and more complex than it needs to be — and (partly as a result) it's fairly resource inefficient (often uses 1GB of RAM or more. For a TUI).

On top of that, at least I personally find the TUI to be overbearing and a little bit buggy, and the agent to be so full of features that I don't really need — also mildly buggy — that it sort of becomes hard to use and remember how everything is supposed to work and interact.

jmmv 10 hours ago

> and (partly as a result) it's fairly resource inefficient (often uses 1GB of RAM or more. For a TUI).

That's (one of the reasons) why I'm favoring Codex over Claude Code.

Claude Code is an... Electron app (for a TUI? WTH?) and Codex is Rust. The difference is tangible: the former feels sluggish and does some odd redrawing when the terminal size changes, while the latter definitely feels more snappy to me (leaving aside that GPT's responses also seem more concise). At some point, I had both chewing concurrently on the same machine and same project, and Claude Code was using multiple GBs of RAM and 100% CPU whereas Codex was happy with 80 MB and 6%.

Performance _is_ a feature, and I'm afraid the amount of code AI produces without supervision leads to an amount of bloat we haven't seen before...

ctmnt 8 hours ago

I think you’re confusing capital c Claude Code, the desktop Electron app, and lowercase c `claude`, the command line tool with an interactive TUI. They’re both TypeScript under the hood, but the latter is React + Ink rendered into the terminal.

The redraw glitches you’re referring to are actually signs of what I consider to be a pretty major feature, a reason to use `claude` instead of `codex` or `opencode`: `claude` doesn’t use the alternate screen, whereas the other two do. Meaning that it uses the standard screen buffer, meaning that your chat history is in the terminal (or multiplexer) scrollback. I much prefer that, and I totally get why they’ve put so much effort into getting it to work well.

In that context handling SIGWINCH has some issues and trickiness. Well worth the tradeoff, imo.
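The alternate-screen distinction above can be sketched with the standard DEC private mode 1049 escape sequences. This is just a minimal illustration of the mechanism, not code from any of these tools, and the `withAltScreen` helper name is made up:

```typescript
// DEC private mode 1049: switch to/from the terminal's alternate screen buffer.
// TUIs that enter it (codex, opencode) draw on a clean canvas, but everything
// they drew vanishes from view on exit; `claude` stays on the main buffer,
// so its chat history survives in the terminal's (or multiplexer's) scrollback.
const ENTER_ALT_SCREEN = "\x1b[?1049h"; // save cursor, switch to alternate buffer
const LEAVE_ALT_SCREEN = "\x1b[?1049l"; // restore main buffer and cursor

// Hypothetical helper: run a draw callback inside the alternate screen.
function withAltScreen(draw: () => void): void {
  process.stdout.write(ENTER_ALT_SCREEN);
  try {
    draw(); // anything drawn here leaves no trace in scrollback
  } finally {
    process.stdout.write(LEAVE_ALT_SCREEN);
  }
}

withAltScreen(() => process.stdout.write("transient UI frame\n"));
```

Staying on the main buffer instead means redrawing in place on resize, which is where the SIGWINCH trickiness comes from.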

conradev an hour ago

jimmydoe 7 hours ago

petcat 9 hours ago

Anthropic needs to spend some tokens rewriting Claude Code in Rust (yes, really).

The difference in feel between Codex and Claude Code is obvious.

The whole thing is vibed anyway, I'm sure they could get it done in a week or two for their quality standards.

MithrilTuxedo an hour ago

seunosewa 8 hours ago

leonardcser 6 hours ago

phillipcarter 4 hours ago

doug_durham 6 hours ago

I run many instances of Claude Code simultaneously and have not experienced what you are seeing. It sounds like you have a bias toward Rust over TypeScript.

jazzypants 5 hours ago

smugtrain 2 hours ago

On the 100% cpu issue, I’m curious to know, what is the processor and was it performing any other cpu intensive work?

RagnarD 10 hours ago

Totally agree. I'm baffled by those who don't clearly see that Codex works better than C.C. in many ways.

Aeolun 9 hours ago

trq_ 5 hours ago

Claude Code is not an electron app.

tmstieff 5 hours ago

1dom 18 minutes ago

> Due to that, and because it's a popular an open source alternative, I want to be able to recommend it and be enthusiastic about it. The problem for me is that the development practices of the people that are working on it are suboptimal at best;

This is my experience with most AI tools that I spend more than a few weeks with. It's happening so often it's making me question my own judgement: "if everything smells of shit, check your own shoes." I left professional software engineering a couple of years ago, and I don't know how much of this is also just me losing touch with the profession, or being an old man moaning about how we used to do it better.

It reminds me of social media: there was a time when social media platforms were defined by their features. Vine was short video, Snapchat was disappearing pictures, Twitter was short status posts, etc. But now they're all bloated messes that try to do everything.

The same looks to be happening with AI and agent software. They start off defined by one feature, and then become messes trying to implement the latest AI approach (skills, or tools, or functions, or RAG, or AGENTS.md, or claws etc. etc.)

rbehrends 20 hours ago

I am more concerned about their, umm, gallant approach to security. Not only is OpenCode permissive by default in what it's allowed to do, it also apparently tries to pull its config from the web (a provider-based URL) by default [1]. There is also this open GitHub issue [2], which I find quite concerning (worst case, it's an RCE vulnerability).

[1] https://opencode.ai/docs/config/#precedence-order

[2] https://github.com/anomalyco/opencode/issues/10939

heavyset_go 16 hours ago

It also sends all of your prompts to Grok's free tier by default, and the free tier trains on your submitted information; xAI can do whatever they want with that, including building ad profiles, etc.

You need to set an explicit "small model" in OpenCode to disable that.
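For reference, opting out looks something like this in an `opencode.json` config. This is a sketch: the `small_model` key is the one discussed here, but the schema URL and model IDs are placeholders you'd swap for your own provider/models:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "anthropic/claude-sonnet-4-5",
  "small_model": "anthropic/claude-haiku-4-5"
}
```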

integralid 15 hours ago

mrighele 7 hours ago

vbernat 11 hours ago

adam_mckenna 13 hours ago

indigodaddy 4 hours ago

rsanheim 11 hours ago

ct520 18 hours ago

I second that.

Have fun on windows - automatic no from me. https://github.com/anomalyco/opencode/issues?q=is%3Aissue%20...

larschdk 12 hours ago

foxygen 18 hours ago

woctordho 19 hours ago

RCE is exactly the point of coding agents. I'm happy that I don't need to launch OpenCode with --dangerously-skip every time.

mrln 11 hours ago

TZubiri 17 hours ago

I assign a specific user for it, which doesn't have much access to my files. So what I want is complete autonomy.

westoque 20 hours ago

> The problem for me is that the development practices of the people that are working on it are suboptimal at best; they're constantly releasing at an extremely high cadence, where they don't even spend the time to test or fix things (or even build a proper list of changes for each release), and they add, remove, refine, change, fix, and break features constantly at that accelerated pace.

This is what I notice with openclaw as well. There have been releases where they break production features. Unfortunately, this is what happens when code becomes a commodity: everyone thinks shipping fast is the moat, at the expense of quality, since they know a fix can be shipped quickly in the next release.

siddboots 19 hours ago

Openclaw has 20k commits, almost 700k lines of code, and it is only four months old. I feel confident that that sort of code base would have no coherent architecture at all, and also that no human has a good mental model of how the various subsystems interact.

I’m sure we’ll all learn a lot from these early days of agentic coding.

girvo 16 hours ago

blks 11 hours ago

bredren 16 hours ago

Claude Code breaks production features and doesn't say anything about it. The product has just shifted gears with little to no ceremony.

I expect that from something guiding the market, but there have been times where stuff changes, and it isn't even clear if it is a bug or a permanent decision. I suspect they don't even know.

heavyset_go 15 hours ago

We're still in the very early days of generative AI, and people and markets are already prioritizing quality over quantity. Quantity is irrelevant when it comes to value.

All code is not fungible: "irrelevant code that kinda looks okay at first glance" might be a commodity, but well-tested, well-designed and well-understood code is what's valuable.

danielovichdk 13 hours ago

the_black_hand 13 hours ago

It's understandable and even desirable that a new piece of code rapidly evolves as they iterate and fix bugs. I'd only be concerned if they keep this pattern up for too long. In the early phases, I like keeping up with all the cutting-edge developments. Projects where devs get afraid to ship for fear of breaking things end up bloated with unnecessary backward compatibility.

paustint 20 hours ago

I recently listened to this episode from the Claude Code creator (here is the video version: https://www.youtube.com/watch?v=PQU9o_5rHC4) and it sounded like their development process was somewhat similar - he said something like their entire codebase has 100% churn every 6 months. But I would assume they have a more professional software delivery process.

I would (incorrectly) assume that a product like this would be heavily tested via AI - why not? AI should be writing all the code, so why would the humans not invest in and require extreme levels of testing since AI is really good at that?

causal 17 hours ago

I've gotta say, it shows. Claude Code has a lot of stupid regressions on a regular basis, shit that the most basic test harness should catch.

mattmanser 12 hours ago

logicprog 20 hours ago

I mean, I'm slowly trying to learn lightweight formal methods (i.e. what stuff like Alloy or Quint do), behavior driven development, more advanced testing systems for UIs, red-green TDD, etc, which I never bothered to learn as much before, precisely because they can handle the boilerplate aspects of these things, so I can focus on specifying the core features or properties I need for the system, or thinking through the behavior, information flow, and architecture of the system, and it can translate that into machine-verifiable stuff, so that my code is more reliable! I'm very early on that path, though. It's hard!
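At its most minimal, the "specify a property, let the machine verify it" idea looks something like this. A toy sketch of my own, not Alloy or Quint, and no framework: a hand-rolled property check that `path.normalize` is idempotent, the kind of scaffold an agent can expand into a real suite:

```typescript
// Property under test: normalizing a path twice gives the same result as
// normalizing it once (idempotence). We check it over a sample of inputs.
import { normalize } from "node:path";

function checkIdempotent(samples: string[]): void {
  for (const p of samples) {
    const once = normalize(p);
    const twice = normalize(once);
    if (once !== twice) {
      throw new Error(`normalize not idempotent for ${JSON.stringify(p)}`);
    }
  }
}

// A property-based testing library would generate these samples randomly;
// here they are hand-picked edge cases.
checkIdempotent(["a//b", "./x/../y", "foo/bar/", "/", "a/./b/.."]);
console.log("property held for all samples");
```

The point is that I only have to state the property; the machinery (and the agent writing it) handles the enumeration and the boilerplate.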

slopinthebag 9 hours ago

I heard from somebody inside Anthropic that it's really two companies, one which are using AI for everything and the other which spends all their time putting out fires.

cpeterso 20 hours ago

OpenCode's creator acknowledged that the ease of shipping has let them ship prototype features that probably weren't worth shipping and that they need to invest more time cleaning up and fixing things.

https://x.com/thdxr/status/2031377117007454421

rdedev 18 hours ago

Uff. This is exactly what Casey Muratori and his friend were talking about in one of their more recent podcasts. Features that would never have been implemented because of time constraints now do get built thanks to LLMs, and now they have a huge codebase to maintain.

alansaber 8 hours ago

logicprog 20 hours ago

Well that's good to hear, maybe they'll improve moving forward on the release aspect at least.

j45 17 hours ago

What to release > What to build > Build anything faster

sauercrowd 2 hours ago

Highly recommend trying pi.dev

It's fully open, fairly minimal, very extensible and (while getting very frequent updates) never has broken on me so far.

Been using it more and more in the last two months, switching more and more from codex to it now.

arcanemachiner 19 hours ago

I'm still trying to figure out how "open" it really is; there are reports that it phones home a lot[0], and there is even a fork that claims to remove this behavior[1]:

[0] https://www.reddit.com/r/LocalLLaMA/comments/1rv690j/opencod...

[1] https://github.com/standardnguyen/rolandcode

nikcub 19 hours ago

the fact that somebody was able to fork it and remove behaviour they didn't want suggests that it is very open

that #12446 PR hasn't even been resolved as won't-merge, and the last change was a week ago (in a repo with 1.8k+ open PRs)

drdaeman 17 hours ago

nsonha 18 hours ago

so how is telemetry not open? If you don't like telemetry for dogmatic reasons then don't use it. Find the alternative magical product whose dev team is able to improve the software blindfolded

heavyset_go 15 hours ago

ipaddr 18 hours ago

blks 20 hours ago

Probably all described problems stem from the developers using agentic coding, including the choice of TypeScript, since these tools are usually more familiar with JS and JS-adjacent web development languages.

logicprog 20 hours ago

Perhaps the use of coding agents encouraged this behavior, but it's perfectly possible to do the opposite with agents as well: use them to set up and maintain a good testing scaffold for TUI stuff, a comprehensive test suite top to bottom, in a way maintainers may not have had the time/energy/interest to do before, or to rewrite in a faster, more resource-efficient language that you find more verbose, are less familiar with, or find annoying to write. And nothing forces them to release as often as they do instead of just keeping a high commit velocity. I've personally found AIs to be just as good at Go or Rust as at TypeScript, perhaps better, so I don't think anything forced them to go with TypeScript either. I think they're just somewhat irresponsible devs.

jeremyjh 4 hours ago

sorentwo 12 hours ago

The moment that OpenCode, after helping fix a Dockerfile issue, decided it was time to deploy to prod without asking for consent, I was out.

brabel 11 hours ago

You must never rely on AI itself for authorization… don’t let it run on an environment where it can do that. I can’t believe this needs to be said but everyone seems to have lost their mind and decided to give all their permissions away to a non deterministic thing that when prompted correctly will send it all out to whoever asks it nicely.

BenGosub 3 hours ago

I agree that OpenCode uses a lot of RAM, but regarding the features: I am only using the built-in features, and I wouldn't say there are too many; they are just enough for a complete workflow. If you need more you can install plugins, which I haven't done yet, and it's been my daily driver for four months.

thatmf 19 hours ago

The value of having (and executing) a coherent product vision is extremely undervalued in FOSS, and IMO it's the difference between a project that succeeds long-term and the kind of sploogeware that just snowballs with low-value features.

rounce 19 hours ago

> The value of having (and executing) a coherent product vision is extremely undervalued in FOSS

Interesting you say this, because I'd say the opposite is true historically, especially in the systems software community and among older folks. "Do one thing and do it well" seems to be the prevailing mindset behind many foundational tools. I think this is why so many are/were irked by systemd. On the other hand, newer tools that are more heavily marketed and often have some commercial angle seem to be in a perpetual state of tacking on new features in lieu of refining their raison d'être.

Aperocky 19 hours ago

negative values even.

AppleAtCha 7 hours ago

Is there a name for these types of "overbearing" and visually busy "TUIs"? It seems like all the other agents have the same aesthetic, and it is unlike traditional curses or plain-text interfaces in a bad way IMO. The constant spinners, sidebars and needless margins are a nuisance to me. Especially over an ssh connection in a tmux session it feels wrong.

theshrike79 4 hours ago

I’ve pretty much ended up with a pi.dev+gpt-5 and Claude combo. Sometimes I use GLM with Pi if I run out of quota or need some simple changes.

I tried Opencode but it was just too much? Same with Crush, 10/10 pretty but lacking in features I need. LSP support was cool though.

dopidopHN2 3 hours ago

Can you expand on the cool part of LSP support? I'm curious, and "on paper" it sounds desirable, but I'm unclear on the pluses.

tshaddox 19 hours ago

I’m a little surprised by your description of constant releases and instability. That matches how I would describe Claude Code, and has been one of the main reasons I tend to use OpenCode more than Claude Code.

OpenCode has been much more stable for me in the 6 months or so that I’ve been comparing the two in earnest.

hboon 17 hours ago

I use Droid specifically because Claude Code breaks too often for me. And then Droid broke too (but rarely), and I just stuck to not upgrading (like I don't upgrade WebStorm. Dev tools are so fragile)

thayne 3 hours ago

That sounds a lot like my experience with claude code. IDK about OpenCode, but claude code is also largely written by LLMs, and you can tell.

plastic3169 12 hours ago

I’ve been testing opencode and it feels TUI in appearance only. I prefer the command line and TUIs, and in my mind the TUI idea is to be a low-level, extremely portable interface that gets out of the way. Opencode does not have a low-color, standard terminal theme, so I had to switch to a proper terminal program. Copy/paste is hijacked, so I need to write code out to a file in order to get a snippet. The enter key (as in the return key on the keypad) does not work for sending a line. I have not tested it, but I don't think this would even work over SSH. I have been googling around to find out if I am holding it wrong, but it breaks the expectations of a terminal app in a way that makes me wish they had made it a GUI. Makes me sad, because I think the goods are there and it's otherwise good.

msh 11 hours ago

I don’t think good TUIs are the same as good command line programs. Great TUI apps to me would be things like Norton/Midnight Commander, Borland's Turbo Pascal, vim, Emacs and things like that.

plastic3169 11 hours ago

zackify 19 hours ago

Yeah every time I want to like it, scrolling is glitched vs codex and Claude. And other various things like: why is this giant model list hard coded for ollama or other local methods vs loading what I actually have...

On top of that, Open code go was a complete scam. It was not advertised as having lower-quality models when I paid, and glm5 was broken vs another provider, returning gibberish and being very dumb on the same prompt.

tmatsuzaki 18 hours ago

I agree. Since tools like Codex let you use SOTA models more cheaply and with looser weekly limits, I think they’re the smarter choice.

scuff3d 18 hours ago

Drives me nuts that we have TUIs written in friggin TS now.

That being said, I do prefer OpenCode to Codex and Claude Code.

cies 15 hours ago

Why do you prefer it? I have a different experience, and want to learn.

(I'm also hating on TS/JS: but some day some AI will port it to Rust, right?)

esafak 4 hours ago

scuff3d 14 hours ago

rco8786 7 hours ago

> they add, remove, refine, change, fix, and break features constantly at that accelerated pace.

I wonder how much of this is because the maintainers are using OpenCode to vibe the code for OpenCode.

bjackman 9 hours ago

That is very disappointing coz I've been wanting to try an alternative to Gemini CLI for exactly these reasons. The AI is great but the actual software is a buggy, slow, bloated blob of TypeScript (on a custom Node runtime IIUC!) that I really hate running. It takes multiple seconds to start, requires restarting to apply settings, constantly fucks up the terminal, often crashes due to JS heap overflows, doesn't respect my home dir (~/.gemini? Come on folks are we serious?), has an utterly unusable permission system, etc etc. Yet they had plenty of energy to inject silly terminal graphics and have dumb jokes and tips scroll across the screen.

Is Claude Code like this too? I wonder if Pi is any better.

A big downside would be paying actual cost price for tokens but on the other hand, I wouldn't be tied to Google's model backend which is also extremely flaky and unable to meet demand a lot of the time. If I could get real work done with open models (no idea if that's the case yet) and switch providers when a given provider falls over, that would be great.

WhyNotHugo 4 hours ago

I use Pi with Aliyun, which costs a flat ¥40 (~€5) per month for GLM-5, Kimi K2.5, MiniMax and a few other models.

Honestly, these models seem quite on par with Claude. Some days they seem slightly worse, some days I can't tell the difference.

AFAIK, the usage quota is comparable to the Claude $200 subscription.

knocte 8 hours ago

> Is Claude Code like this too? I wonder if Pi is any better.

I'm very happy with Pi myself (running it on a small VPS so that I don't need to do sandboxing shenanigans).

badlogic 5 hours ago

you can use subscriptions with pi.

plagiarist 5 hours ago

Claude will also happily write a huge pile of junk into your home directory, I am sad to report. The permissions are idiotic as well, but I always use it in a container anyway. But I have not had it crash and it hasn't been slow starting for me.

horsh1 9 hours ago

You are describing the typical state of a vibecoded project.

fuy 3 hours ago

claude code easily uses 10+ GB in a single session :) 1GB sounds very efficient by comparison

nico 16 hours ago

> they're constantly releasing at an extremely high cadence, where they don't even spend the time to test or fix things

Tbf, this seems exactly like Claude Code, they are releasing about one new version per day, sometimes even multiple per day. It’s a bit annoying constantly getting those messages saying to upgrade cc to the latest version

ctxc 16 hours ago

Oh wow. I got multiple messages in a day and just assumed it was a cache bug.

It's annoying how I always get that "claude code has a native installer xyz please upgrade" message

auggierose 10 hours ago

lanyard-textile 13 hours ago

stego-tech 5 hours ago

This is why I'm taking a wait-and-see approach to these tools on HN myself. My month with Claude Code (the TUI, not the GUI) was amazing from an IT POV, just slop-generating niche tools I could quickly implement and audit (not giant-ass projects), but I ain't outsourcing that to another company when Qwen et al are right there for running on my M1 Pro or RTX 3090.

I'm looking forward to more folks building these kinds of tools with a stronger focus on portability via API or loading local models, as means of having a genuinely useful assistant or co-programmer rather than paying some big corp way too much money (and letting them use my data) for roughly the same experience.

jazzypants 5 hours ago

The types of models you can run locally on that hardware are toys in comparison to the foundation models

627467 5 hours ago

Curious about your setup of qwen on m1 pro. Care to share the toolkit?

plagiarist 5 hours ago

Do you have a setup with a local Qwen that can write out niche tools pretty well? I have been curious about how much I could do local.

grapheneposter 19 hours ago

Yeah, I tried using it when oh-my-opencode (now oh-my-openagent) started popping off and found it highly unstable. I just stick with internal tooling now.

darepublic 7 hours ago

Why not just code your own agent harness

namlem 14 hours ago

How much of the development is being done by humans?

foobarqux 20 hours ago

What is a better option?

logicprog 20 hours ago

For serious coding work I use the Zed Agent; for everything else I use pi with a few skills. Overall, though, I'd recommend Pi plus a few extensions for any features you miss extremely highly. It's also TypeScript, but doesn't suffer from the other problems OC has IME. It's a beautiful little program.

mmcclure 19 hours ago

noelsusman 6 hours ago

pi.dev is worth checking out. The basic idea is they provide a minimalist coding agent that's designed to be easy to extend, so you can tailor the harness to suit your needs without any bloat.

One of the best features is they haven't been noticed by Anthropic yet so you can still use your Claude subscription.

vinhnx 17 hours ago

I've been building VT Code (https://github.com/vinhnx/vtcode), a Rust-based semantic coding agent. Just landed Codex OAuth with PKCE exchange, credentials go into the system keyring.

I built VT Code with Tree-sitter for semantic understanding and OS-native sandboxing. It's still early, but I'm confident it's usable. I hope you'll give it a try.

andreynering 20 hours ago

rao-v 19 hours ago

jruz 12 hours ago

Yeah, I agree it's way too buggy. It's nice, though, and I appreciate the effort, but it really feels sloppy.

mmaunder 4 hours ago

Yeah just try to select text to copy. Nope. Try to scroll back in terminal or tmux. Nope. Overbearing for sure.

alienbaby 17 hours ago

It's hard not to wonder if they are taking their own medicine, but not quite properly.

wvlia5 7 hours ago

this is a bot comment

mihaaly 9 hours ago

I tried it briefly, and the practice (an argued-for strategy of operation, actually) of overriding my working folder selection and switching to the parent root git folder is a no-go.

bakugo 20 hours ago

Isn't this pretty much the standard across projects that make heavy use of AI code generation?

Using AI to generate all your code only really makes sense if you prioritize shipping features as fast as possible over the quality, stability and efficiency of the code, because that's the only case in which the actual act of writing code is the bottleneck.

logicprog 19 hours ago

I don't think that's true at all. As I said in a response to another person blaming this on agentic coding above, there are a very large number of ways to use coding agents to make your programs faster, more efficient, more reliable, and more refined that also benefit from agents making the research, data piping, and refactoring quicker and less exhausting. For instance: helping you set up testing scaffolding; handling the boilerplate around tests while you specify the example features or properties you want tested, and expanding on them; rewriting into a more efficient language; large-scale refactors to use better data structures or architectures; or letting you use a more efficient or reliable language that you don't know as well, or find to have too much boilerplate or compiler annoyance to deal with yourself. Then there are higher-level, more subjective benefits, such as helping you focus on the system architecture and data flow, zooming in only on the particular algorithms or areas of the code base that are specifically relevant, instead of forever getting lost in the weeds of specific syntax and compiler errors, or looking up a bunch of API documentation that isn't important for the core of what you're trying to do.

Personally, I find this idea that "coding isn't the bottleneck" completely preposterous. Getting all the API documentation, the syntax, organizing and typing out all the text, finding the correct places in the code base and understanding the code base in general, dealing with silly compiler and type errors, writing a ton of error handling, dealing with the inevitable and ineradicable boilerplate of programming (unless you're one of those people who believe macros are actually a good idea and would meaningfully solve this): all are a regular and substantial cost, even if you aren't writing thousands of lines of code a day. And you need to write code in order to get a sense for the limitations of the technology you're using and the shape of the problem you're dealing with, in order to then come up with and iterate on a better architecture or approach. And you need to see your program running in order to evaluate whether its functionality and design are satisfactory and then iterate on that. So coding is actually the upfront cost you have to pay to even start properly thinking about a problem, and being able to get a prototype out quickly is very important. Also, I find it hard to believe you've never been in a situation where you wanted to make a simple change or refactor that would have required updating 15 different call sites, each just variable or complex enough that editor macros or IDE refactoring capabilities couldn't handle it.

That's not to mention the fact that if agentic coding can make deploying faster, then it can also make deploying the same amount at the same cadence easier and more relaxing.

adithyassekhar 15 hours ago

Imustaskforhelp 11 hours ago

I tried running OpenCode on my $7/yr 512MB VPS, but it hit the OOM issue; yes, it needs 1GB of RAM or more.

I then tried running other options like picoclaw/picocode etc but they were all really hard to manage/create

The UI/UX I want is to just put my free OpenRouter API key in and be ready to go, with access to free models like Arcee AI right now.

After reading your comments in this thread, I tried Crush by Charmbracelet again, and it gives the UI/UX that I want.

I am definitely impressed by Crush and the Charm team. They are on HN, and it works great for me; highly recommended if you want something that can run on resource-constrained devices.

I do feel like Charm's TUIs are almost too beautiful, in the sense that over an SSH connection the rendering can lag; when I tried to copy some things, the delay made them less copyable. But overall I am using Crush and I am happy for the most part :-)

Edit: That being said, just as I was typing this, Crush used up all the free requests I get from OpenRouter, so there may be a minor issue there, but it's not much of an issue on Crush's side. Overall, my point stands: Crush is worth checking out.

Kudos to the CharmBracelet team for making awesome golang applications!

fHr 14 hours ago

Rust > TS, Codex > OpenCode

heavyset_go 16 hours ago

By default OpenCode sends all of your prompts to Grok's free tier to come up with chat summaries for the UI.

To change that, you need to set a custom "small model" in the settings.

solarkraft 7 hours ago

This is my main problem I have with it: It sends data and loads code left and right by default. For instance, the latest plugin packages are automatically installed on every startup. Their “Zen” provider is enabled by default so you might accidentally upload your code base to their servers. Better yet: The web UI has a button that just uploads the entire session to their servers WITH A SINGLE CLICK for sharing.

The situation is ... pretty bad. But I don’t think this is particularly malicious or even a really well considered stance, but just a compromise in order to move fast and ship useful features.

To make it easily adoptable by anyone privacy-conscious without hours of tweaking, there should be an effort to massively improve this situation. Luckily, unlike Claude Code, the project is open source and can be changed!

moffkalast 6 hours ago

There is some kind of fitting irony around agentic coding harnesses mainly being maintained by coding agents themselves, and as a result they are all a chaotic mess.

ekjhgkejhgk 11 minutes ago

> By default OpenCode sends all of your prompts to Grok's free tier

Just my prompts, or everything the agent has in the context window?

Also, could you please provide a reference for this claim? Thank you

daliusd 5 hours ago

I had to double check this. Here is a summary:

The model selection for title generation works as follows (prompt.ts:1956-1960):

1. If the title agent has an explicit model configured, that model is used.

2. Otherwise, it tries Provider.getSmallModel(providerID), which picks a "small" model from the same provider as the current session, using this priority list (provider.ts:1396-1402):

- claude-haiku-4-5 / claude-haiku-4.5 / 3-5-haiku / 3.5-haiku
- gemini-3-flash / gemini-2.5-flash
- gpt-5-nano
- (Copilot adds gpt-5-mini at the front; the opencode provider uses only gpt-5-nano)

3. If no small model is found, it falls back to the same model currently being used for the session.

So by default, title generation uses a cheaper/faster small model from the same provider (e.g., Haiku if on Anthropic, Flash if on Google, nano if on OpenAI), and if none are available, it just uses whatever model the user is chatting with. You can also override this entirely by configuring a model on the title agent.

heavyset_go 5 hours ago

When I did this, I used a single local llama.cpp server instance as my main model without setting a small model, and opencode did not use that server for chat titles even though it used it for prompts.

Chat titles would work even when the local llama.cpp server hadn't started, and title generation never showed up in the llama.cpp logs; it used an external model I hadn't set up and had not intended to use.

It was only when I set `small_model` that I was able to route title generation to my own models.

kmod 3 hours ago

Fwiw this changed about a week ago: they updated the logic to match the documentation rather than defaulting to sending your prompts to their servers. This is why so many people have noticed this happening, but if you ask an AI about it right now it will say it's not true.

Personally I think it's necessary to run opencode itself inside a sandbox, and if you do, you can see all of the rejected network calls it tries to make even in local mode. I use srt and it was pretty straightforward to set up.

agilob 12 hours ago

Also, even when using local models in Ollama or LM Studio, prompts are proxied via their domain, so never send anything sensitive even when using a local setup.

https://old.reddit.com/r/LocalLLaMA/comments/1rv690j/opencod...

They also don't let you run all local models, only specific ones whitelisted by another third party: https://github.com/anomalyco/opencode/issues/4232

embedding-shape 9 hours ago

To be clear, that seems to affect the web UI only; the TUI doesn't seem affected. I haven't fully investigated this myself, but when I run opencode (1.2.27-a6ef9e9-dirty) + mitmproxy with LM Studio as the backend, starting opencode and executing a prompt produces only two requests, both to my LM Studio instance, both normal inference requests (one for the chat itself + one for generating the title).

Everything you read on the internet seems exaggerated today. Especially true for Reddit, and especially especially true for r/LocalLLaMA, which is a shadow of its former self. Today it's mostly sockpuppets pushing various tools and models, and other sockpuppets trying to push misinformation about their competitors' tools/models.

zingar 10 hours ago

Geez there should be a big warning on the tin about this. They’re so neatly integrated with copilot that I assumed (and told others) that they had all the privacy guarantees of copilot :(

thdxr 7 hours ago

this isn't true

it will use whatever small model there is in your provider

we had a fallback where we provided free small models if your provider did not have one (gpt nano)

some configs fell back to this unexpectedly which upset people so we removed it

solarkraft 7 hours ago

I can tell that you’re doing all of this in the name of first-use UX. It’s working: The out of the box experience is really seamless.

But for serious (“grown up”) use, stuff like this just doesn’t fly. At all. We have to know and be able to control exactly where data gets sent. You can’t just exfiltrate our data to random unvetted endpoints.

Given the broken trust of the past, there also needs to be a communication campaign (“actually, we’re secure now”), because otherwise people will keep going around claiming that OpenCode sends all of your data to Grok. That would really hurt the project unnecessarily in the long run.

Iolaum 7 hours ago

Not true according to a CGPT question:

More importantly, the current dev branch source for packages/opencode/src/session/summary.ts shows summarizeMessage() now only computes diffs and updates the message summary object; it does not make an LLM call there anymore. The current code path calls summarizeSession() and summarizeMessage(), and summarizeMessage() just filters messages, computes diffs, sets userMsg.summary.diffs, and saves the message.

https://github.com/anomalyco/opencode/blob/dev/packages/open...

arcadianalpaca 7 hours ago

Yikes... sending prompts to a third party by default with no disclosure in the setup flow is a rough look for a tool that positions itself as the open source alternative. "Open" loses meaning fast if the defaults work against the user.

gmassman 13 hours ago

Seems like an anti-pattern to me to run AI models without the user’s consent.

kuboble 13 hours ago

? The whole idea of a coding assistant is to send all your interactions with the program to the LLM.

movq 11 hours ago

exitb 13 hours ago

My understanding is that it’s best to set a whitelist in enabled_providers, which prevents it from using providers you don’t anticipate.
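For what it's worth, such a whitelist might look something like this in `opencode.json` (the `enabled_providers` key name is taken from the comment above, and the `$schema` URL and provider names are assumptions; check the current docs for the exact schema before relying on it):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "enabled_providers": ["anthropic", "ollama"]
}
```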

phantomCupcake 11 hours ago

Are you using Grok for the coding? Because I have Copilot connected and I can see the request to Copilot for the summaries - with no "small model" setting even visible in my settings.

solarkraft 7 hours ago

I found out about OpenCode through the Anthropic feud. I now spend most of my AI time in it, both at work and at home. It turns out to be pretty great for general chat too, with the ability to easily integrate various tools you might need (search being the top one of course).

I have things to criticize about it, their approach to security and pulling in code being my main one, but over all it’s the most complete solution I’ve found.

They have a server/client architecture, a client SDK, a pretty good web UI and use pretty standard technologies.

The extensibility story is good and just seems like the right paradigms mostly, with agents, skills, plugins and providers.

They also ship very fast, both for good and bad, I’ve personally enjoyed the rapid improvements (~2 days from criticizing not being able to disable the default provider in the web ui to being able to).

I think OpenCode has a pretty bright future and so far I think that my issues with it should be pretty fixable. The amount of tasteful choices they’ve made dwarfs the few untasteful ones for me so far.

theshrike79 4 hours ago

Try pi.dev + GPT-5; it works amazingly well.

Just note that you need to either create any special features yourself or find an implementation by someone else. It’s pretty bare bones by default

softwaredoug a day ago

The team also is not breathlessly talking about how coding is dead. They have pretty sane takes on AI coding including trying to help people who care about code quality.

blackqueeriroh 15 hours ago

Couldn’t tell by the way they write their software.

m463 20 hours ago

They probably don't have to write OKRs every quarter saying the opposite.

vortegne 7 hours ago

Do you follow them? They most definitely pump out insane takes on twitter. But maybe that’s just engagement bait for a check, of course.

jFriedensreich 5 hours ago

opencode stands out as one of the few agents with a proper client/server architecture, which allows something like openchambers' great vscode extension, so it's possible to seamlessly switch between TUI, vscode, webapp, and desktop app. I think there is hardly a usable alternative for most coding-agent use cases (assuming agents from model providers are a no-go; they cannot be allowed to own the tools AND the models).

But it's also far from perfect. The web UI is secretly served from their servers instead of locally for no reason. Worse, the fallback route also gets sent to their servers, so any unknown request to the opencode API ends up being sent to opencode servers, potentially leaking data. The security defaults are horrific; it's impossible to use it safely outside a controlled container. It will just serve your whole hard drive via a REST endpoint and not constrain itself to project folders. The share feature uploading your conversations to their servers is also so weirdly communicated and implemented that it leaves a bad taste.

I don't think this will become much better until the agent ecosystem is more modular and less monolithic. ACP, A2A and MCP need to become good enough that tools, prompts, skills, subagent setups, workflow engines and UIs are completely swappable, and the agent core has to focus only on the essentials like runtime and glue architecture. I really hope we don't see all of these grow into full agent OSes with artificial lock-in effects and big-effort buy-in.

ramon156 a day ago

The agent that is blacklisted by Anthropic. Soon more to come.

I really like how their subagents work; as a bonus, I get to choose which model is in which agent. Sadly I have to resort to the mess that Anthropic calls Claude Code.

pczy a day ago

They are not blacklisted. You are allowed to use the API at commercial usage pricing. You are just not allowed to use your Claude Code subscription with OpenCode (or any other third-party harness, for the record).

boxedemp 18 hours ago

I have my own harness I wrap Claude CLI in, I wonder if I'm breaking the rules...

arcanemachiner 17 hours ago

theshrike79 4 hours ago

hrmtst93837 11 hours ago

So it's less 'blacklist' and more a licensing gotcha designed to crush price arbitrage, basically rent-seeking by toggling where the tollbooth sits.

Robdel12 21 hours ago

Has it occurred to anyone that Anthropic's highest-in-the-industry API pricing is a play to drive you into their subscription? For the lock-in?

Macha 20 hours ago

wilg 21 hours ago

Sometimes people want to be real pedants about licensing terms when it comes to OSS, assuming such terms are completely bulletproof; other times people don't think the terms of their agreement with a service provider should have any force at all.

oldestofsports a day ago

I don't understand this. What is the difference, technically?

KronisLV 21 hours ago

miki123211 21 hours ago

hereme888 a day ago

hackingonempty 21 hours ago

jwpapi 21 hours ago

hereme888 a day ago

Was it not obvious what the OP meant by blacklisted?

Maxatar 21 hours ago

enraged_camel a day ago

lima a day ago

You can still use OpenCode with the Anthropic API.

pimeys a day ago

Yep. That's what I do. Just API keys, and you can switch from Opus to GPT, especially this week when Opus has been kind of wonky.

stavros 21 hours ago

gwd a day ago

jatora a day ago

raincole 18 hours ago

More what to come?

heywinit 13 hours ago

probably more agents being blocked by anthropic. i've seen theo from t3.gg jump through a bunch of hoops to support claude in his t3code app just so anthropic doesn't sue their asses.

cyanydeez 21 hours ago

a $3000 AMD 395+ will get you pretty close to an open development environment.

anonym29 21 hours ago

There are boards starting in the $1500-$2000 range, and complete systems in the $2500-$2700 range. I actually don't know of any Strix Halo mini PCs that cost $3000, do you?

EDIT: The system I bought last summer for $1980 and just took delivery of in October, Beelink GTR 9 Pro, is now $2999.... wow...

UncleOxidant 15 hours ago

free652 20 hours ago

Shebanator 20 hours ago

ricardobeat 20 hours ago

hippycruncher22 21 hours ago

I'm a https://pi.dev man myself.

krzyk 13 hours ago

Why are most of these tools written in JS/TS?

JS is not something that was developed with CLIs in mind, and on top of that the language does not lend itself well to LLM generation, as it has pretty weak validation compared to e.g. Rust, or even C, or even Python.

Not to mention memory usage or performance.

solarkraft 7 hours ago

TS is just a boring default.

It’s simply one of the most productive languages. It actually has a very strong type system, while still being a dynamic language that doesn’t have to be compiled, leading to very fast iteration. It’s also THE language you use when writing UIs. Execution is actually pretty fast through the runtimes we have available nowadays.

The only other interpreted language is Python, and that thoroughly feels like a toy in comparison (typing situation still very much in progress, very weak ORM situation, not even a usable package manager until recently!).

jpc0 4 hours ago

plipt 4 hours ago

_ache_ 6 hours ago

manmal 13 hours ago

For a TUI agent, runtime performance is not the bottleneck, not by far. Hackability is the USP. Pi has extension hot-reloading, which comes almost for free with jiti. The fact that the source is the shipped artifact (unlike Go/Rust) also helps the agent see its own code and write and load its own extensions based on that. OpenClaw's success is in part based on this, IMO.

I can’t find the tweet from Mario (the author), but he prefers the TypeScript/npm ecosystem for non-performance-critical systems because it hits a sweet spot for him. I admire his work and he’s a real polyglot, so I tend to think he has done his homework. You’ll find pi’s memory usage quite low, btw.

krzyk 10 hours ago

the_mitsuhiko 10 hours ago

In pi’s case there is a plugin system. It’s much easier to make a self extending agent work with Python or JavaScript than most other languages. JavaScript has the benefit that it has a great typing system on top with TypeScript.

6ak74rfy 19 hours ago

Same.

Pi is refreshingly minimal in terms of system prompts, but still works really well, and that makes me wonder whether other harnesses are overdoing it. Look at OpenCode's prompts, for instance: long, mostly based on feels, and IMO unnecessary. I would've liked to just overwrite OC's system prompts with Pi's (to get the other features that Pi doesn't have), but that isn't possible today (without maintaining a custom fork).

onetom 5 hours ago

Pi is the Emacs of coding AI agents.

It's a pity it's written in TS, but at least it can draw from a big contributor pool.

There is also https://eca.dev/, which might be worth considering: a UI-agnostic agent, a bit like LSP servers.

szatkus 10 hours ago

I just found out about pi yesterday. It's the only agent that I was able to run on RISC-V. It's quite scary that it runs commands without asking though.

theshrike79 4 hours ago

It has zero safeguards by default.

But the magic is that it knows how to modify itself; if you need a plan mode, you can ask it to implement it :)

pontussw 13 hours ago

Same here!

The simplicity of extending pi is in itself addictive, but even in its raw form it does the job well.

Before finding pi I had written a lot of custom stuff on top of all the provider-specific CLI tools (codex, Claude, cursor-agent, Gemini), but now I don’t have to anymore (except when I want to use my Anthropic sub, which I will now cancel for that exact reason).

wyre 19 hours ago

Same.

I’m sure there’s a more elegant way to say this, but OpenCode feels like an open source Claude Code, while pi feels like an open source coding agent.

vorticalbox 8 hours ago

> Sessions are stored as trees

that is actually really nice

Richard_Jiang 18 hours ago

Pi is a great project; for lightweight agent development, Pi's implementation is really worth referring to.

cmrdporcupine 18 hours ago

Pi is good stuff and refreshingly simple and malleable.

I used it recently inside a CI workflow in GitLab to automatically create ChangeLog.md entries for commits. That + Qwen 3.5 has been pretty successful. The job starts up Pi programmatically, points it at the commits in question, and tells it to explore and get all the context it needs within 600 seconds... and it works. I love that this is possible.

monkey26 39 minutes ago

I do like OpenCode, and have been using it on and off since last July. But I feel like they’re trying to stuff too much GUI into a TUI. Due to this I find myself using Codex and Pi more often. But I’m still glad OpenCode and their Zen product exist.

planckscnst 21 hours ago

I love OpenCode! I wrote a plugin that adds two tools: prune and retrieve. Prune lets the LLM select messages to remove from the conversation and replace with a summary and key terms. The retrieve tool lets it get those original messages back in case they're needed. I've been livestreaming the development and using it on side projects to make sure it's actually effective... And it turns out it really is! It feels like working with an infinite context window.

https://www.youtube.com/live/z0JYVTAqeQM?si=oLvyLlZiFLTxL7p0
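For anyone curious, the prune/retrieve mechanic can be sketched roughly like this. This is a minimal illustration of the concept, not the actual plugin; the message shape, storage, and function names are all simplified stand-ins:

```typescript
interface Message {
  id: string;
  text: string;
}

// Pruned originals are kept in a side store so they can be retrieved later.
const archive = new Map<string, Message>();

// "prune": drop the selected messages from the live history, archiving the
// originals, and append a single summary placeholder in their place.
function prune(history: Message[], ids: string[], summary: string): Message[] {
  const kept = history.filter((m) => {
    if (ids.includes(m.id)) {
      archive.set(m.id, m); // archive the original before removing it
      return false;
    }
    return true;
  });
  kept.push({ id: `summary:${ids.join(",")}`, text: summary });
  return kept;
}

// "retrieve": bring an archived original back if the agent needs it again.
function retrieve(id: string): Message | undefined {
  return archive.get(id);
}
```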

computerex 13 hours ago

Hey I built that into my harness! http://github.com/computerex/z

Long tool outputs, command outputs: everything in my harness is spilled over to the filesystem. Context messages are truncated and split to the filesystem, with a breadcrumb for retrieving the full message.

Works really well.
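The spill-over mechanic is simple to sketch. This is illustrative only; the threshold, file naming, and function name here are made up rather than taken from the harness:

```typescript
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

const MAX_INLINE = 2000; // chars kept in context; this threshold is arbitrary

// Truncate a long tool output, writing the full text to disk and leaving
// a breadcrumb the agent can follow to recover the rest.
function spill(output: string): string {
  if (output.length <= MAX_INLINE) return output;
  const file = path.join(os.tmpdir(), `tool-output-${Date.now()}.txt`);
  fs.writeFileSync(file, output);
  return (
    output.slice(0, MAX_INLINE) +
    `\n[truncated: full output saved to ${file}]`
  );
}
```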

signal_v1 5 hours ago

The infinite context window framing is the right way to think about it. Running inside Claude Code continuously, the prune step matters more than retrieve in practice — most of what gets dropped stays dropped. More useful is being deliberate about what goes in at the start of each loop iteration rather than managing what comes out at the end.

weird-eye-issue 16 hours ago

That doesn't sound all that useful to be honest and would likely increase costs overall due to the hit to prompt caching by removing messages

embedding-shape 9 hours ago

> would likely increase costs overall

Assuming you pay per token, which seems like a really strange workflow to lock yourself into at this point. Neither paid monthly plans nor local models suffer from that issue.

I tried once to use APIs for agents but seeing a counter of money go up and eventually landing at like $20 for one change, made it really hard to justify. I'd rather pay $200/month before I'd be OK with that sort of experience.

weird-eye-issue 6 hours ago

signal_v1 5 hours ago

manmal 13 hours ago

Have a look at how pi.dev implements /tree. Super useful.

esafak 2 hours ago

That borks the cache and costs you more.

advael 21 hours ago

Seems interesting, but at a glance I can't find a repo or a package manager download for this. Have you made it available anywhere?

sheo 19 hours ago

I found the opencode fork repo, but no plugin seems available so far

https://github.com/Vibecodelicious/opencode

brendanmc6 21 hours ago

I’ve been extraordinarily productive with this, their $10 Go plan, and a rigorous spec-driven workflow. Haven’t touched Claude in 2 months.

I sprinkle in some billed API usage to power my task-planner and reviewer subagents (both use GPT 5.4 now).

The ability to switch models is very useful and a great learning experience. GLM, Kimi and their free models surprised me. Not the best, not perfect, but still very productive. I would be a wary shareholder if I owned a stake in the frontier labs… that moat seems to be shrinking fast.

helloplanets 14 hours ago

> Moat seems to be shrinking fast.

It's been a moving target for years at this point.

Both open and closed source models have been getting better, but not sure if the open source models have really been closing the gap since DeepSeek R1.

But yes: If the top closed source models were to stop getting better today, it wouldn't take long for open source to catch up.

xvector 19 hours ago

The moat is having researchers who can produce frontier models. When OpenCode starts building frontier models, then I'd be worried; otherwise they're just another wrapper.

brendanmc6 19 hours ago

Of course, my point is that these trailing models are close behind, and cost me a lot less, and work great with harnesses like OpenCode.

troymc 17 hours ago

"OpenCode Go" (a subscription) lets you use lots of hosted open-weights frontier AI models, such as GLM-5 (currently right up there in the frontier model leaderboards) for $10 per month.

xvector 5 hours ago

quietsegfault 21 hours ago

Can you talk more about how you leverage higher quality models for the stuff that counts? Anywhere I can read more on the philosophy of when to use each?

brendanmc6 20 hours ago

Sure happy to share. It’s been trial and error, but I’ve learned that for agents to reliably ship a large feature or refactor, I need a good spec (functional acceptance criteria) and I need a good plan for sequencing the work.

The big expensive models are great at planning tasks and reviewing the implementation of a task. They can better spot potential gotchas, performance or security gaps, subtle logic and nuance that cheaper models fail to notice.

The small cheap models are actually great (and fast) at generating decent code if they have the right direction up front.

So I do all the spec writing myself (with some LLM assistance), and I hand it to a Supervisor agent who coordinates between subagents. Plan -> implement -> review -> repeat until the planner says “all done”.

I switch up my models all the time (actively experimenting) but today I was using GPT 5.4 for review and planning, costing me about $0.4-$1 for a good sized task, and Kimi for implementation. Sometimes my spec takes 4-5 review loops and the cost can add up over an 8 hour day. Still cheaper than Claude Max (for now, barely).

Each agent retains a fairly small context window which seems to keep costs down and improves output. Full context can be catastrophic for some models.

As for the spec writing, this is the fun part for me; I’ve been obsessing over the process, and over tracking acceptance criteria and keeping my agents aligned to it. I have a toolkit cooking, which you can find in my comment history (aiming to open source it this week).
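The plan -> implement -> review loop described above can be sketched roughly like this. It's a minimal illustration with a stubbed model call, not the commenter's actual toolkit; every name here is made up:

```typescript
type Role = "planner" | "implementer" | "reviewer";

// A bare-bones supervisor loop: a big model plans, a cheap model implements,
// a big model reviews, and we repeat until the reviewer signs off.
// `callModel` is a stand-in for real LLM calls.
function runLoop(
  callModel: (role: Role, input: string) => string,
  spec: string,
  maxIterations = 5,
): string {
  let work = "";
  for (let i = 0; i < maxIterations; i++) {
    const plan = callModel("planner", spec + work); // expensive model plans
    work = callModel("implementer", plan);          // cheap model writes code
    const review = callModel("reviewer", work);     // expensive model reviews
    if (review.includes("all done")) return work;   // reviewer sign-off
  }
  return work; // give up after the iteration budget is spent
}
```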

letsgethigh 14 hours ago

stavros 20 hours ago

Frannky 21 hours ago

I don't use it for coding but as an agent backend. Maybe opencode was designed mainly for coding, but for me it's incredibly good as an agent, especially when paired with skills and a FastAPI server. And opencode go (MiniMax) is just so much intelligence at an incredibly cheap price. Plus, you can talk to it via channels if you use a claw.

solarkraft 6 hours ago

I see great potential in this use case, but haven’t found that many documented cases of people doing this.

Do you have resources you can point to / mind sharing your setup? What were the biggest problems / delights doing this?

krzyk 12 hours ago

By "agent" you mean what?

Coding is mostly "agentic" so I'm bit puzzled.

epolanski 11 hours ago

It's defined in the opencode docs, but it's an overall cross-industry term for a custom system prompt with its own permissions:

https://opencode.ai/docs/agents/

65a 19 hours ago

I'd really like to get more clarification on offline mode and privacy. The GitHub issues related to privacy did not leave a good feeling, despite my initial excitement. Is offline mode a thing yet? I want to use this, but I don't want my code to leave my device.

solaire_oa 19 hours ago

taosx 13 hours ago

The only thing I'm wondering is whether they have eval frameworks (for lack of a better word). Their prompts don't seem to have changed for a while, and I find greater success after testing and writing my own system prompts, plus modifications to the harness: the smallest, most concise system prompt possible, with dynamic prompt snippets per project.

I feel that if you want to build a coding agent/harness, the first thing you should do is build an evaluation framework to track coding performance with your own internal metrics and task benchmarks. Instead I see most coding agents just fiddling with adding features that don't improve the core ability of the agent.

epolanski 11 hours ago

You can't write your own system prompt in opencode; there's no API to override the default anthropic.txt as far as I'm aware.

I considered creating a PR for that, but found that creating new agents instead worked fine for me.

taosx 11 hours ago

I've forked it locally; to be honest I haven't merged upstream in a while, as I haven't seen any commits that I found relevant or that would improve my usage. They seem to be working on the web and desktop versions, which I don't use.

The changes I've made locally are:

- Added a discuss mode with almost no tools except read file, ask tool, and web search (no heuristics), plus the ability to switch from discuss to plan mode.

Experiments:

- hashline: it doesn't bring that much benefit over the default with gpt-5.4.

- tried scribe [0]: it seems worth it, as it saves context space, but in worst-case scenarios it fails by reading the whole file. Probably worth it, but I would need to experiment more with it and probably rewrite some parts.

The nice thing about opencode is that it uses SQLite, so you can run experiments and then go through past conversations in code, replaying and comparing.

[0] https://github.com/sibyllinesoft/scribe

embedding-shape 8 hours ago

> You can't write your system prompt in opencode

Now, I just started looking into OpenCode yesterday, but it seems you can override the system prompts by overloading the templates, for example `~/.opencode/agents/build.md`; that would then be used instead of the default "Build" system prompt.

At least from what I gathered skimming the docs earlier, might not actually work in practice, or not override all of it, but seems to be the way it works.

khimaros a day ago

i've been using this as my primary harness for llama.cpp models, Claude, and Gemini for a few months now. the LSP integration is great. i also built a plugin to enable a very minimal OpenClaw alternative as a self modifying hook system over IPC as a plugin for OpenCode: https://github.com/khimaros/opencode-evolve -- and here's a deployment ready example making use of it which runs in an Incus container/VM: https://github.com/khimaros/persona

riedel a day ago

Very cool! I have been using opencode, as almost everybody else in the lab is using codex. I found the tools thing inside your own repo amazing, but somehow I could not get opencode to reliably write its own tools. It also seems a bit scary, as there is pretty much no security by default. I am using it in a NixOS WSL2 VM.

yogurt0640 7 hours ago

You could try something like this https://github.com/andersonjoseph/jailed-agents

I'm actually moving to containerised isolation. I realised the agents waste too much time trying to correctly install dependencies, not unlike a normal nixos user.

hrpnk 3 hours ago

I wish the team would be more responsive to popular issues, like the inability to provide a dynamic API key helper like Claude has. This one even has a PR open: https://github.com/anomalyco/opencode/issues/1302

arthurjean 2 hours ago

I've used both. I stuck with Claude Code; the ergonomics are better, and the internals are clearly optimized for Opus (which I use daily): you can feel it. That said, OpenCode is still a very good alternative, well above Codex, Gemini CLI, or Mistral Vibe in my experience.

dalton_zk 7 hours ago

Dax's post on X:

"we see occasional complaints about memory issues in opencode

if you have this can you press ctrl+p and then "Write heap snapshot"

Upload here: https://romulus.warg-snake.ts.net/upload"

Original post: https://x.com/i/status/2035333823173447885

shaneofalltrad 20 hours ago

What would be the advantage of using this over, say, VSCode with Copilot or Roo Code? I need to make some time to compare, but I'm just curious whether others have good insight on this.

javier123454321 20 hours ago

In terms of output, it's comparable. In terms of workflow, it suits my needs a lot more as a VIM terminal user.

ray_v 20 hours ago

I started out using VSCode with their Claude plugin; it seemed like a totally unnecessary integration. A better workflow seems to be just running Claude Code directly on my machine, where there are fewer restrictions: it opens a lot more possibilities for what it can do.

zingar 20 hours ago

Aren’t those in-editor tools? Opencode is a CLI

shaneofalltrad 14 hours ago

Ok, I get it now; same with the vim comment above. It seems VSCode has the more IDE-like setup while OpenCode gives the vim/NERDTree vibe? I'll have to take a look; it makes sense to possibly have both for different use cases, I guess.

zingar 5 hours ago

01100011 17 hours ago

Stupid question, but are there models worth using that specialize in a particular programming language? For instance, I'd love to be able to run a local model on my GPU that is specific to C/C++ or Python. If such a thing exists, is it worth it vs one of the cloud-based frontier models?

I'm guessing that a model which only covers a single language might be more compact and efficient vs a model trained across many languages and non-programming data.

Fulgidus 2 hours ago

Months ago I tested a concept revolving around this issue and made a weird MCP-LSP-LocalLLM hybrid thing that attempts to enhance unlucky, fast-changing, or unpopular languages (mine attempts it with Zig).

Give it a look, maybe it could inspire you: https://github.com/fulgidus/zignet

Bottom-line: fine-tuning looks like the best option atm

girvo 16 hours ago

I'm currently experimenting with (trying to) fine tune Qwen3.5 to make it better at a given language (Nim in this case); but I am quite bad at this, and honestly am unsure if it's even really fully feasible at the scale I have access to. Certainly been fun so far though, and I have a little Asus GX10 box on the way to experiment some more!

embedding-shape 8 hours ago

Been playing around with fine-tuning models for specific languages as well (Clojure and Rust mostly), but the persistent problem is high-quality data sets. Mostly I've been generating my own based on my own repositories and chat sessions. What approach are you taking for gathering the data?

numberwan9 4 hours ago

girvo 6 hours ago

epolanski 11 hours ago

My own experience trying many different models is that the general intelligence of the model is more important.

If you want it to stick to better practices you have to write skills, provide references (example code it can read), and provide it with harnessing tools (linters, debuggers, etc) so the agent can iterate on its own output.

cpburns2009 17 hours ago

I'd be interested in this too. I think that's what post-training can achieve but I've never looked into it.

zkmon 9 hours ago

OpenCode works awesome for me. The BigPickle model is all I want. I don't throw large work at the agent that requires a lot of reasoning, thinking, or decision making. It's my role to chop the work down to bite-size pieces and ask the fantastic BigPickle to just do the damn coding or a bit of explaining. It works very well in interactive sessions with small tasks. Not for giving it something to work on overnight.

I used Claude with a paid subscription, and Codex as well, and settled on OpenCode with free models.

hmcdona1 19 hours ago

Can someone explain how Claude Code can instantly determine what file I have open and what lines I have selected in VS Code even if it's just running in a VS Code terminal instance, yet I cannot for the life of me get OpenCode to come anywhere close to that same experience?

The OpenCode docs suggest it's possible, but it only works with their extension (not in an already-open VS Code terminal), with a very specific keyboard shortcut, and only barely at that.

aduermael 18 hours ago

I started my own fully containerized coding agent 100% in Go recently. Looking for testers: https://github.com/aduermael/herm

heywinit 12 hours ago

i like the containerization idea. i wish you used the opencode cli as the actual underlying agent.

aduermael 12 hours ago

What do you like particularly about the opencode cli?

ndom91 10 hours ago

Since this is blowing up, gonna plug my opencode/claude-code plugin that allows you to annotate LLM plans like a Google Doc, with strikethroughs, comments, etc., and loop with your agent until you're happy with the plan.

https://github.com/ndom91/open-plan-annotator

sebastianconcpt 5 hours ago

OpenCode is almost the IDE I need.

What it does well: it helps context switching by using one window to control many repos with many worktrees each.

What could it do better? It puts AI too much in control. What if I want to edit a function myself in the workspace I'm working on? Or select a snippet and refer to it in the prompt? Without that, I feel it's missing a non-negotiable feature.

xpe 5 hours ago

Do you think the design direction of “chat first” is compatible with editor first? I don’t know if any tools do both well. Seems like a fork in the road, design wise.

justacatbot 5 hours ago

The decision to build this as a TUI rather than a web app is interesting. Terminal-native tools tend to get out of the way and let you stay in flow -- curious how the context management works when you have a large codebase, do you chunk by file or do something smarter?

solarkraft 2 hours ago

It’s both! The core is implemented as a server and any UI (the TUI being one) can connect to it.

It’s actually “dumber” than any of your suggestions - they just let the agent explore to build up context on its own. “ls” and “grep” are among the most used discovery tools. This works extraordinarily well and is pretty much the standard nowadays because it lets the agent be pretty smart about what context it pulls in.
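The explore-with-shell-tools pattern described above can be sketched as a plain tool loop. Everything below (tool names, the scripted policy standing in for the model) is illustrative, not OpenCode's actual implementation:

```python
# Sketch of exploration-based context building: rather than indexing the repo
# up front, the harness exposes discovery tools and lets the model decide what
# to look at. A scripted policy stands in for the LLM here.
import subprocess

def run_tool(name: str, arg: str) -> str:
    """Execute one discovery tool and return its stdout for the transcript."""
    if name == "ls":
        proc = subprocess.run(["ls", arg], capture_output=True, text=True)
    elif name == "grep":
        # -r: recurse, -l: file names only; a cheap way to locate definitions
        proc = subprocess.run(["grep", "-rl", arg, "."],
                              capture_output=True, text=True)
    else:
        raise ValueError(f"unknown tool: {name}")
    return proc.stdout

# Each tool call and its output get appended to the conversation, so the
# model's context grows only with what it chose to inspect.
transcript = []
for tool, arg in [("ls", "."), ("grep", "import")]:
    transcript.append(f"$ {tool} {arg}\n{run_tool(tool, arg)}")
```

The appeal is that the model pays context only for what it actually asked to see, at the cost of a few extra round trips.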

__mharrison__ a day ago

This replaced Aider for me a couple months back.

I use it with Qwen 3.5 running locally when my daily limits run out on my other subscriptions.

The harness is great. Local models are just slow enough that the subscription models are easier to use. For most of my tasks these days, the model's capability is sufficient; it is just not as snappy.

plipt 4 hours ago

Could you say more about the differences between Aider and OpenCode?

I briefly dabbled with Aider some months back but never got any real work done with it. Without installing each one of these new tools I'm having trouble grokking what is changing about them that moves the LLM-assisted software dev experience forward.

psibi 16 hours ago

One thing I like about Aider is the fact that I can control the context by using /add explicitly on a subset of files. Can you achieve the same with OpenCode?

esafak 2 hours ago

Yes, using the @ sign; CC and Codex support this too.

__mharrison__ 14 hours ago

I feel like I haven't really needed to manage context with newer models. Rarely, I will restart the session to clear it out.

cyanydeez 21 hours ago

I'm curious: I've never touched cloud models beyond a few seconds. I run an AMD 395+ with the new Qwen coder. Is there any intelligence difference, or is it just speed and context? At 128GB, it takes quite a while before hitting the context wall.

__mharrison__ 14 hours ago

There's a difference in intelligence. However for 90% of what I'm doing I don't really need it. The online models are just faster.

I just did a one hour vibe session today, ripping out a library dependency and replacing it with another and pushing the library to pypi. I should take my task list and let the local model replicate the work and see how it works out.

rurban 5 hours ago

That's my favorite CLI agent, over codex, claude, copilot and qwen-code.

It has beautified markdown output, many more subagents, and access to free models, unlike Claude and Codex. Best is OpenCode with Opus 4.6 via GitHub, but the fun only lasts for a day; then you're out of tokens for a month.

jee599 10 hours ago

The security concerns here are real but not unique to OpenCode. Most AI coding agents have the same fundamental problem: they need broad file system access to be useful, but that access surface is also the attack surface. The config-from-web issue is particularly bad because it's essentially remote code execution through prompt injection.

What I'd want to see from any of these tools is a clear permissions model — which files the agent can read vs write, whether it can execute commands, and an audit log of what it actually did. Claude Code's hooks system at least gives you deterministic guardrails before/after agent actions, but it's still early days for this whole category.
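As a sketch of what such a permissions model plus audit log could look like at the tool layer (all names here are hypothetical, not Claude Code's or OpenCode's actual API):

```python
# Hypothetical guardrail around an agent's file-write tool: every attempt is
# appended to a JSONL audit log, and writes outside an allow-list are refused
# before anything touches disk.
import json
import time
from pathlib import Path

AUDIT_LOG = Path("agent-audit.jsonl")        # illustrative location
ALLOWED_ROOTS = [Path("src").resolve()]      # illustrative allow-list

def guarded_write(path: str, content: str) -> None:
    target = Path(path).resolve()
    allowed = any(root == target or root in target.parents
                  for root in ALLOWED_ROOTS)
    with AUDIT_LOG.open("a") as log:         # audit first, act second
        log.write(json.dumps({"ts": time.time(), "op": "write",
                              "path": str(target), "allowed": allowed}) + "\n")
    if not allowed:
        raise PermissionError(f"write outside allow-list: {target}")
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(content)
```

To actually be a guardrail rather than a UX feature, something like this has to be enforced below the agent (sandbox, separate user), since a check living in files the agent can edit can be rewritten by the agent.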

jy-tan 3 hours ago

I created a tool for this: https://github.com/Use-Tusk/fence

Same thoughts - I wanted a "permission manager" that defines a set of policies agnostic to coding agents. It also comes with "monitor mode" that shows operations blocked, but not quite an audit log yet though.

solarkraft 6 hours ago

This is another one of OpenCode’s current weak points in the security complex: They consider permissions a “UX feature” rather than actual guardrails. The reasoning is that you’re giving the agent access to the shell, so it’ll be able to sidestep everything.

This is of course a cop-out: They’re not considering the case in which you’re not blindly doing that.

Fun fact: in the default setup, the agent can fully edit all of the harness's files, including permissions and session history. So it's pretty trivial for it to a) escalate privileges and then even b) delete evidence of something nefarious happening.

It’s pretty reckless and even pretty easy to solve with chroot and user permissions. There just has been (from what I see currently) relatively little interest from the project in solving this issue.

embedding-shape 9 hours ago

Granted, I just started playing around with OpenCode (but been using Codex and Claude Code since they were initially available, so not first time with agents), but anyways:

> they need broad file system access to be useful, but that access surface is also the attack surface

Do they? You give them access to one directory typically (my way is to create a temporary docker container that literally only has that directory available, copied into the container on boot, copied back to the host once the agent completed), and I don't think I've needed them to have "broad file system access" at any point, to be useful or otherwise.

So that leads me to think I'm misunderstanding either what you're saying, or what you're doing?
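A minimal version of that copy-in/copy-out container wrapper might be assembled like this. The image name, mount points, and `opencode run` invocation are all placeholders for whatever your setup uses:

```python
# Build (but don't execute) a docker command that gives the agent a throwaway
# copy of exactly one directory: the host path is mounted read-only, copied to
# a scratch dir inside the container, and the container gets no network.
# Reviewing results and copying them back to the host is a separate step.
import shlex

def sandboxed_agent_cmd(workdir: str, image: str = "agent-sandbox") -> list[str]:
    inner = "cp -r /src /work && cd /work && opencode run"
    return [
        "docker", "run", "--rm",
        "--network", "none",          # no exfiltration path
        "-v", f"{workdir}:/src:ro",   # host dir is read-only inside
        image, "sh", "-c", inner,
    ]

cmd = sandboxed_agent_cmd("/home/me/project")
print(shlex.join(cmd))
```

One caveat: `--network none` also blocks the model API itself, so a real setup needs either a local model or an egress proxy that allow-lists the provider endpoint.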

thevinchi 8 hours ago

This is the way. If you're not running your agent harness/framework in a container with explicit bind mounts or copy-on-build, then you're doing it wrong. Whenever I see someone complain about filesystem access and security risk, it's a clear signal of incompetence imo.

aniviacat 9 hours ago

Codex has some OS-level sandboxing by default that confines its actions to the current workspace [1].

OpenCode has no sandboxing, as far as I know.

That makes Codex a much better choice for security.

[1] https://developers.openai.com/codex/concepts/sandboxing

luk4 9 hours ago

Greywall/Greyproxy aims to address this. I haven't tried it yet though.

https://greywall.io/

knocte 8 hours ago

Or just run it in your VPS?

weitendorf 7 hours ago

I built a product solving this problem about a year ago: basically a serverless, container-based, NATed VS Code where you can e.g. "run Claude Code" (or this) in your browser on a remote container.

There's a reason I basically stopped marketing it: Cursor took off so much back then, and now people are running Claude/Codex locally. First, this is something people only actually start to care about once they've been bitten by it hard enough to remember how much it hurt, and most people haven't gotten there yet (but it will happen more as the models get better).

Also, the people who simultaneously care a lot about security and systems work AND are AI enthusiasts AND are generally highly capable are potentially building in the space, but not really customers. The people who care a lot about security and systems work aren't generally decision makers or enthusiastic adopters of AI products (only just now are they starting to be), and the people who are super enthusiastic about AI generally aren't interested in spending a lot of time on security stuff. To the extent they do care about security, they want it to Just Work and let them keep building super fast. The people who are decision makers but less on the security/AI trains need to see this happen more, and hear about the problem from other executives, before they're willing to spend on it.

To the extent most people actually care about this, they still want things to Just Work like they do now, and to either keep building super fast or not think about AI at all. It's actually extremely difficult to give granular access to agents, because the entire point is them acting autonomously or keeping you in a flow state. You either need to have a threat model really compatible with doing so (e.g. open source work, developer credentials only used for development and kept separate from production/corp/customer data), spend a lot of time setting things up so that agents can work within your constraints (which also requires a willingness to commit serious amounts of time or resources to security, and an understanding of it), or spend a lot of time approving things and nannying it.

So right now everybody is just saying, fuck it, I trust Anthropic or Microsoft or OpenAI or Cursor enough to just take my chances with them. And people who care about security are of course appalled at the idea of just giving another company full filesystem access and developer credentials in enterprises where the lack of development velocity and high process/overhead culture was actually of load-bearing importance. But really it's just that secure agentic development requires significant upfront investment in changing the way developers work, which nobody is willing to pay for yet, and has no perfect solutions yet. Dev containers were always a good idea and not that much adopted either, btw.

It takes a lot more investment in actually providing good permissions/security for agent development environments still too, which even the big companies are still working on. And I am still working on it as well. There's just not that much demand for it, but I think it's close.

cgeier a day ago

I‘m a big fan of OpenCode. I’m mostly using it via https://github.com/prokube/pk-opencode-webui which I built with my colleague (using OpenCode).

systima 21 hours ago

Open Code has been the backbone of our entire operation (we used Claude Code before it, and Cursor before that).

Hugely grateful for what they do.

james2doyle 21 hours ago

What caused the switch? Also, are you still trying to use Claude models in OpenCode?

systima 10 hours ago

Sorry, I missed part of your question:

What caused the switch was that we're building AI solutions for sometimes price-conscious customers, so I was already familiar with the pattern of "Use a superior model for setting a standard, then fine-tuning a cheaper one to do that same work".

So I brought that into my own workflows (kind of) by using Opus 4.6 to do detailed planning and one 'exemplar' execution (with 'over documentation' of the choices), then after that, use Opus 4.6 only for planning, then "throw a load of MiniMax M2.5s at the problem".

They tend to do 90% of the job well, then I sometimes do a final pass with Opus 4.6 again to mop up any issues, this saves me a lot of tokens/money.

This pattern wasn't possible with Claude Code, thus my move to Open Code.

zingar 20 hours ago

You can access anthropic models with subscription pricing via a copilot license.

systima 14 hours ago

Yes I regularly plan in Opus 4.6 and execute in “lesser” models ie MiniMax

frankdejonge 10 hours ago

I've used it but recently moved back to plain Claude Code. We use Claude at the company, and weirdly the experience of using OpenCode has become less and less productive. I'm a bit sad about it, as it was the first experience that really clicked and that I got great results out of. I'm actually curious if Anthropic knows which client is used and whether they negatively influence the experience on purpose. It's very difficult to prove, because nothing about this is exact science.

ec109685 2 hours ago

I think Anthropic just heavily RLs their model to work best with Claude Code's particular ways of going about things.

All the background capability Claude Code now has makes things way more complex, and I saw a meaningful improvement with 4.6 versus 4.5, so I imagine other harnesses will take time to catch up.

lairv 21 hours ago

I tried to use it, but OpenCode won't even open for me on Wayland (Ubuntu 24.04), whichever terminal emulator I use. I wasn't even aware a TUI could have compatibility issues with Wayland.

flexagoon 21 hours ago

> I wasn't even aware TUI could have compatibility issues with Wayland

They shouldn't, as long as your terminal emulator doesn't. Why do you think it's Wayland related?

mhast 20 hours ago

Strange. I've been running it on several different Ubuntu 24.04 machines with the standard terminal with no issues.

smetannik 21 hours ago

This shouldn't be related to Wayland.

It works perfectly fine on Niri, Hyprland and other Wayland WMs.

What problem do you have?

lairv 20 hours ago

Blank screen, and it's referenced in the official docs as potentially a Wayland issue https://opencode.ai/docs/troubleshooting/#linux-wayland--x11...

I didn't dig further

Seems like there's many github issues about this actually

https://github.com/anomalyco/opencode/issues/14336

https://github.com/anomalyco/opencode/issues/14636

https://github.com/anomalyco/opencode/issues/14335

samtheprogram 21 hours ago

I doubt it's Wayland related. I'm on Wayland and have never had any issues, and it's a TUI, where the terminal emulator is what does (or doesn't do) the GPU work. What led you to that conclusion?

lairv 20 hours ago

This issue: https://github.com/anomalyco/opencode/issues/9505

And then the official docs: https://opencode.ai/docs/troubleshooting/#linux-wayland--x11...

> Linux: Wayland / X11 issues

> On Linux, some Wayland setups can cause blank windows or compositor errors.

> If you’re on Wayland and the app is blank/crashing, try launching with OC_ALLOW_WAYLAND=1.

> If that makes things worse, remove it and try launching under an X11 session instead.

OC_ALLOW_WAYLAND=1 didn't work for me (Ubuntu 24.04)

Suggesting to use a different display server to use a TUI (!!) seems a bit wild to me. I didn't put a lot of time into investigating this so maybe there is another reason than Wayland. Anyway I'm using Pi now

Gigachad 21 hours ago

Probably vibe coded

pixelmelt 21 hours ago

Some of the more recent versions of it had memory leaks so you couldn't just leave it on in the background

boomskats 10 hours ago

I've been using opencode for a few months and really like it, both from a UX and a results perspective.

It started getting increasingly flaky with Anthropic's API recently, so I switched back to Claude Code for a couple of days. Oh my, what a night and day difference. Tokens, MCP use, everything.

For anyone reading at OpenAI, your support for OpenCode is the reason I now pay you 200 bucks a month instead.

embedding-shape 9 hours ago

I've been paying OpenAI 200 bucks a month for what feels like forever by now, but used OpenCode for the first time yesterday, been using Codex (and Claude Code from time to time, to see if they've caught up with Codex) since then.

But I don't use MCP, don't need anything complicated, and am not sure what OpenCode actually offers on top. The UI is slightly nicer (but with oh so much heavier resource usage), both projects' source code seems vibecoded, and the architecture is held together with hopes and dreams, but in reality it's a minor difference, really.

Also, I didn't find a way in OpenCode to do the "Fast Mode" that Codex has available. Is that just not possible, or am I missing some setting? Not Codex-Spark, but the mode that toggles faster inference.

madduci 13 hours ago

Have they "squatted" the name? It's the same name for the digital Sovereignty initiative in Germany

https://opencode.de/

embedding-shape 8 hours ago

If it was a somewhat unique name, then yeah maybe. But "opencode" is probably as generic as you could make it, hard to claim to be "squatting" something so well used already... Earliest project on GitHub named "opencode" seems to date back to 2010, but I'm sure there are even earlier projects too: https://github.com/search?q=opencode&type=repositories&s=upd...

heywinit 13 hours ago

You'd be surprised: the name was actually a controversy on X/Twitter, since OpenCode was originally the idea of another dev who joined the Charm CLI team. They wanted to keep that name, but Dax somehow (?) ended up squatting it. The Charm team has since renamed their tool to "Crush", which matches their other tools a lot better than "opencode" did.

dominotw 8 hours ago

oh yea that whole drama turned me off from this project. dax guy seems to be some sort of grumpy cat.

Fabricio20 15 hours ago

I wish they would add back support for Anthropic Max/Pro plans via calling the claude CLI in -p mode. As I understand it, that's still very much allowed usage of the Claude Code CLI (you are still using the claude CLI as it was intended anyway, and it fixes the issue of cache hits, which I believe was the primary reason Anthropic sent them the C&D). I love the UX of OpenCode (I loved setting it up in web mode on my home server and coding from the web browser vs. doing Claude Code over SSH), but until I can use my Pro/Max subscription I can't go back; the API pricing is way too much for my third world country wallet.

griffiths 11 hours ago

They had that?! I saw that some people wrote skills and plugins to call the claude CLI and gemini CLI to still be able to use the subscription. I also wish this was supported out of the box, something similar to Goose CLI providers or ACP providers (https://block.github.io/goose/docs/guides/acp-providers). But I don't want to spend time testing yet another agent harness or change my workflow when I've somewhat gotten used to one way of working on things (the churn is real).

unixfox 11 hours ago

I guess you could look into my plugin for that use case of CC inside opencode: https://github.com/unixfox/opencode-claude-code-plugin

tomasz-tomczyk 7 hours ago

I'd love for all these tools to standardise on the structure of plugins / skills / commands / hooks etc., so I can swap between them to compare without feeling handicapped!

zingar 20 hours ago

Anecdotal pros and one annoyance:

- GH Copilot API is a first-class citizen, with access to multiple providers' models at a very good price with a Pro plan

- No terminal flicker

- It seems really good with subagents

- I can't see any terminal history inside my Emacs vterm :(

gregman1 5 hours ago

Is there any initiative to port it to Rust (or preferably Go) and remove the weird tracking/telemetry?

I guess Go is better, since we need goroutines that will basically wait on I/O and API calls.

https://github.com/charmbracelet/crush ?

delduca 4 hours ago

You can do it.

pink_eye 10 hours ago

Question: How do we use Agents to Schedule and Orchestrate Farming and Agricultural production, or Manufacturing assembly machines, or Train rail transportation, or mineral and energy deposit discovery and extraction or interplanetary terraforming and mining, or nuclear reactor modulation, or water desalination automation, or plutonium electric fuel cell production with a 24,000 year half-life radiation decay, or interplanetary colonization, or physics equation creation and solving for faster-than-light travel?

- With love The Official Pink Eye #ThereIsNoOther

anonyggs 8 hours ago

I don’t know why people use opencode. I mean it’s better than copilot but it’s pretty terrible in general when there are better options available.

embedding-shape 8 hours ago

Rather than listing what tooling you think is worse than OpenCode, wouldn't it make sense to list what tooling you think is better?

anonyggs 7 hours ago

Amp. CC. Codex. They all have a better harness.

alansaber 8 hours ago

Interested whether these TUI agent systems have any unique features. They all seem to be shipping the standard agent swarm / background-agent approach.

Yokohiii 7 hours ago

"This guy is coding everything in the terminal, he must be really good!"

JSR_FDED 16 hours ago

I’ve been having a very good experience with OpenCode and Kimi 2.5. It’s fast enough and smart enough that I can stay in a state of flow.

hereme888 a day ago

The reason I'm switching again next month, from Claude back to OpenAI.

hungryhobbit 21 hours ago

Yeah, support the company that promised to help your government illegally mass surveil and mass kill people, because they support a use case slightly better than the non-mass-murdering option.

stavros 21 hours ago

Both of them promised to help their government illegally mass surveil and mass kill people. One of them just didn't want it done to US citizens.

I'm not a US citizen, so both companies are the same, as far as I'm concerned.

hereme888 4 hours ago

That's a gross exaggeration. But to your point, I could say the same for almost any product I use from Big Tech, every laptop company I buy my hardware from, etc. I'm sure the same applies to you. I can't fight every vendor all the time. For now, I pick what works best for my use case.

xvector 19 hours ago

You're right, Anthropic shouldn't have even taken a moral stance here at all. They should have just gone full send and allowed everything, because there will never be satisfying some people. Why even try?

everlier 21 hours ago

OpenCode is an awesome tool.

Many folks coming from other tools only get exposed to the same functionality they're used to, but it offers much more than other harnesses, especially for remote coding.

You can start a service via `opencode serve`; it can be accessed from anywhere and is a great experience on mobile, apart from a few bugs. It's a really good way to work with your agents remotely and goes really well with Tailscale.

The WebUI that they have can connect to multiple OpenCode backends at once, so you may use multiple VPS-es for various projects you have and control all of them from a single place.

Lastly, there's a desktop app, but TBH I find it redundant when WebUI has everything needed.

Make no mistake though, it's not a perfect tool. My gripes with it:

- There are random bugs with loading/restoring state of the session

- Model/Provider selection switch across sessions/projects is often annoying

- I had a bug making Sonnet/Opus unusable from my phone because the phone's clock was 150 ms ahead of the laptop's (ID generation)

- Sometimes agents get randomly stuck; it especially sucks for long/nested sessions

- The WebUI on my laptop just completely forgot all the projects one day

- `opencode serve` doesn't pick up new skills automatically, it needs to be restarted

Rithan 10 hours ago

Interesting timing — I've been building on Cloudflare Workers with edge-first constraints, and the resource footprint of most AI coding tools is striking by comparison. A TypeScript agent that uses 1GB+ RAM for a TUI feels like the wrong abstraction. The edge computing model forces you to think differently about state, memory, and execution — maybe that's where lighter agentic tools will emerge.

fhouser 7 hours ago

aider.chat was my entry to agentic coding. OpenCode followed. Not looking back.

arikrahman 21 hours ago

Can anyone clarify how this compares with Aider?

derodero24 17 hours ago

Being able to assign different models to subagents is the feature I've been wanting. I use Claude Code daily and burning the same expensive model on simple file lookups hurts. Any way to set default model routing rules, or is it manual per task?

instalabsai 33 minutes ago

In Claude Code, you can use the (undocumented command) "/model opusplan" to use opus for planning and sonnet for development

zuntaruk 17 hours ago

With OpenCode, I've found that I can do this by defining agents, assigning each agent a specific model to use. Then I manually flip to that agent when I want it, or define some light rules in my global AGENTS.md file to give some direction, and OpenCode will automatically subtask out to the agent, which then forces the use of the defined model.
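For reference, per-agent model assignment like this lives in OpenCode's JSON config. The shape is roughly as follows, though the agent name, model ID, and prompt here are made up, and field names may differ between versions, so check the current docs:

```json
{
  "agent": {
    "docs-writer": {
      "description": "Writes and edits documentation only",
      "model": "anthropic/claude-haiku-4-5",
      "prompt": "You only touch Markdown files."
    }
  }
}
```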

frasermarlow 19 hours ago

If you are doing data engineering, there is a specific fork of Open Code with an agentic harness for data tasks: https://github.com/AltimateAI/altimate-code

solenoid0937 18 hours ago

The maintaining team is incredibly petty though. Tantrums when they weren't allowed to abuse Claude subscriptions and had to use the API instead. They just removed API support entirely.

Maxious 17 hours ago

> we did our best to convince anthropic to support developer choice but they sent lawyers

https://x.com/i/status/2034730036759339100

solenoid0937 8 hours ago

Anthropic has zero problems with API billing, there's no chance they told him to rip that out.

Reading through his X comments and GitHub comments he is behaving immaturely. I don't trust what he's saying here. Ripping out Claude API support was just throwing a tantrum. Weird given his age - he's old enough to be more mature.

lemontheme 12 hours ago

‘abuse’. The same rate limits apply, the requests still go to the same endpoints.

Even as a CC user I’m glad someone is forcing the discussion.

My prediction: within two years ‘model neutrality’ will be a topic of debate. Creating lock-in through discount pricing is anti-competitive. The model provider is the ISP; the tool, the website.

solenoid0937 8 hours ago

> The same rate limits apply, the requests still go to the same endpoints.

That is not the point. That is a mere technicality.

You signed a contract. If you ignore the terms of that contract and use the product in a way that is explicitly prohibited, you're abusing the product. It is as simple as that.

They offer a separate product (API) if you don't like the terms of the contract.

Also, if you really want to get technical: the limits are under the assumption that caching works as intended, which requires control of the client. 3P clients suck at caching and increase costs. But that is not the overarching point.
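The caching point is easy to see with back-of-envelope numbers. The prices below are illustrative, not any provider's actual rate card; cache reads are commonly billed at around a tenth of normal input tokens:

```python
# Why a client with broken prompt caching costs far more: an agent resends
# its whole context every turn, and only caching makes that cheap.
base_input_per_mtok = 3.00   # $ per million uncached input tokens (illustrative)
cache_read_per_mtok = 0.30   # $ per million cached input tokens (~10% of base)
context_mtok = 0.2           # a 200k-token context resent on each turn
turns = 50

cached_cost = turns * context_mtok * cache_read_per_mtok
uncached_cost = turns * context_mtok * base_input_per_mtok
print(f"well-cached session: ${cached_cost:.2f}, uncached: ${uncached_cost:.2f}")
```

Under these assumptions the same session costs 10x more when the client defeats the cache, which is why a provider pricing flat-rate subscriptions cares which client is attached.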

> Creating lock-in through discount pricing is anti-competitive.

Literally everyone does this. OpenAI is doing this with Codex, far more than Anthropic is. It's not great but players much bigger than Anthropic are using discount pricing to create an anti-competitive advantage.

jy-tan 15 hours ago

Agree, I find it hard to support them when the team is so obnoxious on X.

thdxr 7 hours ago

API support was never removed

p0w3n3d a day ago

For some reason opencode does not have an option to disable the streaming HTTP client, which renders some inference providers unavailable...

There's also a request and a PR to add such an option, but it was closed due to "not adhering to community standards".

dalton_zk 20 hours ago

I have been using OpenCode and admire their effort to create something huge that helps a lot of developers around the world, connecting LLMs to our daily work without using a browser!

diablevv 17 hours ago

The MCP (Model Context Protocol) support is what makes this interesting to me. Most coding agents treat the file system and shell as the only surfaces — MCP opens up the possibility of connecting to any structured data source or API as a first-class tool without custom integration work each time.

Curious how the context window management works in practice. With large repos, the "what files to include" problem tends to dominate — does it have a strategy beyond embedding-based retrieval, or is that the main approach here?

Duplicake 21 hours ago

Why is this upvoted again on Hacker News? This is an old thing.

zer0tonin 21 hours ago

Because this site is basically dead for any other subject than vibecoding and AI agents.

TheRealPomax 3 hours ago

I want to love this, but the "just install it globally, what could go wrong?" approach is simply not happening for an AI-written codebase. Open source was never truly "you can trust it because everyone can vet it", so you had to do your due diligence. Now with AI codebases, it's "it might be open source, but no one actually knows how it works, and only other AIs can check if it's safe, because no one can read the code". Who's getting the data? No idea. How would you find out? I guess you can Wireshark your network? This is not a great feeling.

justindotdev 11 hours ago

why is this trending, we've been using it since its beta

busfahrer 21 hours ago

I haven't been able to get their CLI to reliably edit files when using local models. Anybody else having the same problem?

arunakt 16 hours ago

Does it support hybrid models, e.g. deep research by Model 1 vs. faster responses from Model 2?

convnet 16 hours ago

Yes

hacker_88 4 hours ago

Use it with zed

wagslane 16 hours ago

I've been using opencode for months with codex. best combo I've tried so far

siliconc0w a day ago

I reach for OpenCode + Kimi to save tokens on lower priority stuff and because it's quite fast on Fireworks AI.

polski-g 20 hours ago

I'm 90% sure Fireworks serves up quantized models.

comboy 16 hours ago

Open<X> is becoming a bit like that Hindu symbol associated with well-being...

kristopolous 21 hours ago

Gemini's CLI is clearly a fork of it, btw.

sunaookami 11 hours ago

No because Gemini CLI is slow and barely functioning.

Squarex 10 hours ago

It is clearly not. Why would you think so?

fareesh 4 hours ago

easily the best one

nullorigin 2 hours ago

yep

nopurpose a day ago

Claude Code subscription is still usable, but requires plugin like https://github.com/griffinmartin/opencode-claude-auth

solenoid0937 18 hours ago

Or just don't abuse the subscription and use the API instead.

canadiantim a day ago

Sure but will you get banned by anthropic anyway?

vadepaysa a day ago

Things that make me an OpenCode fanboy:

1. The OpenCode source code is even more awesome. I have learned so much from the way they have organized tools, agents, settings, and prompts.

2. models.dev is an amazing free resource of LLM endpoints these guys have put together.

3. OpenCode Zen almost always has a FREE coding model that you can use for all kinds of work. I recently used the free tier to organize and rename all my documents.

moron4hire 7 hours ago

What is this pervasive use of yield*? I've been writing TypeScript for quite some time and I've never seen yield* used in this way: https://github.com/anomalyco/opencode/blob/dev/packages/open...

vexna 6 hours ago

They are using Effect-TS, which uses yield* as its equivalent to Haskell's do notation.

sankalpnarula 20 hours ago

I personally like this better than claude code

solomatov 21 hours ago

Do they have any sandbox out of the box?

l72 7 hours ago

I use bubblewrap. This ensures it only has access to the current working directory and its own configuration. No ability to commit or push (since it doesn't have access to SSH keys) or to run aws commands (no access to the awscli configuration), and so on. It can't read anything from my .envrc, since it doesn't have access to direnv or the parent directory. You could lock down the network even further if you wanted to limit web searches.

  exec bwrap \
    --unshare-pid \
    --unshare-ipc \
    --unshare-uts \
    --share-net \
    --bind "$OPENCODE_ROOT" "$OPENCODE_ROOT" \
    --bind "$CURRENT_DIR" "$CURRENT_DIR" \
    --bind "$HOME/.config/opencode/" "$HOME/.config/opencode/" \
    --ro-bind /bin /bin \
    --ro-bind /etc /etc \
    --ro-bind /lib /lib \
    --ro-bind /lib64 /lib64 \
    --ro-bind /usr /usr \
    --bind /run/systemd /run/systemd \
    --tmpfs /tmp \
    --proc /proc \
    --dev /dev \
    --setenv OPENCODE_EXPERIMENTAL_LSP_TOOL true \
    --setenv EDITOR emacs \
    --setenv PATH "$OPENCODE_BINDIR:/usr/bin:/bin" \
    --setenv HOME "$HOME" \
    -- \
    "opencode" "$@"

jy-tan 3 hours ago

I built Fence for this! https://github.com/Use-Tusk/fence

fence -t code -- opencode

decodebytes 20 hours ago

nope - most folks wrap it in nono: https://nono.sh/docs/cli/clients/opencode

tallesborges92 20 hours ago

I’m happy with the one I built. (ZDX)

epec254 21 hours ago

Honestly I was a Claude code only guy for a while. I switched to opencode and I’m not going back.

IMO, the web UI is a killer feature - it’s got just enough to be an agent manager - without any fluff. I run it on my remote VMs and connect over HTTP.

caderosche 21 hours ago

I feel like Anthropic really need to fork this for Claude Code or something. The render bugs in Claude Code drive me nuts.

QubridAI a day ago

OpenCode feels like the "open-source Copilot agent" moment: more control, hackability, and no black-box lock-in.

thefnordling 21 hours ago

Opus/Sonnet 4.6 can be used in OpenCode with a GitHub Copilot subscription.

solomatov 21 hours ago

Does github copilot ToS allow this?

swingboy 21 hours ago

I don't see why not. It's just using the Github Copilot API.

singpolyma3 21 hours ago

OpenCode vs Aider vs Crush?

polski-g 20 hours ago

For its plugins alone, OpenCode is better than all of them.

avereveard a day ago

isn't this the one with default-on telemetry that requires a code change to turn off?

flexagoon a day ago

No

jedisct1 21 hours ago

For open models with limited context, Swival works really well: https://swival.dev

sergiotapia a day ago

If I wanted to switch from Claude Code to this - what openai model is comparable to opus 4.6? And is it the same speed or slower/faster? Thank you!

pimeys a day ago

GPT 5.4 has been the winner this week. Last week Opus 4.6. You can use both in OpenCode.

solenoid0937 18 hours ago

5.4 kind of falls apart in large projects.

arbuge a day ago

How does it compare to using GPT 5.4 inside Codex?

pimeys 21 hours ago

nxpnsv a day ago

Well not anymore with Claude pro…

alexdrydew 17 hours ago

rbanffy a day ago

If you want faster, anything running on a Cerebras machine will do.

Never tried it for much coding though.

eli 21 hours ago

Outside of their (hard to buy) GLM 4.7 coding plans, it's also extremely expensive.

swyx a day ago

do you care about harness benchmarks or no?

sergiotapia a day ago

Just a data point, I would need to use it for my workflows. I do have a monorepo with a root level claude.md, and project level claude.md files for backend/frontend.

swyx 44 minutes ago

awaseem 15 hours ago

Love opencode!

DeathArrow 12 hours ago

I see it uses massive resources for a TUI: 1GB+ of RAM.

I wonder why they used TypeScript and not a more resource-efficient language like C, C++, Rust, or Zig.

Since their code is generated by AI, human preferences shouldn't matter much, and AI is happy to work with any language.

globular-toast 12 hours ago

I use this. I run it in a sandbox[0]. I run it inside Emacs vterm so it's really quick for me to jump back and forth between this and magit, which I use to review what it's done.

I really should look into more "native" Emacs options, as I find using vterm a bit of a clunky hack. But I'm just not that excited about this stuff right now. I use it because I'm lazy, that's all. Right now I'm actually getting into woodworking.

[0] https://blog.gpkb.org/posts/ai-agent-sandbox/

villgax 15 hours ago

The fact that I wasn't able to hook up a local llama.cpp server without fuss kind of defeats the whole "open" point. Open for proprietary APIs only?

TZubiri 17 hours ago

I started with Codex, then switched to OpenCode, then switched to Codex.

OpenCode just has more bugs, and it's incredibly derivative, so it doesn't really do anything more than Codex does.

The advantage of OpenCode is that it can use any underlying model, but that's a disadvantage because it breaks the native integration. If you use Opus + Claude Code, or Gpt-Codex + Codex App, you are using it the way it was designed to be used.

If you don't actually use different models, or plan to switch, or somehow value vendor neutrality strategically, you are paying a large cost without much reward.

This is a general rule: vendor neutrality is often seen as a generic positive, but it is actually a tradeoff. If you just build on top of AWS, for example, you can make use of its features and build much faster and simpler than if you use Terraform.

alsjdG19 17 hours ago

You do not "write" code. Stop these euphemisms. It is an intellectual prosthetic for feeble-minded people that plagiarizes code written by others. And it connects to the currently "free" providers who own the means of plagiarizing.

There is nothing open about it. Please do not abuse the term "open" like in OpenBSD.

andrekandre 9 hours ago

  > Please do not abuse the term "open" like in OpenBSD.
this is such a pet peeve of mine; all these "open" products (except when they're not)

nbevans 3 hours ago

Codex is 15MB of memory per process. Just sayin'

ftchd a day ago

minus Claude login

ymaeda 16 hours ago

nice

gigatexal 19 hours ago

If you have to post something like this you've already lost the plot.

I only boot my windows 11 gaming machine for drm games that don’t work with proton. Otherwise it’s hot garbage

voidfunc 20 hours ago

I fucking love OpenCode.

alienchow 16 hours ago

What I don't understand is that, if coding agents are making coding obsolete, why do these vibe coders not choose a language that doesn't set their users' compute resources on fire? Just vibe rust or golang for their cli tools, no one reviews code slop nowadays anyway /s.

I do not understand the insistence on using JavaScript for command line tools. I don't use rust at all, but if I'm making a vibe coded cli I'm picking rust or golang. Not zig because coding agents can't handle the breaking changes. What better test of agentic coders' conviction in their belief in AI than to vibe a language they can't read.

anonym29 21 hours ago

Just remember, OpenCode is sending telemetry to their own servers, even when you're using your own locally hosted models. There are no environment variables, flags, or other configuration options to disable this behavior.¹

At least you can easily turn off telemetry in Claude Code - just set CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC to 1.

You can use Claude Code with llama.cpp and vLLM, too right out of the box with no additional software necessary, just point ANTHROPIC_BASE_URL at your inference server of choice, with any value in ANTHROPIC_API_KEY.
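Concretely, the setup described above comes down to a few environment variables; the port here is a placeholder for wherever your llama.cpp or vLLM server actually listens:

```shell
# Disable Claude Code's non-essential traffic and point it at a
# local inference server. 127.0.0.1:8080 is a placeholder endpoint.
export CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1
export ANTHROPIC_BASE_URL="http://127.0.0.1:8080"
export ANTHROPIC_API_KEY="any-non-empty-value"  # not validated by a local server
# then launch: claude
```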

Some people think that Anthropic could disable this at any time, but that's not really true - you can disable automatic updates and back up and reuse native Claude Code binaries, ensuring Anthropic cannot change your existing local Claude Code binary's behavior.

With that said, I like the idea of an open source TUI agent that won't spy on me without my consent much better than a closed source TUI agent whose telemetry I can effectively neuter, but sadly, OpenCode is not the former. It's just another piece of VC-funded spyware that's destined for enshittification.

¹https://github.com/anomalyco/opencode/blob/4d7cbdcbef92bb696...

debazel 21 hours ago

Are you sure that endpoint is sending all traffic to opencode? I'm not familiar with Hono but it looks like a catch all route if none of the above match and is used to serve the front-end web interface?

flexagoon 21 hours ago

You are correct, it is indeed a route for the web interface

anonym29 21 hours ago

updated post accordingly

ianschmitz 21 hours ago

That linked code is not used by the opencode agent instance though right? Looks related to their web server?

flexagoon 21 hours ago

They don't. That is just the route for their WebUI, which is completely optional.

kristopolous 21 hours ago

I've sometimes thought about making things that just send garbage to any data-collecting service.

You'd be surprised how useless datasets become with even ~10% garbage data when you don't know which data is garbage.
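A toy illustration of the idea: corrupt a modest fraction of records with plausible-looking junk, leaving no marker of which rows were touched. The record shape and junk values here are made up for the example:

```python
import random

def poison(records, rate=0.1, seed=None):
    """Replace roughly `rate` of the records with plausible-looking
    junk, leaving no indication of which rows were corrupted."""
    rng = random.Random(seed)
    junk_values = ["", "N/A", "unknown", "9999", "test"]
    out = []
    for rec in records:
        if rng.random() < rate:
            rec = {k: rng.choice(junk_values) for k in rec}
        out.append(dict(rec))
    return out

# A hypothetical telemetry payload: with ~10% junk mixed in, a
# consumer can no longer trust any individual record.
clean = [{"event": "session_start", "duration_ms": 120}] * 100
mixed = poison(clean, rate=0.1, seed=42)
```

Since the junk is drawn from values that also occur in real data, filtering it back out requires exactly the per-record ground truth the collector doesn't have.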

cyanydeez 21 hours ago

Does opencode still work if you blackhole the telemetry?

hippycruncher22 21 hours ago

this is a big red flag

delduca 20 hours ago

Sadly, Anthropic has blocked the usage of Claude on it.

tamimy 4 hours ago

You can use Github Copilot and also use Claude that way.

ricardobeat 20 hours ago

No, they haven’t. You can use claude like any other model via API, you just can’t reuse your subscription token.

evulhotdog 17 hours ago

There’s plenty of options to get around that.

swarmgram 19 hours ago

This is extremely cool; will download now and check it out. Thank you!