DeepSeek V4–almost on the frontier, a fraction of the price (simonwillison.net)
116 points by indigodaddy 17 hours ago
rsanek 7 minutes ago
I'm not sure I'd call it "almost on the frontier," but I do think that v4 Pro is the most usable coding model I've seen out of China. I've used it via Ollama Cloud (coding) and OpenRouter (data processing). Feels Sonnet-level to me -- solid at implementation when given a specification, but falls a good bit short of Opus 4.7 max thinking when planning out larger changes or when given open-ended prompts.
wg0 9 hours ago
Deepseek v4 Pro feels like Claude Opus 4.6 in its personality, but here's what I found out about costs:
I cut Deepseek v4 loose on a decent-sized TypeScript codebase and asked it to focus only on a single endpoint, go in depth on it layer by layer (API, DTOs, service, database models), form a complete picture of the types involved and introduced, and ensure no ad hoc types were being introduced.
It produced a very brief but very to-the-point summary of the types being introduced and which of them were redundant, etc.
Then I asked it to simplify it all.
It obviously went through lots of files in both prompts, but the total cost? Just $0.09 for the Pro version.
On Claude Opus I think (from past experience, before the price hikes) these two prompts alone would easily have burned somewhere between $9 and $13, with not much extra benefit.
Note: I didn't use OpenRouter; I used the Deepseek API directly, because OpenRouter itself was being rate limited by Deepseek.
ithkuil 2 hours ago
Even taking into account the fact that they are billing at a 75% discount, it's still considerably cheaper.
amelius an hour ago
Aren't they all billing at a discount?
stavros 29 minutes ago
How did you use it? OpenRouter, or provider directly?
baldai an hour ago
The only similarity it has to Opus 4.6 is the 4 in the name. I do not understand these dishonest comparisons. OSS models are cool, cheap and promising for the future -- but why are we pretending they are better than they are?
gmerc 43 minutes ago
Speak for yourself. I found switching from Opus 4.7 to be completely painless and, in fact, given the reliability of Anthropic’s API, less friction despite the slower response times. Zero issues on a large monorepo.
deaux 4 hours ago
I'm surprised that people here don't care at all about these models openly training on your data, especially if you use them straight from the model developer. Whereas things like "GitHub now automatically opts everyone into using their code for model training" get hundreds of justifiably angry comments, I never see this brought up anymore on posts like these talking about using Chinese models through OpenRouter. This might be explained by "well they're different people", but the difference is too stark for that to be the whole explanation.
edg5000 6 minutes ago
My policy is that I don't allow agents to access all code. Some of it is shielded behind bind mounts. Maybe this is a pathetic, artisanal (or ego-driven) reaction of mine to the inevitable. I allow them to work on about 90% of the code (most codebases fully), with some code being considered too valuable to expose to the vendor. When data is involved, LLMs only get to see anonymized data.
This cute policy of mine won't affect anything though. The more we use the models, the more the models will replace this kind of work. Centralisation of power is inevitable; in Medieval Europe, state and church ruled. In modern times, but before the internet, it was probably state and banks. Maybe with ongoing digitization (bank offices disappearing) making banks less costly to operate, combined with bank bailouts, governments will fully nationalize banks, or at least banks will consolidate.
Then the AI companies will consolidate with the internet information and communication companies (Google/Meta for the US, Alibaba/Tencent for China). Maybe we'll end up with a few de facto governmental megacorps that rule in tandem and in close cooperation with the formal government, which might handle mostly infra, utilities and the army. The megacorps would control the narrative more and take on more of a paternal role (educating and protecting the citizens, normally handled by formal governments).
Does this make sense?
dbeley 42 minutes ago
The cool thing about open-weights models is that you are free to use alternative providers that won't phone home to the original model creators.
I see 6 alternative providers listed on OpenRouter for DeepSeek V4 Pro, for example.
duskdozer an hour ago
What do you mean specifically? Data passed through OpenRouter? Or that they too indiscriminately ingest data all over the web? If the former, I assume it's just that anyone still using them just doesn't care where the data comes from. If the latter, well, it seems like every day there's some news on some new model from somewhere, and it takes dedication to complain every time. There's also the factor that I believe DeepSeek is more open with the model, while others keep it entirely proprietary, which feels fairer and (personally) is also less offensive.
gmerc 2 hours ago
Because they give it away for free and offer APIs at very acceptable rates. Not that hard to figure out; Robin Hood stealing our data tax back comes to mind.
deaux 2 hours ago
GitHub is free.
pheggs 3 hours ago
I am personally okay with helping them as long as they publish the models and don't keep them closed. And I don't trust the settings where providers say they won't train on it.
prism56 3 hours ago
If the data is open source on GitHub, then in my opinion it should be fair game.
notrealyme123 an hour ago
Things being public should not be enough. Just because someone leaked your medical information to the public via a data breach doesn't make it fair game. There should be some rules.
ozgrakkurt 2 hours ago
IMO this is unfair to GPL or similarly licensed code.
Seems OK for MIT-like licensed code, though.
antiloper 3 hours ago
AWS Bedrock has DeepSeek models running on their infrastructure. That should be enough to prevent training on user data (there's a markup compared to DeepSeek's pricing though).
And unfortunately AWS doesn't have prepaid billing, so you can't just give the internet access to your API key without getting FinDDoS'd.
ThreatSystems a minute ago
If anyone is looking for a solution in this space, fire me an email; I have a partner who's focused closely on that problem set!
deaux 2 hours ago
The latest one available for serverless inference looks to be from 8 months ago (Deepseek v3.1), which is an eternity and leaves it far behind.
raincole an hour ago
Two factors. First is anti-americanism (or at least anti-american-capitalism).
But the more important one is the social contract. GitHub came along far before the LLM era. Its branding is built around being the storage of open source projects, and many users want it to stay away from the AI hype. You won't expect LLM providers to stay away from AI hype (duh), so it's less of an issue for them.
stavros 31 minutes ago
If they give me the resulting model in the end, they can train on my data all they want. Hell, I'll send them more of it.
holysantamaria 3 hours ago
From the pricing page of deepseek:
(3) The deepseek-v4-pro model is currently offered at a 75% discount, extended until 2026/05/31 15:59 UTC.
Was this taken into account when reviewing the model?
gmerc 2 hours ago
Obviously everyone subsidizes for user acquisition; after all, people need to be coaxed into testing your model. Claude Code subscriptions come to mind.
DeepSeek pro is 65/86% cheaper (i/o tokens) in subsidized pro vs pro and 91/97% cheaper with current subsidies.
Flash vs Sonnet 4.6 is 95/98%
cyber_kinetist 3 hours ago
Yeah, even the Chinese open models have the problem that inference costs aren't that cheap. The only way out of the AI bubble collapse is simply more efficient hardware at lower cost and less infrastructure setup downtime.
gmerc 2 hours ago
It’s just an introduction price to speed up adoption for the rest of the month, hardly worth mentioning compared to subsidized coding plans.
We know DS runs profitably; they also indicate in their paper that they expect prices to drop as they get access to the next-gen Huawei cards.
raincole an hour ago
The V3/R1 time and now are in such contrast. V3/R1 were hyped hard and barely usable for coding. V4 is much less hyped but (anecdotally) it has completely demolished all the Flash/Lite/Spark models.
zozbot234 an hour ago
Huh? R1 was one of the earliest openly available MoE and reasoning models, that's definitely not "hype". People tried to do reasoning before by asking the model to "think it through step by step" but that was a hack. The later V3.1 and V3.2 releases AIUI unified reasoning/non-reasoning use under a single model.
KronisLV 9 hours ago
I'm currently paying for Anthropic's Max subscription (the 100 USD one) and I quite often hit or approach the 5 hour limits, but usually get to around 60-80% of the weekly limits before they reset (Opus 4.7 with high thinking for everything, unless CC decides to spawn sub-agents with Haiku or something).
Those tokens are heavily subsidized, but DeepSeek's API pricing is looking really good. For example, with an agentic coding setup (roughly 85% input, 15% output and around 90% cache reads) I'd get around 150M tokens per month for the same 100 USD. Even at more output tokens and worse cache performance, it'd still most likely be upwards of 100M.
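The token-budget estimate above can be sketched as a quick calculation. The per-million-token prices below are illustrative placeholders (assumptions, not DeepSeek's actual rates); substitute the current published pricing to reproduce the estimate:

```python
# Back-of-the-envelope: how many tokens $100/month buys under an agentic
# traffic mix of ~85% input / 15% output, with ~90% of input served from cache.
# NOTE: the three prices are ASSUMED placeholders, not real published rates.

BUDGET_USD = 100.0
PRICE_IN = 0.55      # $/M input tokens, cache miss (assumed)
PRICE_CACHED = 0.07  # $/M input tokens, cache hit  (assumed)
PRICE_OUT = 2.19     # $/M output tokens            (assumed)

def tokens_for_budget(budget, in_share=0.85, out_share=0.15, cache_hit=0.90):
    """Return millions of tokens the budget buys under the assumed mix."""
    # Blended cost of one million "average" tokens for this traffic mix.
    cost_per_m = (
        in_share * (cache_hit * PRICE_CACHED + (1 - cache_hit) * PRICE_IN)
        + out_share * PRICE_OUT
    )
    return budget / cost_per_m

print(f"~{tokens_for_budget(BUDGET_USD):.0f}M tokens/month")
```

Cheaper cache reads dominate the input side, so the cache-hit rate moves the result more than the raw input price does; worse cache performance drags the total down toward the 100M figure mentioned above.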
aitchnyu 2 hours ago
What would be the non-subsidized price for a V4 API? Can it be priced 3x cheaper than bigger models? On OpenRouter, this 1600B-param model costs $0.4, whereas Kimi 2.6 (1000B params) is $0.7 and GLM 5.1 (754B params) is $1.0.
KronisLV 29 minutes ago
Here’s their pricing docs, they’re running a discount for now https://api-docs.deepseek.com/quick_start/pricing/
The 150M assumption of mine is for 100 USD at the regular prices (though even that needs sufficient cache hits). Anthropic subsidizes way more per-token I think, though.
try-working 8 hours ago
Someone on Twitter got >200M tokens for around $10 at the current pricing level
rvz 5 hours ago
So it begins.
myaccountonhn 4 hours ago
I recently switched from Claude to Opencode Go + pi.dev. It has Deepseek v4 pro along with Kimi K2.6, and it's performing quite well for basic coding, without hitting any limits.
jdasdf 11 hours ago
I've been using v4 Pro for the past few days, and honestly, in terms of quality it seems more or less on par with OpenAI's 5.4 or Opus 4.6 (I haven't tried 4.7).
To be clear, I'm not doing state-of-the-art stuff. I mostly used it for frontend development, since I'm not great at that and just need a decent-looking prototype.
But for my purposes it's a perfectly good model, and the price is decent.
I can't wait for an open model small enough for me to run locally to come out, though. I hate having to rely on someone else's machines (and getting all my data exfiltrated that way).
enochthered 9 hours ago
Thanks for sharing your experience, I’m looking to try it out.
Which provider are you using for inference? Opencode or the DeepSeek api?
teruakohatu 11 hours ago
The pelican is really getting old as a standalone evaluation metric. By now it is certainly in the training sets, if models aren't explicitly tuned to produce it, given the press on HN alone.
Keep the pelican, but isn't it time to add something more novel that all current and past models struggle with?
caseyf7 5 hours ago
It also seems like all of the models have converged on very similar images.
chaosprint an hour ago
I wonder if those models already knew about this pelican test...
alasano 9 hours ago
I tweeted about some implementation and review runs that used V4 Pro.
Even without the currently discounted pricing, the value is incredible.
It takes about twice as long to finish code reviews given an identical context compared to Opus 4.7/GPT 5.5, but at 1/10th the cost or less, there's just no comparison.
swingboy 10 minutes ago
Did you do this test through OpenRouter?
taffydavid 2 hours ago
I tried DeepSeek v4 through opencode at the weekend. I'm a daily Claude/Claude Code user.
I tried to build something simple and while it got the job done the thinking displayed did not fill me with confidence. It was pages and pages of "actually no", "hang on", "wait that makes no sense". It was like the model was having a breakdown.
Bear in mind opencode was also new to me, so I could just be seeing thinking where I usually don't.
Jtarii an hour ago
I see similar things using GLM 5.1 in pi.
I had to turn off thinking traces because it was just giving me anxiety looking at it.
atoav 2 hours ago
> Bear in mind open code was also new to me so I could be just seeing thinking where I usually don't
Well there's your problem.
Edit: I remember seeing similar things with ChatGPT or Codex, although I can't remember in which context.