Darkbloom – Private inference on idle Macs (darkbloom.dev)

443 points by twapi 16 hours ago

kennywinker 15 hours ago

I have a hard time believing their numbers. If you can pay off a mac mini in 2-4 months, and make $1-2k profit every month after that, why wouldn’t their business model just be buying mac minis?

eigengajesh 9 hours ago

The numbers are optimistic but legit -- they're calculated assuming we have demand for all machines at all times. We don't have that right now, but we're fairly optimistic people will get there.

That's why we don't recommend purchasing a new machine. An existing machine costs you nothing to run this.

Electricity is one cost, but it gets paid off by every request you serve -- it's only deducted when you actually run inference. If you have any questions, DM me @gajesh on Twitter.

mbesto 5 hours ago

> That's why we don't recommend purchasing a new machine. An existing machine costs you nothing to run this.

You misunderstood. If the ROI is there, there is enough capital in existence for you to accelerate your profit. So why even deal with the complexity of renting people's hardware when you can do it yourself?

dmitrygr 13 minutes ago

> An existing machine costs you nothing to run this.

That is not at all how modern chips work. Idle chips are mostly powered down; non-idle ones are working, and that causes real, measurable wear on the silicon. CPU, RAM, and NAND all degrade measurably with use on current manufacturing processes.

https://en.wikipedia.org/wiki/Electromigration

BuildTheRobots 3 hours ago

I don't worry about bandwidth or constant CPU use, but the one thing that will kill my mac is burning out the SSD.

The calculator gives numbers for nearly everything, but I can't obviously see how much space it needs for model storage or how many writes of temp files I should expect if I'm running flat out.

stavros 9 hours ago

You're not taking into account the thermal strain on the machine, though. A machine that's 100% utilized (even worse if it's in bursts) will last less than an idle machine.

dgacmu 6 hours ago

No provider maintains 100% utilization of GPUs at full rate. Demand is bursty: even if this project is successful, you might expect things to be busy during market hours when Claude is throwing API errors, and severely underutilized during the same off-peak hours when Anthropic was offering two-for-one pricing.

And then there's a hit for overprovisioning in general. If the network is not overprovisioned somewhat, customers won't be able to get requests handled when they want, and they'll flee. But the more overprovisioned it is, the worse it is for compute seller earnings.

I suspect an optimistic view of earnings from a platform like this would be something like 1/8 utilization on a model like Gemma 4. Their calculator estimates my M4 Pro mini could earn about $24/month at 3 hours/day on that model. That seems plausible.
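
That arithmetic as a quick sketch (the hourly rate here is backed out of the $24/month figure, not a number Darkbloom publishes):

    # Monthly earnings at partial utilization (a sketch, not Darkbloom's method)
    revenue_per_busy_hour = 0.27   # USD/hr, implied by ~$24/mo at 3 h/day
    busy_hours_per_day = 3         # roughly 1/8 utilization
    print(f"${revenue_per_busy_hour * busy_hours_per_day * 30:.0f}/month")  # ~$24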

avidphantasm 8 hours ago

If you start buying minis, then you need to house, power, and cool them. So you are building a mini data center. If you are building a small data center, economies of scale will drive you to want to build larger and larger. However, this gets expensive and neighbors tend to not like data centers (for good reason). To me this seems like asymmetric warfare against hyper-scalers.

xhkkffbf 5 hours ago

Yup. This way, the people pay for the air conditioning themselves and they probably don't even notice the extra cost.

psychoslave 12 hours ago

Because they don't have that much initial money in their pocket, while the idle computer is already there, and the biggest friction point is convincing people to install some software. Producing rhetoric and software are both several orders of magnitude cheaper than directly owning and maintaining a large fleet of hardware, with a high guarantee of stable electrical input and a safe place to store it.

Assuming that getting a large chunk of initial investment is just a formality is out of touch with the reality of 99% of people out there, when it's actually the biggest friction point in any socio-economic endeavour.

liuliu 3 hours ago

Of course these numbers are ridiculous. A Mac Mini (let's assume Apple releases an M5 Pro one) tops out at ~50 TFLOPs Int8 (let's assume it's the same as FP8, which it is not); with Draw Things, we recently developed hybrid NAX + ANE inference, which can get you ~70 TFLOPs.

An H200 gives you ~4 PFLOPs, which is ~60x the compute at only ~40x the price (assuming you can get a Mac Mini at $1000). (Not to mention, BTW, that an RTX PRO 6000 is ~7x the price for ~40x more FLOPs.)

Your M4 Mac Mini only has ~20 TFLOPs.
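
Normalizing those figures to FLOPs per dollar (all numbers are the rough ones above, so treat this as order-of-magnitude only):

    # FLOPs per dollar from the rough figures in this comment
    hw = {
        "Mac Mini (NAX + ANE)": (70e12, 1_000),       # ~70 TFLOPs at ~$1k
        "H200":                 (4e15, 40_000),       # ~4 PFLOPs at ~40x price
        "RTX PRO 6000":         (70e12 * 40, 7_000),  # ~40x FLOPs at ~7x price
    }
    for name, (flops, usd) in hw.items():
        print(f"{name}: ~{flops / usd / 1e9:.0f} GFLOPs per dollar")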

bentobean 3 hours ago

> Your computer only has ~20 TFLOPs.

What a time to be alive.

chaoz_ 15 hours ago

Solid q. I think part of it is that it's really easy to attract some "mass" (capital) of users, as there are definitely quite a few idle Macs in the world.

Non-VC play (not required until you can raise on your own terms!) and clear differentiation.

If you want to go full business evaluation, I'd be more worried about someone else implementing the same thing with a higher commission (imo 95% plus first-to-market is good enough).

jonplackett 12 hours ago

I think the point they’re making though is that the numbers seem too good to be true.

ie. Does anyone know the payback time for a B100 used just for inference? I assume it’s more than a couple of months? Or is it just training that costs so much?

Saline9515 9 hours ago

Eigenlayer (which this is spun off from) is a massively VC-funded crypto company.

dnnddidiej 13 hours ago

It is too good to be true when you see it supposedly making more than a Claude Code subscription for fuck-all work per day.

Prolly gonna make $50 a year tops.

CTDOCodebases 4 hours ago

Or, like anything else, it will be too good to be true at the very beginning, but then once people hear about it and it gets popular, supply overtakes demand and the Mac minis go back to being idle most of the day.

When YouTubers start making videos about it you know it's too late.

thih9 14 hours ago

> These are estimates only. We do not guarantee any specific utilization or earnings. Actual earnings depend on network demand, model popularity, your provider reputation score, and how many other providers are serving the same model.

Others are reporting low demand, eg.: https://news.ycombinator.com/item?id=47789171

znnajdla 14 hours ago

The numbers are obviously high, because if this takes off then the price for inference will also drop. But I still think it’s a solid economic model that benefits low income countries the most. In Ukraine, for example, I know people who live on $200/month. A couple Mac Minis could feed a family in many places.

As a business owner, I can think of multiple reasons why a decentralized network is better for me as a business than relying on a hyperscaler inference provider:

1. No dependency on a BigTech provider who can cut me off or change prices at any time. I'm willing to pay a premium for that.

2. I get a residential IP proxy network built in. AI scrapers pay big money for that.

3. No censorship.

4. Lower latency if inference nodes are located close to me.

kennywinker 14 hours ago

How many of those people who could live off $200USD/month can afford or already have a mac mini in the house?

aacid 10 hours ago

Isn't this the same premise as "let's buy a few GPUs to mine crypto and have passive income"? It didn't work very well then, and it probably won't work now either. If there's money to be made, bigger players will get in, buy out all the Mac minis they can, drive prices up for regular people, and inevitably drive inference prices down, so you'll be lucky to get your initial investment back.

NiloCK 11 hours ago

On the latency point - your requests are still going through the coordinator of the system here. So on average strictly worse than a large provider.

You - Darkbloom - Operator - Darkbloom - you, vs

You - Provider - you

---

On the censorship point - this is an interesting risk surface for operators. If people are drawn to decentralized model provisioning for its lax censorship, I'm pretty sure they're using it to generate things that I don't want to be liable for.

If anything, I could imagine dumber and stricter brand-safety style censorship on operator machines.

yard2010 11 hours ago

It's quite funny to think of a chimpanzee seeing a lot of bananas and thinking "this could feed my family", and then the same with humans, only with Mac Minis.

gleenn 15 hours ago

Power and racking are difficult and expensive?

kennywinker 15 hours ago

How difficult? Is running 1000 minis worth $1,000,000/month of effort? I feel like it is.

agnosticmantis 13 hours ago

"You could see a single robotaxi being worth, or providing, about $30,000 of gross profit per year. ... A Tesla is an appreciating asset..."

- Elon Musk during Tesla's Autonomy Day in April 2019.

foota 15 hours ago

Capital and availability?

kennywinker 15 hours ago

I guess if it only works at scale, capital is maybe the answer. Enough cash to buy 5 or 10 or even 100 minis seems doable - but if the idea only works well when you have 10,000 running, that makes some sense.

Filligree 7 hours ago

Because their numbers don't work out. When you do the math on token cost versus inference speed, you get something that barely breaks even, even with cheap power.

Also they’ve already launched a crypto token, which is a terrible sign.

znpy 7 hours ago

Being the middleman is often way more profitable

tgma 14 hours ago

I installed this so you don't have to. It felt a bit quirky and not super polished: it fails to download the image model, and the audio/TTS model fails to load.

In 15 minutes of serving Gemma, I got precisely zero actual inference requests, and a bunch of health checks and two attestations.

At the moment they don't have enough sustained demand to justify the earning estimates.

iepathos 17 minutes ago

You can see in their stats view that they have a lot of providers/nodes connected but practically no actual demand/consumers. They just launched, and I'm sure getting providers was top of their agenda, but it's essentially unusable as a provider unless they do some serious lifting to get actual paying customers.

splittydev 13 hours ago

They released this like a day ago, I'm not surprised that there's not enough demand right now. Give it some time to take off

tgma 13 hours ago

You'd think to bootstrap a marketplace you'd spend your own money to feed fake requests (or perhaps allow free chat so that they induce requests).

Still, absolute zero is an unacceptable number. I had this running for more than an hour.

subroutine 12 hours ago

Has anyone tested the system from the other end... sending a prompt and getting a response?

lxglv 13 hours ago

Weird to learn that they don't generate inference requests for their network themselves, at least to motivate early adopters to host their inference software.

lostmsu 9 hours ago

If they paid the promised >$1k/mo for FLUX 2B on a Mac, they would go broke in less than a month. A single 5090 runs that model at an inference throughput so high they'd have to pay close to $50k/mo for the results.

The numbers are absolute fraud. You shouldn't be installing their software, because the fraud may not stop at the numbers.

thatxliner 13 hours ago

And I don't think they ever will, unless they're highly competitive (hopefully the price they have now stays, at least for users).

I was thinking of building this exact thing a year ago, but my main stopper was the economics: it would never make sense for someone to use the API, and nobody can make money off zero demand.

I guess we just have to look at how Uber and Airbnb bootstrapped themselves. Another issue with my original idea was that it was for compute in general, when the main, best use case is long(er)-running software like AI training (but I guess inference is long-running enough).

But there already exists software out there that lets you rent out your GPU, so...

tgma 13 hours ago

People underestimate how efficient cost/token is for beefy GPUs if you are able to batch. It's unlikely that one-off consumer units will be able to compete long term.

starkeeper 13 hours ago

What's a good place to do this?

gleenn 12 hours ago

You have to install their MDM device-management software on your computer; basically, that computer is theirs now. So don't plan on just handing over your laptop temporarily unless you don't mind some company completely owning your box. It still might be a valid use for people with slightly old laptops lying around, but beware of sharing this computer with your daily activities if you e.g. regularly use a bank in a browser on it. MDM means they can swap out your SSL certs - that level of computer access. Please correct me if I'm wrong.

mirashii 11 hours ago

MDMs on macOS are permissioned via AccessRights, and you can verify that their permission set is fairly minimal and does not allow what you've described here (bits 0, 4, 10).

That said, the privacy posture at the cornerstone of their claims is snake oil with gaping holes, so I still wouldn't trust it, but it's worth being accurate about how exactly they're messing up.

mike_hearn 11 hours ago

Edit: deleted post. I see your other post now.

You are right - the "nonce binding" the paper uses doesn't seem convincing. The missing link is that Apple's attestation doesn't bind app generated keys to a designated requirement, which would be required to create a full remote attestation.

keremimo 10 hours ago

MDM is the absolute deal breaker. There is no way in hell I'll risk my MacBook becoming an unsellable brick if they decide to lock it with MDM.

Even more so, not for pennies a month.

ezfe 6 hours ago

The MDM profile doesn’t grant that

nl 15 hours ago

They use the TEE to check that the model and code are untampered with. That's a good, valid approach and should work (I've done similar things on AWS with their TEE).

The key question here is how they avoid the outside computer being able to view the memory of the internal process:

> An in-process inference design that embeds the inference engine directly in a hardened process, eliminating all inter-process communication channels that could be observed, with optional hypervisor memory isolation that extends protection from software-enforced to hardware-enforced via ARM Stage 2 page tables at zero performance cost.[1]

I was under the impression this wasn't possible if you are using the GPU. I could be misled on this though.

[1] https://github.com/Layr-Labs/d-inference/blob/master/papers/...

nitros 12 hours ago

This entire paper smells of LLM; I'm sure even the most distinguished academic would refrain from using formal notation to prove that the SIP status cannot change during operation.

flockonus 15 hours ago

While they do make this argument, realistically anyone sending their prompts/data to an external server should assume there will be some level of retention.

More to the point, anyone using Darkbloom with commercial intent should only send non-sensitive data (no tokens, customer data, ...): I'd say only classification tasks, image generation, etc.

joelthelion 9 hours ago

There's a difference between trusting Anthropic and trusting random mac owners.

mike_hearn 11 hours ago

Apple Silicon systems have unified memory between CPU and GPU. The hypervisor page table trick is thus claimed to protect GPU memory from RDMA.

ramoz 15 hours ago

Macs do not have an accessible hardware TEE.

Macs have secure enclaves.

nl 14 hours ago

Good point!

But they argue that:

> PT_DENY_ATTACH (ptrace constant 31): Invoked at process startup before any sensitive data is loaded. Instructs the macOS kernel to permanently deny all ptrace requests against this process, including from root. This blocks lldb, dtrace, and Instruments.

> Hardened Runtime: The binary is code-signed with hardened runtime options and explicitly without the com.apple.security.get-task-allow entitlement. The kernel denies task_for_pid() and mach_vm_read() from any external process.

> System Integrity Protection (SIP): Enforces both of the above at the kernel level. With SIP enabled, root cannot circumvent Hardened Runtime protections, load unsigned kernel extensions, or modify protected system binaries. Section 5.1 proves that SIP, once verified, is immutable for the process lifetime.

gives them memory protection.

To me that is surprising.
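
For the curious, the PT_DENY_ATTACH mechanism the paper leans on is a single libc call made at startup. A minimal sketch of the same idea from Python via ctypes (macOS only; presumably the real client does this from native code):

    import ctypes, ctypes.util

    PT_DENY_ATTACH = 31  # the macOS-specific ptrace request named in the paper

    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
    # After this call the kernel refuses ptrace attaches to this process
    # (lldb, dtrace, Instruments) for its lifetime, even from root while
    # SIP is enabled.
    if libc.ptrace(PT_DENY_ATTACH, 0, None, 0) != 0:
        raise OSError(ctypes.get_errno(), "ptrace(PT_DENY_ATTACH) failed")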

ramoz 15 hours ago

Unfortunately, verifiable privacy is not physically possible on MacBooks of today. Don't let a nice presentation fool you.

Apple Silicon has a Secure Enclave, but not a public SGX/TDX/SEV-style enclave for arbitrary code, so these claims are about OS hardening, not verifiable confidential execution.

It would be nice if it were possible. There are a lot of cool innovations possible beyond privacy.

mike_hearn 11 hours ago

I wrote a whole SDK for using SGX, it's cool tech. But in theory on Apple platforms you can get a long way without it. iOS already offers this capability and it works OK.

macOS has a strong enough security architecture that something like Darkbloom would have at least some credibility if there was a way to remotely attest a Mac's boot sequence and TCC configuration combined with key-to-DR binding. The OS sandbox can keep apps properly separated if the kernel is correct and unhacked. And Apple's systems are full of mitigations and roadblocks to simple exploitation. Would it be as good as a consumer SGX enclave? Not architecturally, but the usability is higher.

znnajdla 14 hours ago

As if you get privacy with the inference providers available today? I have more trust in a randomly selected machine on a decentralized network not being compromised than in a centralized provider like OpenAI pinky promising not to read your chats.

ramoz 14 hours ago

Inference providers don't claim private inference. However, they must uphold certain security and legal compliances.

You have no guarantees about any random laptop connected somewhere across the world.

geon 14 hours ago

Every hardware key will be broken if there is enough incentive to do so. Their claims read like pure hubris.

znnajdla 14 hours ago

Who cares about AI privacy? Most people don’t. If you do, run locally.

dkroy 34 minutes ago

Cool idea, though hats off to anyone who got cohere-transcribe to show up as serving the model. I could get the device to show up, but I kept having issues getting their server to properly serve the model, though it could just be the device I tested on.

poorman 13 minutes ago

Yeah I think there's a dependency issue going on there. Something isn't installed that needs to be.

MyUltiDev 2 hours ago

The hardware-attested privacy path is the interesting part of this, but the economic side has a quieter risk the thread has not named: the load tax per request.

MiniMax M2.5 239B from their catalog still has to load all 239B weights even though only 11B are active: that is roughly 120GB at Q4_K_M, and a cold load from SSD on Apple Silicon is measurable in tens of seconds. Even the Qwen3.5 122B MoE lands around 65GB cold. If the coordinator routes request number two to a different idle Mac than request number one, or if the owner's machine spun the model out to free memory in between, each request pays that cold load before the first token.

Keeping the model resident 24/7 solves the latency but eats into the power budget the operator is trying to amortize in the first place. How does the coordinator decide which provider to keep warm for which model? A 16GB or 32GB home Mac cannot host Qwen3.5 122B MoE at all, and the Mac Studios that can are a much smaller slice of the 100M-machine estimate.
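
To put rough numbers on that load tax (a sketch; the 3 GB/s sustained SSD read rate is an assumed figure, the weight sizes are from above):

    # Cold-load time before the first token can be generated
    ssd_gb_per_s = 3.0  # assumed sustained SSD read on Apple Silicon
    for model, weights_gb in [("MiniMax M2.5 239B @ Q4_K_M", 120),
                              ("Qwen3.5 122B MoE", 65)]:
        print(f"{model}: ~{weights_gb / ssd_gb_per_s:.0f} s cold load")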

pants2 15 hours ago

Cool idea. Just some back-of-the-envelope math here (not trusting what's on their site):

My M5 Pro can generate 130 tok/s (4 streams) on Gemma 4 26B. Darkbloom's pricing is $0.20 per Mtok output.

That's about $2.24/day or $67/mo revenue if it's fully utilized 24/7.

Now assuming 50W sustained load, that's about 36 kWh/mo; at ~$0.25/kWh, approx. $9/mo in costs.

Could be good for lunch money every once in a while! Around $700/yr.
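
The same math as a script, for anyone who wants to plug in their own machine (all inputs are the figures above: measured throughput, Darkbloom's posted price, assumed power draw and tariff):

    # Back-of-envelope provider economics (numbers from this comment)
    tok_per_s = 130                  # measured: Gemma 4 26B, 4 parallel streams
    usd_per_mtok = 0.20              # Darkbloom's posted output price
    watts, usd_per_kwh = 50, 0.25    # assumed sustained draw and tariff

    revenue_day = tok_per_s * 86_400 / 1e6 * usd_per_mtok
    power_month = watts / 1000 * 24 * 30 * usd_per_kwh
    print(f"${revenue_day:.2f}/day, ${revenue_day * 30:.0f}/mo revenue, "
          f"${power_month:.0f}/mo electricity")  # ~$2.25/day, ~$67/mo, ~$9/mo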

mavamaarten 14 hours ago

Well, running your machine to do inference will draw more than a 50W sustained load; I'd say more than double that. Plus electricity is more expensive here (though granted, I do have solar panels). Plus don't forget to factor in that your hardware will age faster.

I'd say it's not worth it. But the idea is cool.

jorvi 12 hours ago

Your hardware will age slower if you have consistent load.

Thermal stress from bursty workloads is much more of a wearing problem than electromigration. If you can consistently keep the SoC at a specific temperature, it'll last much longer.

This is also why it was very ironic that crypto-miner GPUs got sold at massive discounts. Everyone assumed they had been run ragged, but a proper miner would have undervolted the card and run it at consistent utilization, meaning the card would be in better condition than a secondhand gamer GPU that had constantly been shifting between 1% and 80% utilization, or rather, 30°C and 75°C.

kennywinker 14 hours ago

Their estimate is based on significantly lower consumption under load, e.g. 25W for an M4 Pro Mac mini. I have no idea if that's realistic, but the M4s are supposedly pretty efficient (https://www.jeffgeerling.com/blog/2024/m4-mac-minis-efficien...)

kennywinker 14 hours ago

Their example big earner models are FLUX.2 Klein 4B and FLUX.2 Klein 9B, which I imagine could generate a lot more tokens/s than a 26B model on your machine.

For Gemma 4 26B their math is:

    single_tok/s  = (307 GB/s / 4 GB) * 0.60 = 46.0 tok/s
    batched_tok/s = 46.0 * 10 * 0.9 = 414.4 tok/s
    tok/hr        = 414.4 * 3600 = 1,492,020
    revenue/hr    = (1,492,020 / 1M) * $0.20 = $0.2984

I have no idea if that is a good estimate of how much an M5 Pro can generate, but that's what it says on their site.

They do a bit of a sneaky thing with the power calculation: they subtract 12W of idle power, because they assume your machine would be idling 24/7 anyway, so the only cost is the extra 18W they estimate you'll use doing inference. Idk about you, but I turn my machine off when I'm not using it.
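
For anyone sanity-checking, the whole thing in script form (all constants are theirs as quoted above, except the $0.25/kWh tariff, which is my assumption):

    # Darkbloom's Gemma 4 26B estimate for an M5 Pro, reproduced
    single = 307 / 4 * 0.60            # GB/s over GB per pass -> ~46 tok/s
    batched = single * 10 * 0.9        # their batching factors -> ~414 tok/s
    revenue_hr = batched * 3600 / 1e6 * 0.20   # -> ~$0.298/hr
    # the "sneaky" part: only the load-minus-idle delta counts as a cost
    cost_hr = 18 / 1000 * 0.25         # 18 W extra at an assumed $0.25/kWh
    print(f"${revenue_hr:.3f}/hr revenue vs ${cost_hr:.4f}/hr electricity")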

pants2 9 hours ago

Interesting token numbers they're using, because I've benchmarked it at 69 tok/s single stream and 130 multi-stream.

torginus 10 hours ago

Also, this assumes hardware never fails. I learned that the hard way back when I started mining crypto on my 5700 XT.

I figured that since I already used it a lot, and I'd never had a GPU fail on me, it would be fine.

The fans on it died within a month of constant use; replacing them cost more than I made from mining.

nnx 14 hours ago

> My M5 Pro can generate 130 tok/s (4 streams) on Gemma 4 26B.

This seems high. At which quantization? Using LM Studio or something else?

Note: Darkbloom seems to run everything on Q8 MLX.

pants2 9 hours ago

Ah, good point - this is using Q4, benchmarked as total throughput serving with llama.cpp.

todotask2 15 hours ago

OpenAI has only about 5% paying customers; how does it generate revenue?

I don't think this is a sustainable business model. For example, Cubbit tried to build decentralised storage, but I backed out because better alternatives now exist, and hardware keeps improving and getting cheaper over time.

Your electricity and hardware ownership will see ever-lower returns, and this doesn't actually reduce CO2.

chaoz_ 15 hours ago

Genuinely curious: is there any way to estimate the amortization of a Mac?

I'd imagine 1 year of heavy usage would affect its quality somehow.

pants2 14 hours ago

Yeah, the only way to get there is to assume they're not giving prompt-caching discounts while my laptop gets prompt-caching benefits, with very many large prompts. So yes, I am skeptical of their numbers.

xendo 15 hours ago

Any idea what makes for such a difference between your numbers and theirs? Batching? Or could they be doing crazy prefix caching across all nodes to reduce the actual processing?

znnajdla 14 hours ago

Maybe lunch money for you, but there are people in some parts of the world who live on $200/month. Like Ukraine.

sethherr 14 hours ago

But they probably don't have M5 MacBook Pros idling

MrDrMcCoy 15 hours ago

Don't forget to factor in cooling costs.

pants2 15 hours ago

Or saved heating costs in the winter!

dgacmu 6 hours ago

@eigengajesh - Your cost estimator lists Mac Mini M4 Pro with only 24 or 48GB options, but the M4 Pro mini can also be configured with 64GB. At least, I hope so, as I'm typing this on one. ;-)

Oh, also, you seem to have some bugs:

Gemma:

    WARN [vllm_mlx] RuntimeError: Failed to load the default metallib. This library is using language version 4.0 which is not supported on this OS. library not found library not found library not found

cohere:

    2026-04-16T14:25:10.541562Z WARN [stt]   File "/Users/dga/.darkbloom/bin/stt_server.py", line 332, in load_model
    2026-04-16T14:25:10.541614Z WARN [stt]     from mlx_audio.stt.models.cohere_asr import audio as audio_mod
    2026-04-16T14:25:10.541643Z WARN [stt] ModuleNotFoundError: No module named 'mlx_audio.stt.models.cohere_asr'

Trying to download the FLUX image models fails with:

    curl: (56) The requested URL returned error: 404

"darkbloom earnings" does not work.

Your documentation is inconsistent between saying 100% of revenue goes to providers vs. 95%.

I think... this needs a little more care and feeding before you open it up widely. :) And maybe lay off the LLM-generated text before it gets you in trouble for promising things you're not delivering.

dchuk 5 hours ago

Interesting concept. Two-sided marketplaces are hard to bootstrap, but maybe just enough curiosity would get the flywheel going. Hell, they should just try to convince people to enroll as providers but also use the service (even if it's hitting their own machines) until there's some degree of supply and demand pressure, and only then push provider-only signups. Or set up some way to encourage providers to promote the service to others (the 100% rev share kind of breaks that concept, but anything can change).

I wish this was self hostable, even for a license fee. Many businesses have fleets of Macs, sometimes even in stock as returned equipment from employees. Would allow for a distributed internal inference network, which has appeal for many orgs who value or require privacy.

poorman 2 hours ago

As one of the only people running a Mac Studio M3 Ultra with 512 GB of RAM on the network, I can tell you that at sustained 100% GPU utilization I measure 250 watts max (at the power outlet). My solar panels easily produce this; the power-cost question goes away once you connect a solar panel. You can get a 400 watt solar panel on Amazon for $300.

qurren an hour ago

> You can get a 400 watt solar panel on Amazon for $300.

Too expensive. It's probably producing 200 watts average for 8 hours a day. That's 1600 watt hours, which is about $1.60 at PG&E prices. That would take 187 days to recoup the cost of just the panel.
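
That arithmetic as a script (the ~$1/kWh rate is what "$1.60 for 1600 watt hours" implies; actual PG&E rates are lower, which only stretches the payback further):

    # Panel payback using this comment's own figures
    panel_usd = 300
    kwh_per_day = 0.200 * 8    # "200 watts average for 8 hours" = 1.6 kWh
    usd_per_kwh = 1.00         # implied by "$1.60" of value per day
    print(f"~{panel_usd / (kwh_per_day * usd_per_kwh):.1f} days to recoup")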

If you include installation costs and "what PG&E steals if you wire it to the same grid" it's probably more like 4x that, which is too long.

Tell me when we can have 400 watt solar panels for $50. Stupid capitalism literally forces solar panel prices to make it unprofitable.

People should never have to take out loans for solar. Solar should be subsidized and forced by the government to be so cheap that it repays for its cost within a month. Then we're talking. Most things I buy to save money, I expect them to repay within a month. Maybe 2 months max.

dgacmu 29 minutes ago

You can get them used for that price, and new for $107 if you buy qty 10+. See signature solar as one example.

Installation costs and inverters not included, however.

poorman an hour ago

If you are worried about a $300 solar panel you are not going to like the cost of a Mac Studio M3 Ultra 512 GB! haha

frankfrank13 4 hours ago

This is one of those ideas that I think makes perfect sense but requires so much operational change across the entire stack that it would be very difficult to scale:

- Convincing labs to run distributed, burst-y inference

- Convincing people to run their Mac all day, hoping to make a little profit

- Convincing users to trust a distributed network of un-trusted devices

I had a similar idea, pre-AI, just for compute in general. But solving even one of those three (swap AI lab for a managed-compute company, e.g. Supabase or Vercel) is nearly impossible.

creamyhorror 2 hours ago

oh boooy, it's a benchmarking script, but still...

TuringNYC 15 hours ago

I'd love a way to do this locally -- pool all the PCs in our own office for in-office pools of compute. Any suggestions from anyone? We currently run ollama but manually manage the pools

zozbot234 12 hours ago

If you set CPUSchedulingPolicy=idle, Nice=19, and IOSchedulingClass=idle in the ollama server's systemd configuration, it should run in the background with the lowest priority.
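
For example, as a drop-in override (a sketch; it assumes ollama was installed as a systemd service named "ollama", as the official Linux installer sets up):

    # /etc/systemd/system/ollama.service.d/lowprio.conf
    [Service]
    CPUSchedulingPolicy=idle
    Nice=19
    IOSchedulingClass=idle

Then run "systemctl daemon-reload" and restart the service.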

utopiah 12 hours ago

Seems like so much more work than "just" paying https://huggingface.co or whichever other neocloud that has already done all the setup for you and just waits for your credit card, per minute/second/token.

TuringNYC 5 hours ago

It is much more work, because for many workloads you have geographic ring-fencing and cannot send them out to the cloud.

haspok 9 hours ago

Getting strong SETI@home vibes from 25 years ago, except of course this is not for the greater good of humanity but a for-profit project.

Problem is, from a technical point of view, what kind of made sense back then (most people running desktops, fans always on, minimal energy saving) is kind of stupid today. Even if your laptop has no fan, would you want it to be constantly generating heat?

I definitely want my laptops to be cool, quiet and idle most of the time.

kamranjon 7 hours ago

My M4 Max MBP with 128GB of memory is constantly training 24/7 on weekends - it's why I bought the thing.

vorticalbox 9 hours ago

I sometimes play around with local models via ollama/ComfyUI, and more recently ACE-Step to generate music.

That's short bursts of heat, 5-10 minutes during a render. I would not be happy with that for multiple hours a day; I'm sure it would have a negative effect on battery health.

pants2 15 hours ago

You might not even know it as a user but the payment/distribution here is all built on crypto+stablecoins. This is a great use case for it.

rvz 14 hours ago

Good. Another great non-speculative use-case for crypto and stablecoins.

kennywinker 14 hours ago

Amazing! Let me see, doing the math r/n… carry the one, yup that makes the total number of non-speculative uses for crypto and stablecoin: 1

;P

auslegung 3 hours ago

How can one do this safely? If I create a new non-sudo user, can I install the MDM profile only for that user? I obviously don't understand how this all works, so maybe this is a very dumb question.

stuxnet79 14 hours ago

So basically ... Pied Piper.

JaggerJo 14 hours ago

finally!

NiloCK 11 hours ago

Interesting to see an offering with this heritage [1] proposing flat earnings rates for inference operators here, rather than trying to sell a dynamic marketplace where operators compete on price in real-time.

Right now the dashboards show 78 providers online, but someone in-thread here said that they spun one up and got no requests. Surely someone would be willing to beat the posted rate and swallow up the demand?

I expect this is a migration target, but a tactical omission from the v1 comms, both for legitimate legibility reasons ("I can sell x for y" is easier to parse than "I can participate in a marketplace") and slightly illegitimate ones (obscuring the likely future price collapse).

Still - neat project that I hope does well.

[1] Layr Labs, formerly EigenLayer, is a company built around a protocol to abstract and recycle economic security guarantees from Ethereum proof of stake.

alexpotato 8 hours ago

Wasn't there an idea about 15 years ago where you would open your browser, go to a webpage and that page would have a JavaScript based client that would run distributed workloads?

I believe the idea was that people could submit big workloads, the server would slice them up and then have the clients download and run a small slice. You as the computer owner would then get some payout.

Interesting to see this coming back again.

thekid314 7 hours ago

Or SETI which would search for signs of alien life.

woadwarrior01 12 hours ago

I won't install some random untrusted binary off of some website. I downloaded it and did some cursory analysis instead.

Got the latest v0.3.8 version from the list here: https://api.darkbloom.dev/v1/releases/latest

Three binaries and a Python file: darkbloom (Rust)

eigeninference-enclave (Swift)

ffmpeg (from Homebrew, lol)

stt_server.py (a simple FastAPI speech-to-text server using mlx_audio).

The good parts: All three binaries are signed with a valid Apple Developer ID and have Hardened runtime enabled.

Bad parts: Binaries aren't notarized. Enrolls the device for remote MDM using micromdm. Downloads and installs a complete Python runtime from Cloudflare R2 (Supply chain risk). PT_DENY_ATTACH to make debugging harder. Collects device serial numbers.

TL;DR: No, not touching that.

jaffee 6 hours ago

The client side of this kind of needs to be open source, unless I'm running it on a dedicated machine and firewalling it off from the rest of my network. Or the company needs to have a very strong reputation and certifications. Curl-bash-and-go is a pretty hard sell for me.

czk 3 hours ago

The MDM profile requirement is suspect, though I get why they're doing it. But it doesn't inspire confidence to see that their profile is unsigned and still using the default micromdm SCEP challenge...

MicBook56 8 hours ago

I like the idea, but it won't take off until homomorphic encryption for inference becomes efficient and anyone can be a node.

heddycrow 10 hours ago

I think it’s important that systems like this exist, but getting them off the ground is non-trivial.

We’ve been building something similar for image/video models for the past few months, and it’s made me think distribution might be the real bottleneck.

It’s proving difficult to get enough early usage to reach the point where the system becomes more interesting on its own.

Curious how others have approached that bootstrap problem. Thanks in advance.

0xbadcafebee 14 hours ago

I'm not sure how the economics works out. Pricing for AI inference is based on supply/demand/scarcity. If your hardware is scarce, that means low supply; combine with high demand, it's now valuable. But what happens if you enable every spare Mac on the planet to join the game? Now your supply is high, which means now it's less valuable. So if this becomes really popular, you don't make much money. But if it doesn't become somewhat popular, you don't get any requests, and don't make money. The only way they could ensure a good return would be to first make it popular, then artificially lower the number of hosts.

Jn2G3Np8 11 hours ago

Love the concept; it has some similarity to Folding@home, though with more personal gain.

But trying it out, it still needs work: I couldn't download a model successfully (and their list of nodes at https://console.darkbloom.dev/providers suggests this is typical).

And as a cursory user, it took me some digging to find out that to cash out you need a Solana address (Providers > Earnings).

BingBingBap 15 hours ago

Generate images requested by randoms on the internet on your hardware.

What could possibly go wrong?

dr_kiszonka 15 hours ago

"These are estimates only. We do not guarantee any specific utilization or earnings. Actual earnings depend on network demand, model popularity, your provider reputation score, and how many other providers are serving the same model.

When your Mac is idle (no inference requests), it consumes minimal power — you don't lose significant money waiting for requests. The electricity costs shown only apply during active inference.

Text models typically see the highest and most consistent demand. Image generation and transcription requests are bursty — high volume during peaks, quiet otherwise."

miki123211 11 hours ago

> Operators cannot observe inference data.

Is there some actual cryptography behind this, or just fundamentally-breakable DRM and vibes?

utkarsh_apoorva 13 hours ago

Like the concept. This is not a business - it should maybe be an open-source GitHub repo.

They lost me with just one piece of microcopy: "start earning". Huge red flag.

Hamuko 13 hours ago

But why would I donate my Mac Studio's idle time if I couldn't "start earning"?

podviaznikov 7 hours ago

I've tried to install it on my Mac, but I'm not sure which macOS versions it supports.

On 15.1 it failed to serve models.

I updated to the latest 15.5 and it fails to run the binary.

Schiendelman 5 hours ago

I think macOS has jumped to 26, right?

WatchDog 12 hours ago

I installed two models, but it just always reports:

    Available models (2):
    CohereLabs/cohere-transcribe-03-2026 (4.6 GB)
    flux_2_klein_9b_q8p.ckpt (20.2 GB)
    ...
    Advertising 0 model(s) (only loaded models)

Also the benchmark just doesn't work.

Interesting idea, but needs some work.

amdivia 13 hours ago

Until we have breakthroughs in homomorphic encryption compute, I won't trust such privacy claims

drob518 9 hours ago

Seems like an interesting way for those people that purchased a Mac Mini to run OpenClaw to pay off the hardware, since mostly it’s now idle.

dangoodmanUT 6 hours ago

This feels like defi... de-ai

gndp 14 hours ago

They are almost claiming FHE. Isn't it just a matter of creating the right tool to grab the generated tokens from RAM before they get encrypted for transfer? How is it fundamentally different from Chutes?

ponyous 7 hours ago

Why does an M1 Max project significantly higher revenue than an M3 Max with double the RAM?

v9v 11 hours ago

They could consider registering as a provider on something like OpenRouter if they aren't getting enough inference requests on their own site.

throwatdem12311 6 hours ago

Actually more useful than Bitcoin. Brilliant idea.

puttycat 10 hours ago

> Every request is end-to-end encrypted

AFAIK you need to decrypt the data the moment it's fed into the model.

How do they do this then?

mr_mitm 9 hours ago

The system hosting the model must be one of the ends.

Remember, all encryption is E2EE if you're not picky about the ends.

jboggan 14 hours ago

Is this named after the 2011 split album with Grimes and d'Eon?

ripped_britches 7 hours ago

How does the inference work correctly if the payloads are encrypted?

eigengajesh 9 hours ago

hey guys! i'm the creator. let me know if you have any questions.

daniel_iversen 5 hours ago

Hi! Others have noted this too, but you can't seem to buy credits right now: it shows "This page couldn't load" on a custom error page after you've selected the amount and clicked continue to checkout. Congrats on launching such a cool project that's getting people excited, thinking, and discussing :)

SlavikCA 5 hours ago

Please offer new clients a way to try it: at least let us send a few requests in the chat.

0xc133 9 hours ago

Hey Gajesh! I sent you an email with some of the teething problems I ran into trying to get started as a provider. Hope it didn't end up in your spam folder!

koliber 14 hours ago

Apple should build this, and start giving away free Macs subsidized by idle usage.

resonanormal 14 hours ago

I could imagine this working for the openclaw community if the price is right

Havoc 9 hours ago

That was my first thought too, especially for tasks that aren't particularly important, like daily digests of online things.

subpixel 10 hours ago

Why isn’t a MacBook Air M5 on the hardware list?

ianpurton 9 hours ago

Because the model that generated that list was trained before the M5 came out.

chakintosh 9 hours ago

no fans

chaoz_ 15 hours ago

That solution actually makes great sense. So Apple won in some strange way again?

I guess there are limitations on the size of the models, but if top-tier models get democratized I don't see a reason not to use this API. The only thing that comes to mind is data privacy concerns.

I think batch evals for non-sensitive data have great PMF here.

59nadir 13 hours ago

> So Apple won in some strange way again?

Heh, what did they win exactly? This is just a way for another company to extract value out of the single region of the world where Apple is a relevant vendor, and it happens to be the one where it's the easiest to pull people into schemes.

rvz 15 hours ago

Yes. They never needed to participate in the AI race to zero.

Because they were already at the finish line with Apple Silicon.

> I don’t see a reason not to use this API. The only thing that comes to me is data privacy concerns.

The whole inference is end-to-end encrypted so none of the nodes can see the prompts or the messages.

chaoz_ 15 hours ago

Fun question: can some (part of it) be a crypto token that I can buy? :))

That would finally be a crypto thing which is backed by value I believe in.

grvbck 10 hours ago

Broken calculator or am I missing something here?

  Macbook Air M2  8GB   12h/day -> $647/month
  Mac Mini M4     32GB  12h/day -> $290/month

I mean, I'd be happy to buy a few used M2 Airs with minimal specs and start printing money but…

bentt 15 hours ago

I thought this was Apple’s plan all along. How is this not already their thing?

DeathArrow 15 hours ago

Why only Macs? If we think of all PCs and mobile phones running idle, the potential is much larger.

btown 15 hours ago

From the paper: https://github.com/Layr-Labs/d-inference/blob/master/papers/...

> Apple’s attestation servers will only generate the FreshnessCode for a genuine device that checks in via APNs. A software-only adversary cannot forge the MDA certificate chain (Assumption 3). Combined with SIP enforcement (preventing binary replacement) and Secure Boot (preventing bootloader tampering), this provides strong evidence that the signing key resides in genuine Apple hardware.

saagarjha 12 hours ago

I am not entirely sure they understand that System Integrity Protection and Secure Boot can be turned off.

nl 15 hours ago

They use the Apple TEE, which they claim also protects GPU memory (I wasn't aware of this).

NVidia data center GPUs have a similar path, but their consumer ones don't. Not sure about the NVidia Spark.

It's possible AMD Strix Halo can do this, but unlikely for any other PC based GPU environments.

MrDrMcCoy 15 hours ago

Epyc has that encrypted-VM-memory thing (SEV), which comes pretty close. It does raise an interesting question, though: would a PCIe card passed through to a VM be able to DMA into the memory of neighboring devices?

stryakr 15 hours ago

Simple first target; PCs have more variability.

bprasanna 9 hours ago

Like Folding@home, but for profit!

logicallee 4 hours ago

It's a good project and it makes sense. I recommend adding a contractual layer as well, since it's free and sensible: operators could legally agree not to look into the inference layer. After all, the operators already have a financial relationship with this provider, so it makes sense to attach a contract to it and keep operators away from other people's data that way, too. I wish this project a lot of success.

egorfine 11 hours ago

I really want this to succeed

dcreater 14 hours ago

I can't buy credits - it says the page could not load.

Fokamul 9 hours ago

Thanks. If this takes off, I'll finally have some motivation to do kernel exploitation. :)

sharts 6 hours ago

Too much to read.

rvz 15 hours ago

Should have called it "Inferanet" with this idea.

Anyway, this looks like a great idea and might have a chance at solving the economic issue with running nodes for cheap inference and getting paid for it.

jaylane 13 hours ago

The latest (v0.3.8) tar doesn't contain the image-bank or gRPCServerCLI dependencies, so the installer fails.

jonhohle 6 hours ago

> That is not a technology problem. It is a marketplace problem.

I cringe every time I see this sentence structure. I know the joke is about em dashes, but the "It's not X. It's Y." construction drives me crazy.

sergiusignacius 6 hours ago

I see it everywhere now, it's even in videos and how people talk.

rustyhancock 5 hours ago

Maybe it's the Baader-Meinhof phenomenon, and even a small difference in frequency feels overwhelming because you can't help noticing when it happens without noticing when it doesn't.

parasubvert 6 hours ago

It learned it from humans....

projektfu 5 hours ago

In a little while, AI will reverse the order and everyone will be happy.

Tired: That is not a technology problem. It is a marketplace problem.

Wired: This is a marketplace problem, not a technology problem.

rustyhancock 5 hours ago

I think what irritates people is that there isn't a natural variation to whatever pattern it uses.

And I don't really know why both patterns don't appear equally.

Intuitively I just can't grasp how it can be so selective with the phrases it generates.

nnevatie 6 hours ago

Only an em dash missing in between to be chef's-kiss perfect.

biztos 5 hours ago

It's not a sentence-structure problem. It's an effort problem.

But the real solution is to do this other thing. If you'd like I can give you the three-step guide to fixing the thing you asked me to fix.

/s