Yann LeCun raises $1B to build AI that understands the physical world (wired.com)
588 points by helloplanets a day ago
A_D_E_P_T a day ago
Justifiable.
There are a lot more degrees of freedom in world models.
LLMs are fundamentally capped because they only learn from static text -- human communications about the world -- rather than from the world itself, which is why they can remix existing ideas but find it all but impossible to produce genuinely novel discoveries or inventions. A well-funded and well-run startup building physical world models (grounded in spatiotemporal understanding, not just language patterns) would be attacking what I see as the actual bottleneck to AGI. Even if they succeed only partially, they may unlock the kind of generalization and creative spark that current LLMs structurally can't reach.
andy12_ a day ago
I don't understand this view. As I see it, the fundamental bottlenecks to AGI are continual learning and backpropagation. Models today are static, and human brains don't learn or adapt themselves with anything close to backpropagation. World models don't solve any of these problems; they are fundamentally the same kind of deep learning architectures we are used to working with. Heck, if you think learning from the world itself is the bottleneck, you can just put a vision-action LLM in a reinforcement learning loop in a robotic/simulated body.
zelphirkalt a day ago
> I don't understand this view. As I see it, the fundamental bottlenecks to AGI are continual learning and backpropagation. Models today are static, and human brains don't learn or adapt themselves with anything close to backpropagation.
Even with continuous backpropagation and "learning" to enrich the training data -- so-called online learning -- the limitations will not disappear. The LLMs will not be able to conclude things about the world based on fact and deduction. They only consider what is likely given their training data. They will not foresee/anticipate events that are unlikely or non-existent in their training data but are bound to happen due to real-world circumstances. They are not intelligent in that way.
Whether humans always apply that much effort to conclude these things is another question. The point is that humans are fundamentally capable of doing that, while LLMs are structurally not.
The problems are structural/architectural. I think it will take another 2-3 major leaps in architecture before these AI models reach human-level general intelligence, if they ever reach it. So far they can "merely" often "fake it" when things are statistically common in their training data.
perfmode a day ago
andy12_ a day ago
steego a day ago
conartist6 10 hours ago
jstummbillig a day ago
wiz21c a day ago
jacquesm 20 hours ago
The main difference is that humans are learning all the time, while models learn batch-wise and forget whatever happened in a previous session unless someone makes it part of the training data, so there is a massive lag.
Whoever cracks continuous, customized (per-user, for instance) learning without just extending the context window is going to make a big splash. And I don't mean cheats and shortcuts, I mean actually tuning the model based on received feedback.
eloisant 7 hours ago
aurareturn 16 hours ago
ben_w a day ago
> Models today are static, and human brains don't learn or adapt themselves with anything close to backpropagation.
While I suspect the latter is a real problem (because all mammal brains* are much more example-efficient than all ML), the former is more about productisation than anything fundamental: the models can be continuously updated already, but that makes it hard to deal with regressions. You kinda want an artefact with a version stamp that doesn't change itself until you release the update, especially as this isn't like normal software where specific features can be toggled on or off in isolation from everything else.
* I think. Also, I'm saying "mammal" because of an absence of evidence (to my *totally amateur* skill level) not evidence of absence.
program_whiz a day ago
A_D_E_P_T a day ago
You could have continual learning on text and still be stuck in the same "remixing baseline human communications" trap. It's a nasty one, very hard to avoid, possibly even structurally unavoidable.
As for the "just put a vision LLM in a robot body" suggestion: People are trying this (e.g. Physical Intelligence) and it looks like it's extraordinarily hard! The results so far suggest that bolting perception and embodiment onto a language-model core doesn't produce any kind of causal understanding. The architecture behind the integration of sensory streams, persistent object representations, and modeling time and causality is critically important... and that's where world models come in.
the_black_hand 13 hours ago
Yes, those are bottlenecks that world models don't solve. But the promise of world models is that, unlike LLMs, they might be able to learn things about the world that humans haven't written down. For example, we still don't fully know how insects fly. A world model could be trained on thousands of videos of insects and make a novel observation about insect trajectories. The premise is that despite being here for millennia, humans have only observed a tiny fraction of the world.
So I do buy his idea. But I disagree that you need world models to get to human level capabilities. IMO there's no fundamental reason why models can't develop human understanding based on the known human observations.
10xDev a day ago
The fact that models aren't continually updating seems more like a feature. I want to know the model is exactly the same as it was the last time I used it. Any new information it needs can be stored in its context window or in a file to read the next time it needs to access it.
kergonath a day ago
lxgr a day ago
edgyquant 19 hours ago
jnd-cz a day ago
carlmr 7 hours ago
In particular, they will require even more compute to get anything close to usable output. Human brains are super efficient at learning and producing output. We will need exponentially more compute for real-time learning from video + audio + haptic data.
a1371 8 hours ago
I never understood why we believe humans don't backprop. Isn't it that during the day we fill up our context (short term memory) and sleep is actually where we use that to backprop? Heck, everyone knows what "sleep on it" means.
cedilla 8 hours ago
eloisant 9 hours ago
LeCun is a researcher.
From his point of view, there is not much research left on LLMs. Sure, we can still improve them a bit with engineering around them, but he's more interested in basic research.
stanfordkid 18 hours ago
It's pretty simple... the word circle and what you can correlate to it via english language description has somewhat less to do with reality than a physical 3D model of a circle and what it would do in an environment. You can't just add more linguistic description via training data to change that. It doesn't really matter that you can keep back propagating because what you are back propagating over is fundamentally and qualitatively less rich.
slashdave 14 hours ago
If your model is poor, no amount of learning can fix it. If you don't think your model architecture is limited, you aren't looking hard enough.
energy123 a day ago
I don't understand why online learning is that necessary. If you took Einstein at 40 and surgically removed his hippocampus so he can't learn anything he didn't already know (meaning no online learning), that's still a very useful AGI. A hippocampus is a nice upgrade to that, but not super obviously on the critical path.
staticman2 a day ago
daxfohl a day ago
zelphirkalt a day ago
a-french-anon a day ago
andy12_ a day ago
jeltz a day ago
andsoitis a day ago
anon7000 16 hours ago
I don’t understand your view. Reality is that we need some way to encode the rules of the world in a more definitive way. If we want models to be able to make assertive claims about important information and be correct, it’s very fair to theorize they might need a more deterministic approach than just training them more. But it’s just a theory that this will actually solve the problem.
Ultimately, we still have a lot to learn and a lot of experiments to do. It’s frankly unscientific to suggest any approaches are off the table, unless the data & research truly proves that. Why shouldn’t we take this awesome LLM technology and bring in more techniques to make it better?
A really, really basic example is chess. Current top AI models still don't know how to play it (https://www.software7.com/blog/ai_chess_vs_1983_atari/). The models are surely trained on source material that includes chess rules, and even high-level chess games. But the models are not learning how to play chess correctly. They don't have a model of how chess actually works -- they only have a non-deterministic prediction based on what they've seen, even after being trained on more data about the topic than any chess novice has ever seen. And this is probably one of the easiest things for AI to simulate: very clear/brief rules, small problem space, no hidden information. But it can't handle the massive decision space because its prediction isn't based on the actual rules, just on "things that look similar".
(And yeah, I’m sure someone could build a specific LLM or agent system that can handle chess, but the point is that the powerful general purpose models can’t do it out of the box after training.)
Maybe more training & self-learning can solve this, but it’s clearly still unsolved. So we should definitely be experimenting with more techniques.
andy12_ 11 hours ago
edgyquant 19 hours ago
IIRC LeCun talks about a self-organizing hierarchy of real-world objects, and IMO this is exactly how the human brain actually learns.
nurettin 21 hours ago
Who knows? Perhaps attention really is all you need. Maybe our context window is really large. Or our compression is really effective. Perhaps adding external factors might be able to indirectly teach the models to act more in line with social expectations such as being embarrassed to repeat the same mistake, unlocking the final piece of the puzzle. We are still stumbling in the dark for answers.
mxkopy 18 hours ago
The reason LLMs fail today is that there's no meaning inherent to the tokens they produce other than what is captured by co-occurrence within text. Efforts like these are necessary because so much of "general intelligence" is convention defined by embodied human experience -- for example, arrows implying directionality, and even directionality itself.
charcircuit a day ago
Agents have the ability of continual learning.
andy12_ a day ago
jnd-cz a day ago
The sum of human knowledge is more than enough to come up with innovative ideas, and not every field works directly with the physical world. Still, I would say there's enough information in written history to create a virtual simulation of a 3D world with all physical laws applying (to a certain degree, because computation is limited).
What current LLMs lack is inner motivation to create something on their own without being prompted. To think in their free time (whatever that means for batch, on demand processing), to reflect and learn, eventually to self modify.
I have a simple brain, limited knowledge, limited attention span, limited context memory. Yet I create stuff based on what I see and read online. Nothing special -- sometimes based more on someone else's project, sometimes on my own ideas, which I have no doubt aren't that unique among 8 billion other people. Yet consulting with AI provides me with more ideas applicable to my current vision of what I want to achieve. Sure, it's mostly based on generally known (not always known to me) good practices. But my thoughts work the same way, only more limited by what I have slowly learned so far in my life.
jandrewrogers a day ago
> virtual simulation of 3d world
Virtual simulations are not substitutable for the physical world. They are fundamentally different theory problems that have almost no overlap in applicability. You could in principle create a simulation with the same mathematical properties as the physical world but no one has ever done that. I'm not sure if we even know how.
Physical world dynamics are metastable and non-linear at every resolution. The models we do build are created from sparse irregular samples with large error rates; you often have to do complex inference to know if a piece of data even represents something real. All of this largely breaks the assumptions of our tidy sampling theorems in mathematics. The problem of physical world inference has been studied for a couple decades in the defense and mapping industries; we already have a pretty good understanding of why LLM-style AI is uniquely bad at inference in this domain, and it mostly comes down to the architectural inability to represent it.
Grounded estimates of the minimum quantity of training data required to build a reliable model of physical world dynamics, given the above properties, are many exabytes. This data exists, so that is not a problem. The models will be orders of magnitude larger than current LLMs. Even if you solve the computer science and theory problems around representation so that learning and inference are efficient, few people are prepared for the scale of it.
(source: many years doing frontier R&D on these problems)
MITSardine a day ago
daxfohl a day ago
I guess you need two things to make that happen. First, more specialization among models and an ability to evolve, else you get all instances thinking roughly the same thing, or deer in the headlights where they don't know what of the millions of options they should think about. Second, fewer guardrails; there's only so much you can do by pure thought.
The problem is, idk if we're ready to have millions of distinct, evolving, self-executing models running wild without guardrails. It seems like a contradiction: you can't achieve true cognition from a machine while artificially restricting its boundaries, and you can't lift the boundaries without impacting safety.
slibhb 6 hours ago
> LLMs are fundamentally capped because they only learn from static text -- human communications about the world -- rather than from the world itself, which is why they can remix existing ideas but find it all but impossible to produce genuinely novel discoveries or inventions.
This seems wrong to me on a few levels.
First, there is no way to "experience the world directly," all experience is indirect, and language is a very good way of describing the world. If language was a bad choice or limited in some fundamental way, LLMs wouldn't work as well as they do.
Second, novel ideas are often existing ideas remixed. It's hard/impossible to point to any single idea that sprung from nowhere.
Third, you can provide an LLM with real-world information and suddenly it's "interacting with the world". If I tell an LLM about the US war on Iran, I am in a very real sense plugging it into the real world, something that isn't part of its training data.
Finally, modern LLMs are multi-modal, meaning they have the ability to handle images/video. My understanding is that they use some kind of adapter to turn non-text data into data that the LLM can make sense of.
A_D_E_P_T 5 hours ago
Re 1: You experience the world in real time (or close enough) via your senses, which combine to form a spatiotemporal sense: A sense of being a bounded entity in space and time. The LLM has none of that. They experience the world via stale old text and text derivatives.
Re 2: There's something tremendous in the fact, staring us right in the face, that LLMs are unable to meaningfully contribute to academic/medical research. I'm not saying that they need to perform on the level of a one-in-a-million Maxwell, DaVinci, or whatever. But as Dwarkesh asked one year ago: "What do you make of the fact that these things have basically the entire corpus of human knowledge memorized and they haven't been able to make a single new connection that has led to a discovery?"
Re 3: Sure, you can hold it by the hand and spoonfeed it. You can also create for it a mirror reality which doesn't exist, which is pure fiction. Given how limited these systems are, I don't suppose it makes much of a difference. There's no way for it to tell. The "human in the loop" is its interaction with the world. And a pale, meager interaction it is.
Re 4: Static, old images/video that they were trained on some months ago. That, too, is no way of interacting with the world.
crazygringo 4 hours ago
slibhb 5 hours ago
ljm a day ago
I'm gonna be a cynic and say this is money following money and Yann LeCun is an excellent salesman.
I 100% guarantee that he will not be holding the bag when this fails. Society will be protecting him.
On that proviso I have zero respect for this guy.
thinkling 19 hours ago
Um, why would anyone be "holding the bag" and who needs protecting by society? He's not taking out a loan, he's getting capital investment in a startup. People are gambling that he will do well and make money for them. If they gamble wrong, that's on them. Society won't be doing anything either way because investors in startups that fail don't get anything.
roromainmain 21 hours ago
Agree. LLMs operate in the domain of language and symbols, but the universe contains much more than that. Humans also learn a great deal from direct phenomenological experience of the world, even without putting those experiences into words. I remember a talk by Yann LeCun where he pointed out that in just the first couple of years of life, a human baby is exposed to orders of magnitude more sensory data (vision, sound, etc.) than what current LLMs are typically trained on. This seems like a major limitation of purely language-based models.
Unearned5161 a day ago
I have a pet peeve with the concept of "a genuinely novel discovery or invention", what do you imagine this to be? Can you point me towards a discovery or invention that was "genuinely novel", ever?
I don't think it makes sense conceptually unless you're literally referring to discovering new physical things like elements or something.
Humans are remixers of ideas. That's all we do all the time. Our thoughts and actions are dictated by our environment and memories; everything must necessarily be built up from pre-existing parts.
davidfarrell a day ago
W. Brian Arthur's book "The Nature of Technology" provides a framework for classifying new technology as elemental vs. innovative that I find helpful. For example, the Hunt-McIlroy diff operates on the phenomenon that ordered correspondence survives editing. That was an invention (discovery of a natural phenomenon and a means to harness it). Myers diff improves the performance by exploiting the fact that text changes are sparse. That's innovation. A Python app using libdiff -- that's engineering. And then you might say, in terms of "descendants": invention > innovation > engineering. But it's just a perspective.
0x3f a day ago
Novel things can be incremental. I don't think LLMs can do that either, at least I've never seen one do it.
A_D_E_P_T a day ago
Suno is transformer-based; in a way it's a heavily modified LLM.
You can't get Suno to do anything that's not in its training data. It is physically incapable of inventing a new musical genre. No matter how detailed the instructions you give it, and even if you cheat and provide it with actual MP3 examples of what you want it to create, it is impossible.
The same goes for LLMs and invention generally, which is why they've made no important scientific discoveries.
You can learn a lot by playing with Suno.
hodgehog11 a day ago
chpatrick a day ago
bonesss a day ago
Genuinely novel discovery or invention?
Einstein’s theory of relativity springs to mind, which is deeply counter-intuitive and relies on the interaction of forces unknowable to our basic Newtonian senses.
There’s an argument that it’s all turtles (someone told him about universes, he read about gravity, etc), but there are novel maths and novel types of math that arise around and for such theories which would indicate an objective positive expansion of understanding and concept volume.
chpatrick a day ago
jungturk a day ago
mirekrusin 12 hours ago
Thank you for not saying "language", but "text".
It's true, but it's also true that text is very expressive.
Programming languages (huge, formalized expressiveness), math and other formal notation, SQL, HTML, SVG, JSON/YAML, CSV, domain-specific encodings (e.g. for DNA/protein sequences, or for music), Verilog/VHDL for hardware, DOT/Graphviz/Mermaid, OBJ for 3D, Terraform/Nix, Dockerfiles, git diffs/patches, URLs, etc.
The scope is very wide and covers enough to be called generic especially if you include multi modalities that are already being blended in (images, videos, sound).
I'm cheering for Yann, hope he's right and I really like his approach to openness (hope he'll carry it over to his new company).
At the same time, current architectures do exist now and do work, by far exceeding his or anybody else's expectations, and they continue doing so. It may also be true that they're here to stay for a long time on text and other supported modalities, as they're cheaper to train.
vidarh 11 hours ago
It's just not true that LLMs are limited to "static text". Data is data. Sensory input is still just data, and multimodal models have been a thing for a while. Ongoing learning and more extensive short-term memory are a challenge, and so I am all for research into alternative architectures, but so much of the discourse about the limitations of LLMs acts as if they have limitations they do not have.
masteranza a day ago
A few years ago I came up with this simple thought experiment to convince myself that LLMs won't achieve superhuman level (in the sense of being better than all human experts):
Imagine that we made an LLM out of all dolphin songs ever recorded, would such LLM ever reach human level intelligence? Obviously and intuitively the answer is NO.
Your comment actually extended this observation for me, sparking hope that systems consuming the natural world as input might avoid this trap. But then I realized that tool use & learning can in fact be all that's needed for singularity, while consuming raw data streams most of the time might actually be counterproductive.
kadushka 21 hours ago
Imagine that we made an LLM out of all dolphin songs ever recorded, would such LLM ever reach human level intelligence?
It could potentially reach super-dolphin level intelligence
smokel 8 hours ago
> Imagine that we made an LLM out of all dolphin songs ever recorded, would such LLM ever reach human level intelligence? Obviously and intuitively the answer is NO.
Not so fast. People have built pretty amazing thought frameworks out of a few axioms, a few bits, or a few operations in a Turing machine. Dolphin songs are probably more than enough to encode the game of life. It's just how you look at it that makes it intelligence.
hodgehog11 21 hours ago
I mean no offense here, but I really don't like this attitude of "I thought for a bit and came up with something that debunks all of the experts!". It's the same stuff you see with climate denialism, but it seems to be considered okay when it comes to AI. As if the people who have spent all day, every day, on this for decades have not thought of it.
Dataset limitations have been well understood since the dawn of statistics-based AI, which is why these models are trained on data and RL tasks that are as wide as possible, and are assessed by generalization performance. Most of the experts in ML, even the mathematically trained ones, within the last few years acknowledge that superintelligence (under a more rigorous definition than the one here) is quite possible, even with only the current architectures. This is true even though no senior researcher in the field really wants superintelligence to be possible, hence the dozens of efforts to disprove its potential existence.
mountainriver 18 hours ago
Okay but most modern LLMs are multimodal, and it’s fairly easy to make an LLM multimodal.
Also there is no evidence that novel discoveries are more than remixes. This is heavily debated but from what we’ve seen so far I’m not sure I would bet against remix.
World models are great for specific kinds of RL or MPC. Yann is betting heavily on MPC; I'm not sure I agree with this, as it's currently computationally intractable at scale.
jimbo808 18 hours ago
You're right that world models are the bottleneck, but people underestimate the staggering complexity gap between modeling the physical world and modeling a one-dimensional stream of text. Not only is the real world high-dimensional, continuous, noisy, and vastly more information dense, it's also not something for which there is an abundance of training data.
robrenaud a day ago
Was Alphago's move 37 original?
In the last step of training LLMs -- reinforcement learning from verifiable rewards -- models are trained to maximize the probability of solving problems using their own output, driven by a reward signal akin to winning in Go. It's not just imitating human-written text.
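A toy sketch of that last step (the candidate answers, the verifier, and the update rule here are all illustrative stand-ins, not any lab's actual setup): a "policy" over outputs is nudged toward whatever the verifier rewards, rather than toward imitating text.

```python
import random

random.seed(0)

# Candidate outputs for the (toy) problem "what is 2+2?"
answers = ["2", "3", "4", "5"]
weights = {a: 1.0 for a in answers}  # uniform initial "policy"

def verify(answer):
    # Verified reward: 1.0 only for the provably correct answer.
    return 1.0 if answer == "4" else 0.0

def sample():
    # Draw an answer proportionally to its current weight.
    total = sum(weights.values())
    r = random.uniform(0, total)
    for a, w in weights.items():
        r -= w
        if r <= 0:
            return a
    return answers[-1]

# Reinforce: sampled outputs that earn reward get more probability mass.
for _ in range(200):
    a = sample()
    weights[a] += 0.5 * verify(a)

best = max(weights, key=weights.get)
print(best)  # the policy concentrates on the verified answer, "4"
```

Real RLVR updates model parameters with policy-gradient methods instead of a weight table, but the loop is the same shape: sample, verify, reinforce.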
Fwiw, I agree that world models and some kind of learning from interacting with physical reality, rather than massive amounts of digitized gym environments is likely necessary for a breakthrough for AGI.
energy123 a day ago
Why can't LLMs (transformers trained on multimodal token sequences, potentially containing spatiotemporal information) be a world model?
LarsDu88 a day ago
I really hate the world-model terminology, but LeCun's actual low-level gripe with autoregressive LLMs as they stand now is that the loss function needs to reconstruct the entirety of the input. Anything less than pixel-perfect reconstruction on images is penalized. Token-by-token reconstruction is also biased towards that same level of granularity.
The density of information in the spatiotemporal world is very very great, and a technique is needed to compress that down effectively. JEPAs are a promising technique towards that direction, but if you're not reconstructing text or images, it's a bit harder for humans to immediately grok whether the model is learning something effectively.
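A toy way to see the difference (the "encoder" and the data below are illustrative stand-ins, not from any real JEPA implementation): a reconstruction loss punishes every pixel, while a JEPA-style loss compares compressed latent summaries, so inputs the encoder considers equivalent cost nothing.

```python
def mse(a, b):
    # Mean squared error between two equal-length vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def encode(pixels):
    # Stand-in "encoder": summarize pixels by mean and range,
    # deliberately discarding per-pixel detail.
    return [sum(pixels) / len(pixels), max(pixels) - min(pixels)]

target = [0.9, 0.1, 0.9, 0.1]      # "ground truth" frame
prediction = [0.1, 0.9, 0.1, 0.9]  # same pixel values, permuted

pixel_loss = mse(prediction, target)                   # punishes every pixel
latent_loss = mse(encode(prediction), encode(target))  # punishes the summary only

print(pixel_loss, latent_loss)  # pixel loss is large; latent loss is ~0
```

The open question, as above, is choosing an encoder that keeps the structure you care about while throwing away the noise you don't.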
I think that very soon we will see JEPA-based language models, but their key domain may very well be robotics, where machines really need to experience and reason about the physical world differently than in a purely text-based world.
energy123 a day ago
ForHackernews a day ago
https://medium.com/state-of-the-art-technology/world-models-...
> One major critique LeCun raises is that LLMs operate only in the realm of language, which is a simple, discrete space compared to the continuous, complex physical world we live in. LLMs can solve math problems or answer trivia because such tasks reduce to pattern completion on text, but they lack any meaningful grounding in physical reality. LeCun points out a striking paradox: we now have language models that can pass the bar exam, solve equations, and compute integrals, yet “where is our domestic robot? Where is a robot that’s as good as a cat in the physical world?” Even a house cat effortlessly navigates the 3D world and manipulates objects — abilities that current AI notably lacks. As LeCun observes, “We don’t think the tasks that a cat can accomplish are smart, but in fact, they are.”
energy123 a day ago
mrguyorama a day ago
whiplash451 a day ago
The term LLM is confusing your point because VLMs belong to the same bin according to Yann.
Using the term autoregressive models instead might help.
kadushka 21 hours ago
Diffusion models are not autoregressive but have the same limitations
10xDev a day ago
Whether it is text or an image, it is just bits for a computer. A token can represent anything.
A_D_E_P_T a day ago
Sure, but don't conflate the representation format with the structure of what's being represented.
Everything is bits to a computer, but text training data captures the flattened, after-the-fact residue of baseline human thought: Someone's written description of how something works. (At best!)
A world model would need to capture the underlying causal, spatial, and temporal structure of reality itself -- the thing itself, that which generates those descriptions.
You can tokenize an image just as easily as a sentence, sure, but a pile of images and text won't give you a relation between the system and the world. A world model, in theory, can. I mean, we ought to be sufficient proof of this, in a sense...
firecall a day ago
Bombthecat 20 hours ago
Can a token represent concentration, will?
10xDev 9 hours ago
bsenftner a day ago
There will be no "unlocking of AGI" until we develop a new science capable of artificial comprehension. Comprehension is the cornucopia that produces everything we are: given raw stimulus, an entire communicating Universe is generated, with a plethora of highly advanced predator/prey characters in an infinitely complex dynamic, and human science and technology have no idea how to artificially make sense of that in a simultaneous unifying whole. That's comprehension.
chilmers a day ago
Ironically, your comment is practically incomprehensible.
copperx a day ago
8bitsrule 12 hours ago
Gotta say, good luck with that effort. Lenat started Cyc 42 years ago, and after a while it seemed to disappear. 'Understanding' the 'physical world' is something that a few -may- start to approach intuitively after a decade or five of experience (Einstein, Maxwell, et al.). But the idea of feeding a machine facts and equations ... and dependence on human observations ... seems unlikely to lead to 'mastering the physical world'. Let alone for $1 billion.
rvz a day ago
A lot more justifiable than say, Thinking Machines at least. But we will "see".
World models and vision seem like a great use case for robotics, which I can imagine being the main driver of AMI.
kypro a day ago
> LLMs are fundamentally capped because they only learn from static text -- human communications about the world -- rather than from the world itself, which is why they can remix existing ideas but find it all but impossible to produce genuinely novel discoveries or inventions.
No hate, but this is just your opinion.
The definition of "text" here is extremely broad -- an SVG is text, but it's also an image format. It's not inconceivable that an AI model trained on lots of SVG "text" might build internal models to help it "visualise" SVGs, in the same way you might visualise objects in your mind when reading a description of them.
The human brain only has electrical signals for IO, yet we can learn and reason about the world just fine. I don't see why the same wouldn't be possible with textual IO.
daxfohl 21 hours ago
Yeah I don't even think you'd need to train it. You could probably just explain how SVG works (or just tell it to emit coordinates of lines it wants to draw), and tell it to draw a horse, and I have to imagine it would be able to do so, even if it had never been trained on images, svg, or even cartesian coordinates. I think there's enough world model in there that you could simply explain cartesian coordinates in the context, it'd figure out how those map to its understanding of a horse's composition, and output something roughly correct. It'd be an interesting experiment anyway.
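For what it's worth, the first half of that experiment -- turning a list of line coordinates into a valid image -- really is just string formatting. The "horse" below is my own crude stand-in for what a model might emit:

```python
def lines_to_svg(segments, size=100):
    # Build an SVG document (plain text) from line segments given as
    # ((x1, y1), (x2, y2)) pairs in simple Cartesian coordinates.
    body = "".join(
        f'<line x1="{x1}" y1="{y1}" x2="{x2}" y2="{y2}" stroke="black"/>'
        for (x1, y1), (x2, y2) in segments
    )
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{size}" height="{size}">{body}</svg>')

# A very abstract four-legged stick figure: body, four legs, a neck.
horse = [((20, 60), (80, 60)),
         ((20, 60), (20, 90)),
         ((35, 60), (35, 90)),
         ((65, 60), (65, 90)),
         ((80, 60), (80, 90)),
         ((80, 60), (90, 40))]

svg = lines_to_svg(horse)
print(svg)
```

So the interesting question is entirely whether the model's internal "horse" maps onto coordinates at all, not whether it can produce the markup.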
But yeah, I can't imagine that LLMs don't already have a world model in there. They have to. The internet's corpus of text may not contain enough detail to allow an LLM to differentiate between similar-looking celebrities, but it's plenty of information to allow it to create a world model of how we perceive the world. And text is a vastly more information-dense means of doing so.
uoaei 18 hours ago
> There are a lot more degrees of freedom in world models.
Perhaps for the current implementations this is true. But the reason the current versions keep failing is that world dynamics has multiple orders of magnitude fewer degrees of freedom than the models that are tasked to learn them. We waste so much compute learning to approximate the constraints that are inherent in the world, and LeCun has been pressing the point the past few years that the models he intends to design will obviate the excess degrees of freedom to stabilize training (and constrain inference to physically plausible states).
If my assumption is true then expect Max Tegmark to be intimately involved in this new direction.
_s_a_m_ 19 hours ago
Really? As if everyone hasn't been telling him this for the last 10 years, especially Gary Marcus, whom he ridiculed on Twitter at every occasion, and now, silently, like a dog returning home, he switches to Gary's position. As if anyone was waiting for this; even 5 years ago this was old news, and Tenenbaum has been building world models for a long time. People in pop venture capital culture don't seem to know what is going on in research. Makes them easier to milk.
ml-anon a day ago
Honestly, how do people who know so little have this much confidence to post here?
mvc a day ago
You must be new here
A_D_E_P_T 20 hours ago
Care to explain what led to this reaction?
chriskanan a day ago
I had lunch with Yann last August, about a week after Alex Wang became his "boss." I asked him how he felt about that, and at the time he told me he would give it a month or two and see how it goes, and then figure out if he should stay or find employment elsewhere. I told him he ought to just create his own company if he decides to leave Meta to chase his own dream, rather than work on the dreams of others.
That said, while I 100% agree with him that LLMs won't lead to human-like intelligence (I think AGI is now an overloaded term, but Yann uses it in its original definition), I'm not fully on board with his world model strategy as the path forward.
yalok 20 hours ago
> I'm not fully on board with his world model strategy as the path forward
can you please elaborate on your strategy as the path forward?
echelon 16 hours ago
You have to understand the strategy of all the other players:
Build attention-grabbing, monetizable models that subsidize (at least in part) the run up to AGI.
Nobody is trying to one-shot AGI. They're grinding and leveling up while (1) developing core competencies around every aspect of the problem domain and (2) winning users.
I don't know if Meta is doing a good job of this, but Google, Anthropic, and OpenAI are.
Trying to go straight for the goal is risky. If the first results aren't economically viable or extremely exciting, the lab risks falling apart.
This is the exact point that Musk was publicly attacking Yann on, and it's likely the same one that Zuck pressed.
SilverBirch 9 hours ago
There are two points here. The first is that a strategy of monetizing models to fund the goal of reaching AGI is indistinguishable from just running a business selling LLM access. You don't actually need to be trying to reach AGI; you can just run an LLM company, and that is probably what these companies are largely doing. The AGI talk is just a recruiting/marketing strategy.
Secondly, it's not clear that the current LLMs are a run-up to AGI. That's what LeCun is betting: that the LLM labs are chasing a local maximum.
khafra 7 hours ago
I mean, Sutskever and Carmack are trying to one-shot AGI. We just don't talk about them as much as we do the labs with products because their labs aren't selling products.
YetAnotherNick 12 hours ago
> Trying to go straight for the goal is risky.
That's the point of it. You need to take more risk for different approach. Same as what OpenAI did initially.
Oras a day ago
> But this is not an applied AI company.
There is absolutely no doubt about Yann's impact on AI/ML, but he had access to many more resources in Meta, and we didn't see anything.
It could be a management issue, though, and I sincerely wish we will see more competition, but from what I quoted above, it does not seem like it.
Understanding the world through videos (mentioned in the article) is just what video models have already done, and they are getting pretty good (see Seedance, Kling, Sora, etc.). So I'm not quite sure how what he proposed would work.
andreyk a day ago
"and we didn't see anything" is not justified at all.
Meta absolutely has (or at least had) a world-class industry AI lab and has published a ton of great work and open source models (granted, their LLM open source stuff failed to keep up with Chinese models in 2024/2025; their other open source stuff for things like segmentation doesn't get enough credit though). Yann's main role was Chief AI Scientist, not any sort of product role, and as far as I can tell he did a great job building up and leading a research group within Meta.
He deserved a lot of credit for pushing Meta to very open to publishing research and open sourcing models trained on large scale data.
Just as one example, Meta (together with NYU) just published "Beyond Language Modeling: An Exploration of Multimodal Pretraining" (https://arxiv.org/pdf/2603.03276) which has a ton of large-experiment backed insights.
Yann did seem to end up with a bit of an inflated ego, but I still consider him a great research lead. Context: I did a PhD focused on AI, and Meta's group had a similar pedigree as Google AI/Deepmind as far as places to go do an internship or go to after graduation.
nextos 14 hours ago
For instance, under Yann's direction Meta FAIR produced the ESM protein sequence model, which is less hyped than AlphaFold, but has been incredibly influential. They achieved great performance without using multiple alignments as an input/inductive bias. This is incredibly important for large classes of proteins where multiple alignments are pretty much noise.
Oras a day ago
I wasn't criticising his scientific contribution at all; that's why I started my comment by praising what he did.
Creating a startup has to be about a product. When you raise 1B, investors are expecting returns, not papers.
stein1946 a day ago
> There is absolutely no doubt about Yann's impact on AI/ML, but he had access to many more resources in Meta, and we didn't see anything.
That's true for 99% of the scientists, but dismissing their opinion based on them not having done world shattering / ground breaking research is probably not the way to go.
> I sincerely wish we will see more competition
I really wish we don't, science isn't markets.
> Understanding world through videos
The word "understanding" is doing a lot of heavy lifting here. I find myself prompting again and again for corrections on an image or a summary and "it" still does not "understand" and keeps doing the same thing over and over again.
GorbachevyChase a day ago
Do not keep bad results in context. You have to purge them to prevent them from affecting the next output. LLMs are deceptively capable, but they don't respond like a person. You can't count on implicit context. You can't count on parts of the implicit context having more weight than others.
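Concretely, "purging" can be as simple as rebuilding the message list before the next call. A sketch against the common chat-message shape; the `flagged_bad` key is a marker you'd add yourself, not a real API field:

```python
def prune_bad_turns(messages):
    """Drop assistant turns marked bad, plus the user retry that followed them."""
    kept, skip_next_user = [], False
    for msg in messages:
        if msg.get("flagged_bad"):
            skip_next_user = True        # drop the bad assistant turn
            continue
        if skip_next_user and msg["role"] == "user":
            skip_next_user = False       # drop the "no, try again" turn too
            continue
        kept.append(msg)
    return kept

history = [
    {"role": "user", "content": "Summarize the report."},
    {"role": "assistant", "content": "(wrong summary)", "flagged_bad": True},
    {"role": "user", "content": "No, that's wrong, try again."},
    {"role": "user", "content": "Summarize the report, focusing on Q3."},
]
print(prune_bad_turns(history))
```

The next call then sees only the clean rephrased request, with nothing for the model to anchor on.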
torginus a day ago
Most folks get paid a lot more in a corporate job than tinkering at home - using the 'follow the money' logic it would make sense they would produce their most inspired works as 9-5 full stack engineers.
But passion and freedom to explore are often more important than resources.
dabeeeenster 21 hours ago
> It could be a management issue, though
Or, maybe it's just hard?
LarsDu88 a day ago
That's such a terrible take.
For a hot minute Meta had a top-3 LLM and open sourced the whole thing, even with LeCun's reservations about the technology.
At the same time Meta spat out huge breakthroughs in:
- 3d model generation
- Self-supervised label-free training (DINO). Remember Alexandr Wang built a multibillion dollar company just around having people in third world countries label data, so this is a huge breakthrough.
- A whole new class of world modeling techniques (JEPAs)
- SAM (Segment anything)
Oras a day ago
> - Self-supervised label-free training (DINO). Remember Alexandr Wang built a multibillion dollar company just around having people in third world countries label data, so this is a huge breakthrough.
If it was a breakthrough, why did Meta acquire Wang and his company? I'm genuinely curious.
htrp 5 hours ago
This is absolutely an applied AI company; the only question is whether the applied AI will be subordinated to the research.
lee 20 hours ago
In an interview, Yann mentioned that one reason he left Meta was that they were very focused on LLMs and he no longer believed LLMs were the path forward to reaching AGI.
YetAnotherNick a day ago
> we didn't see anything.
Is it a troll? Even if we just ignore Llama, Meta invented and released so much foundational research and open source code. I would say that the computer vision field would be years behind if Meta hadn't published some core research like DETR or MAE.
famouswaffles a day ago
You should ignore Llama because by his own admission,
>My only contribution was to push for Llama 2 to be open sourced.
koolala a day ago
Did he work on those vision models?
boccaff a day ago
Llama models pushed the envelope for a while, and having them "open-weight" allowed a lot of tinkering. I would say that most fine-tuned models evolved from work on top of Llama models.
oefrha a day ago
Llama wasn’t Yann LeCun’s work and he was openly critical of LLMs, so it’s not very relevant in this context.
Source: himself https://x.com/ylecun/status/1993840625142436160 (“I never worked on any Llama.”) and a million previous reports and tweets from him.
the_real_cher a day ago
He was suffocated by the corporate aspect of Meta, I suspect.
_giorgio_ a day ago
I can’t reconcile this dichotomy: most of the landmark deep learning papers were developed with what, by today’s standards, were almost ridiculously small training budgets — from Transformers to dropout, and so on.
So I keep wondering: if his idea is really that good — and I genuinely hope it is — why hasn’t it led to anything truly groundbreaking yet? It can’t just be a matter of needing more data or more researchers. You tell me :-D
samrus a day ago
It's a matter of needing more time, which is a resource even SV VCs are scared to throw around. Look at the timeline of all these advancements and how long they took:
LeCun introduced backprop for deep learning back in 1989
Hinton published on contrastive divergence in next-token prediction in 2002
AlexNet was 2012
Word2vec was 2013
Seq2seq was 2014
AIAYN ("Attention is All You Need") was 2017
UnicornAI was 2019
InstructGPT was 2022
This makes a lot of people think that things are just accelerating and that they can be along for the ride. But it's the years and years of foundational research that allow this to be done. That toll has to be paid for the successors of LLMs to be able to reason properly and operate in the world the way humans do. That sowing won't happen as fast as the reaping did. LeCun wants to plant those seeds; the others, who only want to eat the fruit, don't get that they have to wait.
nashadelic a day ago
Your take is brutal but spot on
az226 a day ago
Yann LeCun seeks $5B+ valuation for world model startup AMI (Amilabs).
He has hired LeBrun to take the helm as CEO.
AMI has also hired LeFunde as CFO and LeTune as head of post-training.
They’re also considering hiring LeMune as Head of Growth and LePrune to lead inference efficiency.
https://techcrunch.com/2025/12/19/yann-lecun-confirms-his-ne...
vit05 a day ago
Why didn't they just call it LeLabs?
adamors a day ago
I was thinking the same: are all the people he hires LeSomething, like those working at Bolson Construction having -son as a suffix?
O4epegb 12 hours ago
LeBron is missing out an opportunity to invest
nsbk 11 hours ago
Or LeX
vrganj 21 hours ago
The guy overseeing the funds is called LeFunde and the guy doing the fine-tuning LeTune??
sinuhe69 17 hours ago
He just made a joke
har2008preet 11 hours ago
These are all Claude agent names, right?
doruk101 a day ago
nominative determinists are running the world
baxtr 13 hours ago
It almost sounds as if an LLM thought this up!
andrepd a day ago
Bolson-ass hiring policy.
voxleone 7 hours ago
I rank with those who think human-like intelligence will require embeddings grounded in multiple physical sensory domains (vision, touch, audio, chemical sensing, etc.) fused into a shared world representation. That seems much closer to how biological intelligence works than text-only models. But if this path succeeds and produces systems with something like genuine understanding or sentience, there’s a deeper question: what is the moral status of such systems? If they have experiences or agency, treating them purely as tools could start to look uncomfortably close to slavery.
boringg 7 hours ago
It's interesting that you seem to be more concerned that we would potentially enslave human-like robots (while arguing sentience), when the more likely event is that we end up enslaved to/by our own creations.
I'd say, probability-wise, we don't create sentient-like behavior for a long time (low probability); the second circumstance is much more likely.
nashashmi 6 hours ago
Personal Agency is a strong characteristic of a personality. AI would have to acquire a personality first. It could probably do this by copying others statistically. In that case, it is only doing what someone else has done.
There is no such thing as real sentient AI theoretically. Our current models are only emulations of humans. Maybe in the future someone will figure out a way for computers to learn how to learn. Then maybe someone will codify computers to acquire base methodologies vs just implementing any methodology it finds in the world.
djeastm 7 hours ago
It's an interesting question. On one hand we don't worry about this much with animals, the most advanced of which we know have personalities, moods, etc (Pigs, for instance). They really only seem to lack the language and higher-order reasoning skills. But where's the line?
pegasus 6 hours ago
We do worry much more about animal well-being than we worry about our "lumps of metal" (as a cousin comment fittingly put it). As we should, and generally I think we should worry much more about animal welfare. I find concerns for AI system welfare voiced by people like Thomas Metzinger wildly misguided.
confidantlake 7 hours ago
And while they don't have language like we do, dogs can understand basic commands and they aren't even the smartest animals.
carra 7 hours ago
I don't think they will have sentience or agency unless they are designed to:
1) Keep thinking continuously, as opposed to current AIs that stop functioning between prompts.
2) Have permanent memory of their previous experiences.
3) Be able to alter their own weights based on those experiences (a.k.a. learn).
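As a toy illustration of those three properties (and nothing more), with trivial stand-ins for each: "thinking" is a scalar prediction, "memory" is an append-only list, and "learning" is a one-parameter online update nudging a weight toward each observation.

```python
import random

class ToyContinualAgent:
    def __init__(self):
        self.weight = 0.0      # stand-in for model parameters
        self.memory = []       # persistent record of past experiences

    def step(self, observation, lr=0.1):
        prediction = self.weight              # "think" on every tick
        error = observation - prediction
        self.weight += lr * error             # alter weights from experience
        self.memory.append((observation, prediction))
        return prediction

agent = ToyContinualAgent()
random.seed(0)
for _ in range(200):                          # a continuously running loop
    agent.step(1.0 + random.uniform(-0.1, 0.1))

print(round(agent.weight, 2), len(agent.memory))
```

The weight drifts toward the observed signal and the memory persists across steps, which is exactly what a prompt-to-prompt stateless model doesn't do.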
snek_case 7 hours ago
That's the direction the field is already going with "agents". People want autonomous AI agents that are capable of acting independently and that have more and more capabilities. For example, something like Claude code, but that acts as a sidekick that is constantly running, and able to act without being prompted. That's what people are imagining when they talk about teams of agents. You act as a manager, but your coding agents are off working on various features and only check in periodically.
butlike 6 hours ago
They won't have sentience because it will be antithetical to capitalist business ideology. There's no good business value proposition for having the AI daydream like humans do, or 'sleep' while 'on', or have inspirational thought that might be seen as 'wrong' or useless. If that behavior ever manifests, it will probably be stamped out in a future release.
You can't justify to the board the wasted money to have the android dream.
re5i5tor 7 hours ago
Does anyone else see an echo of Severance (Apple TV series) here?
mc32 7 hours ago
What’s the difference from thinking your brain is a slave to your body or vice versa?
We only think slavery is bad because we have a philosophy and language to describe and evaluate the situation. It's unlikely ant colonies understand the concept of slavery, eunuchs, or feminism. We have the framework to understand these concepts; without it we'd be oblivious to them.
nprateem 7 hours ago
Lol. A lump of metal can't be sentient.
ToValueFunfetti 7 hours ago
Yeah, call me when Yann incorporates the four humors and the elemental force of fire, from which we draw life. Metal lacks the nature for this purpose.
heisig 7 hours ago
Says the bag of lipids and proteins :)
esafak an hour ago
I think the more likely retort will be that we can't be smart, by the AI's standard.
jrrv 7 hours ago
Lol. A lump of flesh can't be sentient
https://www.mit.edu/people/dpolicar/writing/prose/text/think...
julius_eth_dev 9 hours ago
LeCun has been pushing world models and joint embedding predictive architectures (JEPA) for years now as an alternative to the generative pretraining paradigm. The core bet — that you need learned abstract representations of physical dynamics rather than just next-token prediction — is compelling, but $1B is a lot of capital to validate an architecture that still hasn't demonstrated clear advantages over scaling what already works. The interesting question is whether this funding lets them finally show JEPA-style approaches outperforming autoregressive models on tasks requiring genuine physical reasoning, or if the money just gets absorbed into the same GPU scaling game everyone else is playing.
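For readers who haven't seen JEPA code, the placement of the loss is easy to sketch: the prediction error lives in embedding space, not in pixel or token space. Everything below is a deliberately tiny stand-in (frozen random linear encoders, a least-squares linear predictor); real JEPAs train encoders and predictor jointly with tricks to avoid representation collapse.

```python
import numpy as np

rng = np.random.default_rng(0)
D, E, N = 16, 4, 500                     # input dim, embedding dim, samples

enc_ctx = rng.normal(size=(E, D))        # context encoder (frozen stand-in)
enc_tgt = rng.normal(size=(E, D))        # target encoder (frozen stand-in)

x_ctx = rng.normal(size=(N, D))          # "current" observations
x_tgt = x_ctx + 0.01 * rng.normal(size=(N, D))   # "next" observations

z_ctx = x_ctx @ enc_ctx.T                # embed both sides
z_tgt = x_tgt @ enc_tgt.T

# Fit the predictor z_tgt ~ z_ctx @ P by least squares; the training
# signal is entirely in representation space, never raw observations.
P, *_ = np.linalg.lstsq(z_ctx, z_tgt, rcond=None)
loss = np.mean((z_ctx @ P - z_tgt) ** 2)
print(loss)
```

The contrast with generative pretraining is the target of the regression: a generative model would be penalized for every unpredictable pixel of x_tgt, while here only the embedded abstraction has to be predicted.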
leventilo 3 hours ago
Interesting that AMI is betting on video-first world models. A 4-year-old learns physics mostly through interaction, pushing, dropping, breaking things, not just watching. Vision helps but the feedback loop from acting in the world seems at least as important. Still, glad someone is putting $1B on a fundamentally different bet than "more text, bigger model."
mihaitoth a day ago
This couldn't have come at a better time, for 2 reasons.
1) the world has become a bit too focused on LLMs (although I agree that the benefits & new horizons that LLMs bring are real). We need research on other types of models to continue.
2) I almost wrote "Europe needs some aces". Although I'm European, my attitude is not at all that one of competition. This is not a card game. What Europe DOES need is an ATTRACTIVE WORKPLACE, so that talent that is useful for AI can also find a place to work here, not only overseas!
FartyMcFarter 9 hours ago
> What Europe DOES need is an ATTRACTIVE WORKPLACE, so that talent that is useful for AI can also find a place to work here, not only overseas!
There is DeepMind, OpenAI and Anthropic in London. Even after Brexit, London is still in Europe.
thih9 5 hours ago
Off topic, in case anyone wants to reject cookies, click the underlined "228" in the popup's:
> We, and our 228 partners use cookies
And then you'll see a "reject all" button. Can't make this up.
zahlman 2 hours ago
I just block "all cross-site cookies" in Firefox settings. This "may cause websites to break" but it hasn't affected anything I care about in years.
sbinnee 19 hours ago
So it is a startup? I expected it, in fact, from his reply to my concern. In my opinion, to explore the unknown, an institute like Mila, led by Yoshua Bengio, would have been more fitting. But Yann LeCun's career and his reply to my rant[1] speak for themselves. I wonder how he is going to make money. All my concerns aside, I wish him the best.
> You're absolutely right. Only large and profitable companies can afford to do actual research. All the historically impactful industry labs (AT&T Bell Labs, IBM Research, Xerox PARC, MSR, etc) were with companies that didn't have to worry about their survival. They stopped funding ambitious research when they started losing their dominant market position.
ZeroCool2u a day ago
Regardless of your opinion of Yann or his views on auto regressive models being "sufficient" for what most would describe as AGI or ASI, this is probably a good thing for Europe. We need more well capitalized labs that aren't US or China centric and while I do like Mistral, they just haven't been keeping up on the frontier of model performance and seem like they've sort of pivoted into being integration specialists and consultants for EU corporations. That's fine and they've got to make money, but fully ceding the research front is not a good way to keep the EU competitive.
brandonb a day ago
LeCun's technical approach with AMI will likely be based on JEPA, which is also a very different approach than most US-based or Chinese AI labs are taking.
If you're looking to learn about JEPA, LeCun's vision document "A Path Towards Autonomous Machine Intelligence" is long but sketches out a very comprehensive vision of AI research: https://openreview.net/pdf?id=BZ5a1r-kVsf
Training JEPA models is within reach, even for startups. For example, we're a 3-person startup who trained a health timeseries JEPA. There are JEPA models for computer vision and (even) for LLMs.
You don't need a $1B seed round to do interesting things here. We need more interesting, orthogonal ideas in AI. So I think it's good we're going to have a heavyweight lab in Europe alongside the US and China.
sanderjd a day ago
Have you published anything about your health time series model? Sounds interesting!
mandeepj a day ago
Appreciate your work! Healthcare is a regulated industry. Everything (research, proposals, FDA submissions, compliance docs, accreditation standards, etc.) is documented and follows a process, which means there's a lot of paperwork. You can't sneak in anything unverified or unreliable. Why does healthcare need a JEPA/world model?
tomrod a day ago
I've been working to understand the potential uses for JEPA. Outside of video, has anyone made a list of any type (geared towards dummies like me)?
Brajeshwar a day ago
There seem to be other news articles mentioning that they are setting up in Singapore as their base. https://www.straitstimes.com/business/ai-godfather-raises-1-...
Signez a day ago
Hm, Singapore looks more like one of their bases; they will have offices in Paris, Montréal, Singapore and New York (according to both this article and the interview Yann LeCun did this morning on France Inter, the most listened-to radio station in France).
Of course, each relevant newspaper in those areas highlights that it's coming to their place, but it really seems to be distributed.
fnands a day ago
Probably just a satellite office.
Might be to be close to some of Yann's collaborators like Xavier Bresson at NUS
stingraycharles a day ago
That's a Singaporean newspaper, though; not sure if it's objectively their main base, or just one of them.
RamblingCTO a day ago
Which would be a good idea, as a European. I'd hate to see the investment go to waste on taxes that are spent on stupid shit anyway. Should go into R&D not fighting bureaucracy.
throwpoaster a day ago
"Show me the incentive and I will show you the outcome."
Almost certainly the IP will be held in Singapore for tax reasons.
re-thc a day ago
> they are setting up in Singapore as their base
Europe in general has been tightening up their rules / taxes / laws around startups / companies especially tech and remote.
It's been less friendly these days.
barrell a day ago
While I’d love there to be a European frontier model, I do very much enjoy mistral. For the price and speed it outperforms any other model for my use cases (language learning related formatting, non-code non-research).
vessenes a day ago
Partner in a fund that wrote a small check into this (I have no private knowledge of the deal): while I agree that one's opinion on auto-regressive models doesn't matter, I think the fact of whether or not the auto-regressive models work matters a lot, particularly so in LeCun's case.
What’s different about investing in this than investing in say a young researcher’s startup, or Ilya’s superintelligence? In both those cases, if a model architecture isn’t working out, I believe they will pivot. In YL’s case, I’m not sure that is true.
In that light, this bet is a bet on YL’s current view of the world. If his view is accurate, this is very good for Europe. If inaccurate, then this is sort of a nothing-burger; company will likely exit for roughly the investment amount - that money would not have gone to smaller European startups anyway - it’s a wash.
FWIW, I don’t think the original complaint about auto-regression “errors exist, errors always multiply under sequential token choice, ergo errors are endemic and this architecture sucks” is intellectually that compelling. Here: “world model errors exist, world model errors will always multiply under sequential token choice, ergo world model errors are endemic and this architecture sucks.” See what I did there?
On the other hand, we have a lot of unused training tokens in videos, I’d like very much to talk to a model with excellent ‘world’ knowledge and frontier textual capabilities, and I hope this goes well. Either way, as you say, Europe needs a frontier model company and this could be it.
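The error-compounding argument quoted above is just arithmetic, which is also why the parody works: if each sequential choice is independently wrong with probability eps, the chance a length-n sequence is flawless is (1 - eps)^n, regardless of whether the sequential predictor emits tokens or world states. A two-line illustration:

```python
def p_flawless(eps, n):
    # Probability that n independent sequential choices are all correct.
    return (1 - eps) ** n

for n in (10, 100, 1000):
    print(n, p_flawless(0.01, n))
```

The real dispute is over the independence assumption: whether a model can notice and correct its own errors mid-stream, which this formula deliberately ignores.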
jsnell a day ago
I don't think it's "regardless", your opinion on LeCun being right should be highly correlated to your opinion on whether this is good for Europe.
If you think that LLMs are sufficient and RSI is imminent (<1 year), this is horrible for Europe. It is a distracting boondoggle exactly at the wrong time.
vidarh a day ago
It's sufficient to think that there is a chance that they will not be, however, for there to be non-zero value in funding other approaches.
And even if you think the chance is zero, unless you also think there is a zero chance they will be capable of pivoting quickly, it might still be beneficial.
I think his views are largely flawed, but chances are there will still be lots of useful science coming out of it as well. Even if current architectures can achieve AGI, it does not mean there can't also be better, cheaper, more effective ways of doing the same things, and so exploring the space more broadly can still be of significant value.
Tenoke a day ago
I think LeCun has been so consistently wrong and boneheaded for basically all of the AI boom, that this is much, much more likely to be bad than good for Europe. Probably one of the worst people to give that much money to that can even raise it in the field.
Insanity a day ago
Whenever I see claims about AGI being reachable through large language models, it reminds me of the miasma theory of disease. Many respectable medical professionals were convinced this was true, and they viewed the entire world through this lens. They interpreted data in ways that aligned with a miasmatic view.
Of course now we know this was delusional and it seems almost funny in retrospect. I feel the same way when I hear that 'just scale language models' suddenly created something that's true AGI, indistinguishable from human intelligence.
dheera a day ago
Just because you raise 1 billion dollars to do X doesn't mean you can't pivot and do Y if it is in the best interest of your mission.
I won't comment on Yann LeCun or his current technical strategy, but if you can avoid sunk cost fallacy and pivot nimbly I don't think it is bad for Europe at all. It is "1 billion dollars for an AI research lab", not "1 billion dollars to do X".
andrepd a day ago
It's been 6 months away for 5 years now. In that time we've seen relatively mild incremental changes, not any qualitative ones. It's probably not 6 months away.
next_xibalba a day ago
> RSI
Wait, we have another acronym to track. Is this the same/different than AGI and/or ASI?
crystal_revenge a day ago
> fully ceding the research front is not a good way to keep the EU competitive
Tech is ultimately a red herring as far as what's needed to keep the EU competitive. The EU has a trillion-dollar hole[0] to fill if it wants to replace the US military presence, and it currently imports over 50% of its energy. Unfortunately the current situation in Iran is not helping either of these, as it constrains energy further and risks requiring military intervention.
0. https://www.wsj.com/world/europe/europes-1-trillion-race-to-...
AngryData a day ago
Hard disagree. Military might isn't going to secure anybody into the future; modern society and our economies will only get more vulnerable as time goes on, and large wars or engagements will just push economies closer to collapse. And without a solid modern economy to back up the military, a modern military will fall apart.
gandalfstoe a day ago
Right, they really need a military industrial complex to be "competitive" :eyeroll. Are you suggesting regressing to the stone age?
chrisgd a day ago
33% of the business in a seed round is nuts
ak_111 a day ago
Can you elaborate more? Also, isn't this necessary for a lab that wants to compete with highly funded entities (like OpenAI, Anthropic)?
gigatexal a day ago
As an American here in Berlin, I, too welcome this. I would love for there to be many large well capitalized companies here for me to work at.
nailer a day ago
> Regardless of your opinion of Yann or his views on auto regressive models being "sufficient" for what most would describe as AGI or ASI
My main concern with LeCun is the number of times he has told people software is open source when its license directly violates the open source definition.
neversupervised a day ago
Is it good? This will almost certainly fail. Not because of Yann or Europe, but because these sorts of hyper-hyped projects fail. SSI and Thinking Machines haven't lived up to the hype.
ma2rten a day ago
Erm... OpenAI was hyped when it started, and it took 6 years to take off. It's way too early to declare that SSI and Thinking Machines have failed.
giancarlostoro a day ago
I didn't really know who he was, so I went and found his wikipedia, which is written like either he wrote it himself to stroke his ego, or someone who likes him wrote it to stroke his ego:
> He is the Jacob T. Schwartz Professor of Computer Science at the Courant Institute of Mathematical Sciences at New York University. He served as Chief AI Scientist at Meta Platforms before leaving to work on his own startup company.
That entire sentence before the remark about his service at Meta could have been axed; it's weird to me when people compare themselves to someone else who is well known. It's the most Kanye West thing you can do. Mind you, the more I read about him, the more I discovered he is in fact egotistical. Good luck having a serious engineering team with someone who is egotistical.
pama a day ago
You underestimate academia. Any academic who reads these two sentences only focuses on the first one: he has a named chair at Courant. In Germany, being a Prof is added to your ID card/passport and becomes part of your official name, like knighthood in other countries.
timr a day ago
It's not comparing him to anyone. He has an endowed professorship. This is standard in academia, and you give the name because a) it's prestigious for the recipient and b) it strokes the ego of the donor.
lairv a day ago
https://cims.nyu.edu/dynamic/news/1441/
This is just the official name of a chair at NYU. I'm not even sure Jacob T. Schwartz is more well known than Yann LeCun
bobwaycott a day ago
That’s not a comparison to another person. That’s his job title. It is not uncommon for universities to have distinguished chairs within departments named after a notable person—in this case, the founder of NYU’s Department of Computer Science.
g947o a day ago
Eh, that paragraph reads perfectly normal to me.
Either you have not read enough Wikipedia pages, or you have too much to complain about. (Or both.)
teleforce a day ago
It's really inevitable, isn't it? We are going from RAG to PAG, or physical augmented generation.
We already have PINNs, or physics-informed neural networks [1]. Soon we are going to have physical field computing via complex-valued network quantization (CVNN), which has recently been proposed for more efficient physical AI [2].
[1] Physics-informed neural networks:
https://en.wikipedia.org/wiki/Physics-informed_neural_networ...
[2] Ultra-efficient physical field computing by complex-valued network quantization:
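For readers unfamiliar with PINNs, the core idea can be sketched in a few lines of plain Python. This is only a toy illustration under assumed simplifications (a linear model u(x) = a*x + b, an analytic derivative, and a trivial ODE); real PINNs use neural networks with automatic differentiation:

```python
# Minimal physics-informed fit: choose parameters so the model obeys an ODE.
# Toy problem: u'(x) = 2 with u(0) = 1, exact solution u(x) = 2x + 1.
# The "physics loss" penalizes the ODE residual at collocation points;
# no solution data is used, only the governing equation and boundary condition.

def train_pinn(steps=2000, lr=0.01):
    a, b = 0.0, 0.0                    # parameters of the model u(x) = a*x + b
    xs = [i / 10 for i in range(11)]   # collocation points in [0, 1]
    for _ in range(steps):
        # For the linear model, du/dx = a everywhere (analytic derivative).
        # Loss = sum over xs of (u'(x) - 2)^2  +  boundary term (u(0) - 1)^2.
        grad_a = sum(2 * (a - 2) for _ in xs)  # d/da of the residual terms
        grad_b = 2 * (b - 1)                   # d/db of the boundary term
        a -= lr * grad_a
        b -= lr * grad_b
    return a, b

a, b = train_pinn()
print(round(a, 3), round(b, 3))  # converges to a = 2, b = 1
```

The training signal comes entirely from the governing equation and the boundary condition, not from labeled solution data, which is what distinguishes the physics-informed setup.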
fs111 a day ago
verdverm a day ago
Link does not work; it goes into a loop at the verify-human check with some weird redirect.
Looks like you appended the original URL to the end
Sebguer a day ago
Probably related to the reasoning behind: https://arstechnica.com/tech-policy/2026/02/wikipedia-bans-a...
Or you're using Cloudflare DNS.
verdverm a day ago
droidjj a day ago
Huh, it's working for me (on Firefox).
paxys a day ago
I feel like I'm the only one not getting the world models hype. We've been talking about them for decades now, and all of it is still theoretical. Meanwhile LLMs and text foundation models showed up, proved to be insanely effective, took over the industry, and people are still going "nah LLMs aren't it, world models will be the gold standard, just wait."
pendenthistory a day ago
I bet LLMs and world models will merge. World models essentially try to predict the future, with or without actions taken. LLMs with tokenized image input can also be made to predict the future image tokens. It's a very valuable supervised learning signal aside from pre-training and various forms of RL.
HarHarVeryFunny a day ago
I think "world models" is the wrong thing to focus on when contrasting the "animal intelligence" approach (which is what LeCun is striving for) with LLMs, especially since "world model" means different things to different people. Some people would call the internal abstractions/representations that an LLM learns during training a "world model" (of sorts).
The fundamental problem with today's LLMs that will prevent them from achieving human level intelligence, and creativity, is that they are trained to predict training set continuations, which creates two very major limitations:
1) They are fundamentally a COPYING technology, not a learning or creative one. Of course, as we can see, copying in this fashion will get you an extremely long way, especially since it's deep patterns (not surface level text) being copied and recombined in novel ways. But, not all the way to AGI.
2) They are not grounded, therefore they are going to hallucinate.
The animal intelligence approach, the path to AGI, is also predictive, but what you predict is the external world, the future, not training set continuations. When your predictions are wrong (per perceptual feedback) you take this as a learning signal to update your predictions to do better next time a similar situation arises. This is fundamentally a LEARNING architecture, not a COPYING one. You are learning about the real world, not auto-regressively copying the actions that someone else took (training set continuations).
Since the animal is also acting in the external world that it is predicting, and learning about, this means that it is learning the external effects of its own actions, i.e. it is learning how to DO things - how to achieve given outcomes. When put together with reasoning/planning, this allows it to plan a sequence of actions that should achieve a given external result ("goal").
Since the animal is predicting the real world, based on perceptual inputs from the real world, this means that its predictions are grounded in reality, which is necessary to prevent hallucinations.
So, to come back to "world models", yes an animal intelligence/AGI built this way will learn a model of how the world works - how it evolves, and how it reacts (how to control it), but this behavioral model has little in common with the internal generative abstractions that an LLM will have learnt, and it is confusing to use the same name "world model" to refer to them both.
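The learn-by-predicting loop described above can be reduced to a toy sketch. To be clear, this is only an illustration of prediction-error-driven learning, not LeCun's actual architecture; the linear "world" and the delta-rule update are assumptions chosen for brevity:

```python
# Toy predictive learner: the "world" evolves by actual(x) = 0.9*x + 0.5.
# The agent maintains an internal model pred(x) = w*x + c and updates it
# only from its own prediction error (perceptual feedback), never from
# someone else's training-set continuations.

def run(epochs=3000, lr=0.05):
    w, c = 0.0, 0.0                      # the agent's internal world model
    for _ in range(epochs):
        for x in (0.0, 1.0, 2.0, 3.0):   # varied states the agent encounters
            pred = w * x + c             # predict what the world does next
            actual = 0.9 * x + 0.5       # the world's true dynamics
            err = pred - actual          # grounded error signal
            w -= lr * err * x            # learn from the error
            c -= lr * err
    return w, c

w, c = run()
print(round(w, 2), round(c, 2))  # recovers the true dynamics: 0.9 0.5
```

The point of the sketch is the shape of the loop (predict, observe, correct), which is what distinguishes a learning architecture grounded in feedback from one that only imitates recorded outputs.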
qsera 3 hours ago
>The fundamental problem with today's LLMs that will prevent them from achieving human level intelligence, and creativity, is that they are trained to predict training set continuations, which creates two very major limitations:
I am of the opinion that imagination and creativity come from emotion; hence a machine that cannot "feel" will never be truly intelligent.
One can go ahead and ask: but you are just a lump of meat, so if you can feel, then a computer of similar structure can too.
If we assume that physical reality is fundamental, then that might make sense. But what if consciousness is fundamental and reality plays on consciousness?
Then randomness, and in turn ideas, come from the attributes of the fundamental reality that we are in.
I'll try to simplify it. Imagine you have an idea that extends your life for a day. Then, from all the possible worlds, in some worlds you find yourself living into the next day (in others you are dead). But this "idea" you had was just one among an infinite sea of possibilities, and your consciousness inside one such world observes you having that idea and surviving for a day!
If you want to create a machine that can do that, it implies that you would have to be a consciousness inside a world within it (because the machine cannot pick valid worlds from infinite samples; it just enables consciousness to exist in such suitable worlds). So it cannot be done in our reality!
Maybe "Quantum Darwinism" is what I am trying to describe here.
HarHarVeryFunny 2 hours ago
sothatsit a day ago
RL on LLMs has changed things. LLMs are not stuck in continuation predicting territory any more.
Models build up this big knowledge base by predicting continuations. But then their RL stage gives rewards for completing problems successfully. This requires learning and generalisation to do well, and indeed RL marked a turning point in LLM performance.
A year after RL was made to work, LLMs can now operate in agent harnesses over 100s of tool calls to complete non-trivial tasks. They can recover from their own mistakes. They can write 1000s of lines of code that work. I think it's no longer fair to categorise LLMs as just continuation-predictors.
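The shift from continuation prediction to reward-for-success can be illustrated with the simplest possible policy-gradient loop. A hypothetical 3-armed bandit stands in for the task here; real RL pipelines for LLMs are vastly more complex, so treat this only as a sketch of the reward-driven update:

```python
# Toy REINFORCE-style loop: instead of imitating a dataset, the policy is
# rewarded only when its chosen action succeeds. Arm 2 pays off most often,
# so the preferences shift toward it over time.
import math, random

random.seed(0)
prefs = [0.0, 0.0, 0.0]        # logits over 3 "actions"
payoff = [0.1, 0.3, 0.9]       # assumed success probability per action
lr = 0.1

def softmax(z):
    m = max(z)                 # subtract max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

for _ in range(5000):
    p = softmax(prefs)
    a = random.choices(range(3), weights=p)[0]            # sample an action
    reward = 1.0 if random.random() < payoff[a] else 0.0  # did the task succeed?
    # REINFORCE update: grad of log pi(a) w.r.t. logits is one_hot(a) - p
    for i in range(3):
        g = (1.0 if i == a else 0.0) - p[i]
        prefs[i] += lr * reward * g

print([round(v, 1) for v in prefs])  # preference mass concentrates on arm 2
```

The learning signal here is task success, not agreement with a reference continuation, which is the qualitative change the parent comment describes.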
libraryofbabel a day ago
HarHarVeryFunny a day ago
hdivider 17 hours ago
It's curious to me why we have no theory of intelligence. By which I mean an actual hard and verified theory, as in physics for gravity, electromagnetism, quantum mechanics.
Intelligence is simply not well-understood at a mathematical level. Like medieval engineers, we rely so heavily on experimentation in AI. We have no idea how far away from the human level we actually are. Or how far above the human level we can get. Or what, if anything, the limits of intelligence are.
jimbokun 17 hours ago
By now you would have to say it’s because “intelligence” is no more well defined than “consciousness” or “the soul”.
A more concrete idea like “learning” has been very strongly defined and quantifiable, which is maybe why progress in a theory of learning is so much more advanced than a theory of “intelligence“.
programjames 16 hours ago
I think this is the equivalent of a non-nuclear physicist asking, "why do we have no theory of nuclear physics?" in the late 1930s. Some people do, they're just not sharing it.
booleandilemma 14 hours ago
Who is more intelligent: a twenty-something influencer making money from her bedroom, or a grad student barely making ends meet?
Who is more intelligent: a politician, or a high school teacher?
What is intelligence, anyway?
Mistletoe 13 hours ago
We have a pretty good answer to your questions, they are called IQ tests. It’s not like measuring intelligence is uncharted territory.
https://www.scientificamerican.com/article/i-gave-chatgpt-an...
https://www.reddit.com/r/singularity/comments/1p5f0b1/gemini...
Gemini 3 Pro has an IQ of 130 now but we keep moving the goalposts and being like “not THAT intelligence, we mean this other intelligence”. I suspect, and history shows us this will be the case, that humans will judge AIs as not human and not intelligent and not needing rights way past the point where they should have rights, even when vastly superior to human intelligence.
namero999 6 hours ago
booleandilemma 8 hours ago
JimSanchez 2 hours ago
Interesting perspective from LeCun. The debate between scaling LLMs versus building systems that understand the physical world seems like one of the big open questions in AI right now. It will be fascinating to see whether “world models” end up complementing LLMs or eventually replacing parts of them.
kkwteh 8 hours ago
$1B at a $3.5B valuation. Seems problematic from a cap table perspective.
mkl a day ago
Seems like it's the second largest seed round anywhere after Thinking Machines Labs? https://news.crunchbase.com/venture/biggest-seed-round-ai-th...
That article is from June 2025 so may be out of date, and the definition of "seed round" is a bit fuzzy.
_giorgio_ a day ago
Thinking Machines looks half-dead already.
The giant seed round proves investors were willing to fund Mira Murati, not that the company had built anything durable.
Within months, it had already lost cofounder Andrew Tulloch to Meta, then cofounders Barret Zoph and Luke Metz plus researcher Sam Schoenholz to OpenAI; WIRED also reported that at least three other researchers left. At that point, citing it as evidence of real competitive momentum feels weak.
az226 a day ago
Was just a grift
hnarayanan a day ago
imjonse a day ago
At least some of that money should definitely go towards improving his powerpoint slides on JEPA related work :)
fauria a day ago
Archive: https://archive.md/5eZWq
The startup is Advanced Machine Intelligence Labs: https://amilabs.xyz/
insydian a day ago
As someone in the tech Twitter sphere: this is Yann and his ideas performing a suplex on LLM-based companies. It is a completely unfathomable way to start an AI research company: sell off only 20% and have $1 billion for screwing around for a few years.
insydian a day ago
I liken this to watching a godzilla esque movie. Just grab some popcorn and enjoy the ride.
fennecfoxy a day ago
Why world model? To emulate how we became sentient?
A "world" is just senses. In a way the context is one sense. A digital only world is still a world.
I think more success is in a model having high level needs and aspirations that are borne from lower level needs. Model architecture also needs to shift to multiple autonomous systems that interact, in the same ways our brains work - there's a lot under the surface inside our heads, it's not just "us" in there.
We only interact with our environment because of our low level needs, which are primarily: food, water. Secondary: mating. Tertiary: social/tribal credit (which can enable food, water and mating).
omegastick a day ago
Because if you have an explicit world model you can optimize against it.
It sounds like you are imagining tacking a world model onto an LLM. That's one approach but not what LeCun advocates for.
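"Optimizing against an explicit world model" roughly means planning by simulation: the agent queries its own model of the dynamics instead of acting in the real environment. A minimal sketch, where the one-dimensional dynamics and the exhaustive search are assumptions chosen purely for illustration:

```python
# Toy "plan against an explicit world model": the agent owns a simulator of
# the environment's dynamics and picks the action sequence whose simulated
# rollout ends closest to the goal, rather than reacting step by step.

def world_model(state, action):
    # assumed toy dynamics: state is a position on a line, action shifts it
    return state + action

def plan(state, goal, actions=(-1, 0, 1), depth=3):
    # exhaustive search over action sequences, using only the model
    best_seq, best_dist = None, float("inf")

    def search(s, seq):
        nonlocal best_seq, best_dist
        d = abs(goal - s)
        if d < best_dist:
            best_seq, best_dist = list(seq), d
        if len(seq) == depth:
            return
        for a in actions:
            seq.append(a)
            search(world_model(s, a), seq)  # simulate, don't act
            seq.pop()

    search(state, [])
    return best_seq

print(plan(0, 3))  # -> [1, 1, 1]
```

With a learned rather than hand-written `world_model`, the same structure lets an agent evaluate candidate plans internally before committing to any real action, which is one reason an explicit model is useful to optimize against.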
kerlap10 a day ago
What use is it to understand the physical world if all investments are misallocated to the virtual world? Perhaps the AI will detect that there is a housing shortage and politicians will finally believe it because AI said so?
Or is it to accelerate Skynet?
halayli 18 hours ago
I feel HN comments have been getting hijacked for a long time now by LLM agents. Always so early, very positive, and hard to spot. Some replace the em-dash with --, some with a single dash, some remove it altogether. I wonder how much time it is taking from @dang and the other moderators helping to maintain this community.
dang 17 hours ago
Can you mention some specific examples? If you don't want to post them here, emailing [email protected] would be good.
We recently promoted the no-generated-comments rule from case law [1] to the site guidelines [2], and we're being pretty active about banning accounts that break it.
[1] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
[2] https://news.ycombinator.com/newsguidelines.html#generated
storus a day ago
Wasn't there some recent argument that world models won't achieve AGI either, due to overlooking normative frameworks, failing to learn the fundamental symmetries of the world purely from data, and collapsing in multi-step reasoning? JEPA sacrifices fidelity for abstract representation, yet how does that help in the real world, where fidelity is the most important point? It's like relying on differential equations and soon finding out they only cover a minuscule fraction of real-world problems, and almost all interesting problems are unsolvable by them.
ardawen a day ago
Does anyone have a sense of how funding like this is typically allocated? How much tends to go toward compute/training versus researchers, infrastructure, and general operations?
whiplash451 a day ago
A fair amount of negative comments here, but Yann might very well be the person who brings the Bell Labs culture back to life. It’s been badly missing, and not just in Europe.
tellarin 16 hours ago
Shameless plug here... Some collaborators and I just released a first version of a benchmark we think highlights a critical gap in recent models' understanding of causality in the real world, beyond a narrow physics focus.
Everyday environments are rich in tangible control interfaces (TCIs), like light switches, appliance panels, and embedded GUIs, that are designed for humans and demand not only commonsense and physics reasoning but also causal prediction and outcome verification in time and space (e.g., delayed heating, remote lights).
SWITCH: Benchmarking Modeling and Handling of Tangible Interfaces in Long-horizon Embodied Scenarios (https://huggingface.co/papers/2511.17649)
Feedback, suggestions, and collaborators are very welcome!
JhonOliver 4 hours ago
This is so stupid. AI already understands the physical world. What it can't do is interact with it. There's a hardware bottleneck. It simply isn't responsive.
catigula 2 hours ago
At this point, given that we basically literally have AGI, pursuing other avenues seems like an interesting approach.
npn a day ago
I wish him luck.
Recently all papers are about LLMs; it brings on fatigue.
As GPT-style models near their limits, a new architecture could lead to new discoveries.
noiv 10 hours ago
Wouldn't that involve reading and understanding an enormous amount of sensor data?
LarsDu88 a day ago
There's been a few very interesting JEPA publications from LeCun recently, particularly the leJEPA paper which claims to simplify a lot of training headaches for that class of models.
JEPAs also strike me as being a bit more akin to human intelligence, where for example, most children are very capable of locomotion and making basic drawings, but unable to make pixel level reconstructions of mental images (!!).
One thing I want to point out is that LeCun-style techniques demonstrating label-free training, such as JEAs like DINO and JEPAs, have been converging on the performance of models that require large amounts of labeled data.
Alexandr Wang is a billionaire who made his wealth through a data labeling company and basically kicked LeCun out.
Overall this will be good for AI and good for open source.
redgridtactical 19 hours ago
Refreshing to see some competition to the US AI scene. It's been the same three models trying to one up each other by copying and tweaking rather than pushing true innovation
mmaunder a day ago
That's between 1 and 10 training runs on a large foundational model, depending on pricing discounts and how much they manage to optimize it. I priced this out last night on AWS, which is admittedly expensive, but models have also gotten larger.
SilentM68 3 hours ago
Nice, all avenues should be explored. I'm for anything that leads to real solutions, cures, knowledge :)
pingou a day ago
Yann LeCun has said a number of things that are very dubious, like autoregressive LLMs being a dead end, LLMs not having an internal world model, and this morning https://www.youtube.com/watch?v=AFi1TPiB058 (in French) that an AI cannot find a strategy to preserve itself against the will of its creator.
As a Frenchman, I wish him good luck anyway; I'm all for exploring different avenues to AGI.
manojbajaj95 a day ago
I attended a talk by Yann LeCun, and he always had strong opinions about auto-regressive models. It's nice to see someone not just chasing hype but doing more research.
margorczynski a day ago
He couldn't achieve even parity with LLMs during his days at Meta (despite most probably having billions in resources at his disposal), but he'll succeed now? What is the pitch?
samrus a day ago
The pitch isn't to try to squeeze money out of a product like Altman does. It's to lay the groundwork for the next evolution in AI. LLMs were built on decades of work, and they've hit their limits. We'll need to invest a lot of time building foundations, without any tangible yield, for the next step to work. Get too greedy and you'll be stuck.
ernsheong 15 hours ago
One wonders why this sort of research isn’t in academia but in startups instead.
chabons 13 hours ago
Where in academia can one get a Billion (with a b) dollars to research something?
htrp a day ago
Impressive that the round was 100% oversubscribed, but to be expected when it's the prof who trained a good chunk of the current AI founders.
Toto336699 10 hours ago
Following in the footsteps of Fei-Fei Li's World Labs?
They are currently estimated to be at a 5bn valuation.
groundzeros2015 20 hours ago
Well, he will need to spend a lot less time on Twitter to be successful in a new venture.
w4yai a day ago
Europe becoming really attractive right now!
sofixa a day ago
Alternative free to read article: https://sifted.eu/articles/yann-lecun-ami-labs-meta-funding-...
blobbers 15 hours ago
Am I going to finally get a robot to fold my clothes?
whyleyc a day ago
yalogin a day ago
This feels like a more justified investment, as it's trying to move the needle. Hope he succeeds.
secondary_op a day ago
That being said, Yann LeCun's Twitter reposts are below-average IQ.
goldenarm a day ago
Do you have a recent example?
lazyguythugman 8 hours ago
I've been following him on X for a while. I'm surprised he has time for this, because he is always retweeting anti-Trump stuff all day, every day.
itigges22 a day ago
I just saw a post from Yann mentioning that AMI Labs is hiring too!
semiinfinitely a day ago
should probably just link to the actual site: https://amilabs.xyz/
saxwick a day ago
It’s 4.7B actually, he confirmed it here https://x.com/ylecun/status/2031331124450931058?s=46
ardawen a day ago
That seems to be the valuation, not how much they raised afaik.
hinkley 20 hours ago
The better to make paperclips, my dear!
owlcompliance a day ago
I raised $1 to understand your physical world.
levodelellis a day ago
I have no faith in anyone doing AI to accomplish anything (especially relative to how much money they spend) except John Carmack. People should be trying to throw money at him
taytus 9 hours ago
He raised $1B, but couldn't OpenAI, Google, or Anthropic try similar approaches? Lack of funding isn't a problem those companies have. Why wouldn't they also spend $1B, or five times that, and outcompete him (in theory)?
taint69 a day ago
WE HAVE RAISED A BILLION DOLLARS
but you don’t even have a product
/cape
ruler88 a day ago
Meta's greatest loss of the decade
sbcorvus a day ago
More research on more models = more betta
carabiner 20 hours ago
I wonder how Carmack's AGI work is going. He's been quiet for a while.
cmrdporcupine a day ago
Looks like they'll be hiring in Montreal in addition to Paris (and NYC and Singapore): https://jobs.ashbyhq.com/ami
I hope they grow that office like crazy. This would be really good for Canada. We have (or have had) the AI talent here (though maybe less so overall in Montreal than in Toronto/Waterloo and Vancouver and Edmonton).
And I hope Carney is promoting the crap out of this and making it worth their while to build that office out.
I don't really do Python or large scale learning etc, so don't see a path for myself to apply there but I hope this sparks some employment growth here in Canada. Smart choice to go with bilingual Montreal.
compounding_it 15 hours ago
Montreal and Paris mean Europeans and the French can move in and out when it comes to hiring. I really like how the world has interest in the EU, Canada, and Australia now that the West has become unstable for immigration.
sofixa a day ago
If he's right (that LLMs cannot achieve AGI, but what he's working on can, and does), this would be huge for AI and humanity at large.
Hope it puts to bed the "Europe can't innovate" crowd too.
bluefirebrand a day ago
I'm still just so surprised any time I encounter people who think AI will be overall good for humanity
I pretty strongly think it will only benefit the rich and powerful while further oppressing and devaluing everyone else. I tend to think this is an obvious outcome and it would be obviously very bad (for most of us)
So I wonder if you just think you will be one of the few who benefit at the expense of others, or do you truly believe AI will benefit all of humanity?
sofixa a day ago
> So I wonder if you just think you will be one of the few who benefit at the expense of others
It's not a zero sum game, IMO. It will benefit some, be neutral for others, negative for others.
For instance, improved productivity could be good (and doesn't have to result in layoffs; Jevons paradox will come into play, IMO, with increased demand). Easier/better/faster scientific research could be good too. Not everyone would benefit from those, but not everyone has to for it to be generally good.
Autonomous AI-powered drone swarms could be bad, or could result in a Mutually Assured Destruction stalemate.
bluefirebrand a day ago
AndrewKemendo a day ago
sylware a day ago
If, for even one second, they get into a position that threatens Big Tech AI (mostly if not entirely US-based) in any way, they will be raided by international finance: dismantled and poached hardcore by massive US "investment funds" (which look more and more like weaponized international finance). Only China is largely immune to international finance. Those funds have tens of thousands of billions of dollars; in a world of money, there is near-zero resistance.
ismailmaj a day ago
I don't see a world where they become threatening and the employees don't become rich from investors flooding in.
sylware a day ago
Where have you been in the last 2 decades?
ismailmaj a day ago
_giorgio_ 9 hours ago
LeCun has had every advantage imaginable — and the scoreboard remains empty.
He joined Facebook (now Meta) in December 2013. That's over 12 years of access to one of the largest AI labs in the world, near-unlimited compute, and some of the best researchers money can buy.
He introduced I-JEPA in 2023, nearly 3 years ago. It was supposed to represent a fundamental shift in how machines learn — moving beyond generative models toward a deeper, more structured world understanding.
And yet: I-JEPA hasn't decisively beaten existing models on any major benchmark. No Meta product uses JEPA as a core approach. The research community hasn't adopted it — the field keeps pushing on LLMs and diffusion models. There's been no "GPT moment" for JEPA, no single result that made its value obvious to everyone.
So the question becomes simple: how many years, how many resources, and how many failed proof-of-concepts does it take before we're allowed to judge whether an idea actually works?
snek_case 7 hours ago
First, believe it or not, 3 years is not that long. It's also not a given that LeCun was given the resources he needed to work on this tech at Meta. Zuck wanted another llama.
Second, AMI Labs just secured a billion in funding, and while that's a lot of money, it's literally just a fraction of the yearly salary they are paying to Wang. Big tech companies are literally throwing tens of billions to keep doing the same thing, just on a bigger scale. Why not try something else once in a while?
rvz a day ago
Once again, US companies and VCs are in this seed round. Just like Mistral with their seed round.
Europe again missing out, until AMI reaches a much higher valuation with an obvious use case in robotics.
Either AMI reaches over $100B+ valuation (likely) or it becomes a Thinking Machines Lab with investors questioning its valuation. (very unlikely since world models has a use-case in vision and robotics)
embedding-shape a day ago
> Europe again missing out
I can't read the article, but American investors investing into European companies, isn't US the one missing out here? Or does "Europe" "win" when European investors invest in US companies? How does that work in your head?
joe_mamba a day ago
>isn't US the one missing out here?
Why would the US miss out here? The US invests in something = the US owns part of something.
This isn't a zero sum game.
embedding-shape a day ago
thibaut_barrere a day ago
It is enough to attract worthy talent and produce interesting outcomes.
myth_drannon a day ago
This could have been 1000 seed rounds. We are creating technological deserts by going all-in on AI and star personalities.
dmix a day ago
There seems to be no shortage of capital in the global market.
net01 a day ago
Because for these investors the opportunity cost of this is higher than other startups.
I agree with you; there should be more diversity in investments in EU startups, but ¯\_(ツ)_/¯ not my money.
general1465 a day ago
Here you can see why it is so hard for a European startup to compete with US startups: abysmal access to money. An investment of $1B in Europe is glorified as the largest seed ever, but in the USA it is just another Tuesday.
weego a day ago
A billion seed is not an every day event anywhere.
mattmaroon a day ago
Not at all. A quick google turns up evidence of 4. There may be more but I think probably not many.
s08148692 a day ago
For a foundation AI lab with a world-famous AI researcher at the helm, though, it's not so impressive. It won't even touch the sides of the hardware costs they'd need to be anywhere near competitive.
compounding_it a day ago
Europeans have free healthcare and retirement. They prefer putting their money toward long-term benefits, not becoming CEO on Tuesday and declaring bankruptcy on Wednesday.
general1465 a day ago
It is not free, we just pay taxes.
ExpertAdvisor01 a day ago
ExpertAdvisor01 a day ago
Free healthcare and retirement?
ExpertAdvisor01 a day ago
MrBuddyCasino a day ago
„free“
oceansky a day ago
A startup reaching a $1B valuation is so rare that such companies are called unicorns.
As the other commenter pointed out, this is a $1B seed.
ArnoVW a day ago
Actually, they raised $1.03 billion at a $3.5 billion valuation.
dude250711 a day ago
Yes, the faster they get used to the thought that losing a billion is not a big deal, the better.
abmmgb a day ago
Not based on true valuation, unless h-index has become a valuation metric lol.
Academics don't always make great entrepreneurs.
YackerLose a day ago
AI is developing backwards. The simplest organisms eat and find food. More complex ones can smell and sense tremors. Only after several steps of evolution come vision and complex thought.
AIs that can't smell, can't feel hunger, can't desire -- I do not think they can understand the world the way organic life does.
bluesounddirect 8 hours ago
Yann is going to sell you the opportunity to sell people the opportunity of better AI.
mentalgear a day ago
Adds up: we are seeing a clear exodus of both capital and talent from the US - with the current US administration's shift toward cronyism - and the EU stands as the most compelling alternative, with a uniform market of 500 million people and the last major federation truly committed to the rule of law.
drstewart a day ago
"Exodus of capital" as if OpenAI didn't just raise 115b
gmerc a day ago
That's a bonfire of capital into a gaping hole in the ground, with zero chance outside of "military pork" and "overcharging the taxpayer" of ever making their money back. The loss of brain capital here is what's going to spook investors.
whiplash451 a day ago
You lost me at “uniform”…