When models manipulate manifolds: The geometry of a counting task (transformer-circuits.pub)
67 points by vinhnx 5 days ago
Rygian 3 hours ago
> The task we study is linebreaking in fixed-width text.
I wonder why they focused specifically on a task that is already solved algorithmically. The paper does not seem to address this, and the references do not include any mentions of non-LLM approaches to the line-breaking problem.
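For reference, the known algorithmic solution is the standard greedy linebreaker: keep appending words to the current line until the next word would overflow the width. A minimal Python sketch (the function name and example are mine, not from the paper):

    def greedy_linebreak(words, width):
        """Greedy fixed-width linebreaking: start a new line only when
        adding the next word would push the current line past `width`."""
        lines, current = [], ""
        for word in words:
            if not current:
                current = word
            elif len(current) + 1 + len(word) <= width:
                current += " " + word
            else:
                lines.append(current)
                current = word
        if current:
            lines.append(current)
        return lines

    print(greedy_linebreak("the quick brown fox jumps over the lazy dog".split(), 15))
    # ['the quick brown', 'fox jumps over', 'the lazy dog']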
Legend2440 3 hours ago
They study it because it already has a known solution.
The point is to see how LLMs implement algorithms internally, starting with this simple, easily understood algorithm.
Rygian 2 hours ago
That makes sense; however, it does not seem like they check the LLM's outputs against the known solution. Maybe I missed that in the article.
omnicognate 3 hours ago
There's also a lot of analogising of this to visual/spatial reasoning, even to the point of talking about "visual illusions", when it's clearly a counting task, as the title says.
It makes it tedious to figure out what they actually did (which sounds interesting) when it's couched in such terms and presented in such an LLMified style.
dist-epoch an hour ago
It's not strictly a counting task: the LLM sees same-sized tokens, but each token corresponds to a variable number of characters (which is not directly fed into the model).
It's like the difference between Unicode code points and UTF-8 bytes: you can't just count UTF-8 bytes to know how many code points you have.
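A quick Python illustration of that mismatch (my own example, not from the paper):

    s = "héllo"            # 5 Unicode code points
    b = s.encode("utf-8")  # 'é' encodes to 2 bytes (0xC3 0xA9)
    print(len(s))          # 5 code points
    print(len(b))          # 6 bytes

The model's problem is analogous: it has to recover a character count from token identities alone.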
lccerina 3 hours ago
Utter disrespect for using the term "biology" in relation to LLMs. No one would call the analysis of a mechanical engine "car biology". It's an artificial system; call it system analysis.
lewtun an hour ago
The analogy stems from the notion that neural nets are "grown" rather than "engineered". Chris Olah has an old but good post with some specific examples: https://colah.github.io/notes/bio-analogies/