Filesystems are having a moment (madalitso.me)

161 points by malgamves 12 hours ago

staplung 3 hours ago

Not knocking the article in any way but from the headline I was expecting - perhaps hoping - this would be about some innovation in filesystems research like it was the 90's again. That's not what this is.

It's about how filesystems as they are (and have been for decades) are proving to be powerful tools for LLMs/agents.

alecco 2 hours ago

And by filesystem they mean CLI (command line interface) and a full *nix system. Like the hundreds of similar articles about it for the past year said.

Gigachad an hour ago

I feel like every article on HN now disguises itself as interesting but the content is just the same boring AI slop.

palata 31 minutes ago

I have been reading HN for a few years, and my feeling is that I find fewer and fewer interesting articles. Maybe it's just me, and the average article quality is the same as ever.

Now I tend to skim through it to see if a title looks like it may bring interesting discussions, and then I skim through the discussions. Because there are very knowledgeable people who sometimes share valuable insights.

Interestingly, last time I asked a question, hoping to get interesting people to share insights, I was answered that I "should learn how to use an LLM instead of asking questions" :-).

fragmede 2 hours ago

Yeah, none of it was really about file systems. There was a brief mention that file systems look like a graph, and that you build roughly an index so it looks graph- and thus database-y, but you could store everything this post said about file systems in a sqlite database with a column called filename and a column called content. I too was expecting something more in depth about file systems; for instance, cluster file systems have seen little to no advancement. ZFS is not a cluster file system, and we've been needing a good one of those for decades, ever since VMs became feasible on consumer-grade hardware. Still, files on disk is better than having to pay Oracle a per-skill fee on today's modern, open Internet. That was never going to happen.

mangogogo 3 hours ago

i was hoping the same, but then it turned out to be another article about LLMs.

tacitusarc 7 hours ago

Does everyone just use AI to write these days? Or is the style so infectious that I just see it everywhere? I swear there needs to be some convention around labeling a post with how much AI was used in its creation.

heavyset_go 5 hours ago

I'd be embarrassed to put my name on AI prose without a disclaimer, and I'd also be annoyed to read it as a reader.

IMO it's insulting to the audience; it says the reader's time and attention are not worth the author's own time and attention spent putting their thoughts in their own words.

If you're going to do that, at least mention it's LLM output, or just give me your outline prompts. I don't care what your LLM has to say; I can run your outline through my own model myself if I feel like it.

josephg 2 hours ago

> If you're going to do that at least mention it's LLM output

Yes, this! Please label AI generated content. Pull request written by an AI? Label it as ai generated. Blog post? Article generated with AI? Say so! It’s ok to use AI models. Especially if English is your second language. But put a disclaimer in. Don’t make the reader guess.

Eg:

> This content was partially generated by chatgpt

Or

> Blog post text written entirely by human hand, code examples by Claude code

fragmede an hour ago

Have any outlines you'd care to share?

coliveira 4 hours ago

I'm not a fan of AI and try to avoid it, but there is a difference between AI output published by someone knowledgeable and AI output that you run yourself. If an expert looked at the result and found it to be OK, then you have some assurance that it at least makes sense. Your own AI run doesn't mean anything; it could be 100% hallucination, and a non-expert will buy it as truth.

sethev 6 hours ago

LLMs were trained on stuff that people wrote. I get there are "tells", but don't really think people are as good at identifying AI generated text as they think they are...

afro88 4 hours ago

I wouldn't have picked this article as AI until I got an agent to do some writing for me and read a bunch of it to figure out if I can stand behind it. Now I see the tells everywhere "It's not this. It's that." is particularly common and I can't unsee it. (FWIW I rewrote most of the writing it generated, but it did help me figure out my structure and narrative)

The problem I think with AI generated posts is that you feel like you can't trust the content once it's AI. It could be partly hallucinated, or misrepresented.

antonvs 5 hours ago

Good chunks of the article don't trigger this for me, but I would bet money on the final paragraph involving AI:

> That's not a technical argument. It's a values argument. And it's one that the filesystem, for all its age and simplicity, is uniquely positioned to serve. Not because it's the best technology. But because it's the one technology that already belongs to you.

adi_kurian 2 hours ago

Contractions

computably 2 hours ago

You don't have to be good at identifying AI generated text to detect low-effort slop.

malgamves 4 hours ago

As the author I can assure you there's a human behind these words. Interesting times we live in, though. I often find myself questioning what's AI and what's not too, and at the moment we've offloaded that responsibility to the good will of authors or platform policy, which might have to change soon.

green-salt an hour ago

Nice dodge! Unfortunately, this made it more obvious.

meindnoch 3 hours ago

"there’s a human behind these words"

That's a bit vague. Was the article written without the aid of LLMs? Yes or no.

jonmagic 21 minutes ago

I thought it was a great post tying together a lot of things I've been reading and thinking about. I couldn't care less if you used AI, if it helps my brain expand and/or make connections I wouldn't have otherwise.

lovecg 3 hours ago

As in, you used 0 AI to write or edit this text? Or some AI? I’d like to calibrate myself.

q3k 7 hours ago

Everyone's trying to be the new thought leader enlightened technical essayist. So much fluff everywhere.

orsorna 7 hours ago

What's wild is that with a few minutes of manual editing it would give exponential returns. For instance, a lead sentence in a section saying "here's why X" that was already covered by the subheading is unnecessary and could have been wholly removed.

idiotsecant 6 hours ago

This doesn't seem particularly AI slopped to me.

einr 4 hours ago

"Not bigger than databases. Different from databases.

It's not a website you go to — it's a little spirit that lives on your machine.

Not a chatbot. A tool that reads and writes files on your filesystem.

That's not a technical argument. It's a values argument."

goodmythical 5 hours ago

Does everyone just complain about people using the tools they like to use these days? Or is the style so infectious that I just see it everywhere? I swear there needs to be some convention around labeling a post with how much whining was used in its creation.

panarky 3 hours ago

Does everyone just easily accuse genuine, literate humans of "cheating" with AI when there's no way they could know that?

There are a lot of unique aspects of the writing in this post that LLMs don't typically generate on their own.

And there's not a "delve" or "tapestry" or even a bullet point to be found.

Also, accusations and complaints like this are off-topic and uninteresting.

We should be talking about filesystems here, not your gut instinct AI detector that has a sky-high false-positive rate.

I swear there needs to be some convention around throwing wild accusations at people you don't know based exclusively on vibes and with zero actual evidence.

korbatz 7 hours ago

I was having the exact same observation, albeit from a slightly different perspective: SaaS. There, while the code tends to be temporary and very domain-specific, the data (files) must strive to be boring standards.

The problem today is that we build specific, short-lived apps that lock data into formats only they can read. If you don't use universal formats, your system is fragile. We can still open JPEGs from 1995 because the files don't depend on the software used to make them. Using obscure or proprietary formats is just technical debt that will eventually kill your project. File or forget.

Gigachad 20 minutes ago

The frustrating thing about photo management these days is that every major photo library app/cloud service stores every edit / tag / album externally. If you crop a photo, change the taken-at date, etc., the original file never gets touched but an external bit of metadata is created. So any time you move platforms, all of these edits and your albums are erased.

It is convenient to be able to undo crops or filters, but I wish the industry would standardize so these changes are portable.

jmathai 6 hours ago

My 10+ year old photo management system [1] relies on the file system and EXIF as the source of truth for my entire photo library.

It’s proven several times over that it’s the correct approach. Abstractions (formerly Google photos, currently Immich) should just be built on top - but these proprietary databases are only for convenience.

For work, I’m having the same experience as the author and everything is just markdown and csv files for Claude Code (for research and document writing).

[1] https://github.com/jmathai/elodie

whartung 4 hours ago

I know some systems leverage the modern file meta data (extended attributes), but it's clearly not successful enough that folks can use them for an application like this.

Ostensibly, things like MacOS Spotlight can bring real utility and value to the file system, and extended attributes through the sidecar indexing, etc. But Spotlight is infamous for its unreliability.

The other issue with file systems is simply that the user (potentially) has "direct access" to them, in that they can readily move files up, in, and around whimsically. The "structure" is laid bare for them to potentially interfere with, or, as with extended attributes, drag a file to a USB fob and then copy it back -- inadvertently removing those attributes.

And that's how we end up with everything stuffed into a SQLite DB.

zenoprax 4 hours ago

I have your repo starred from a post/comment you made a few weeks ago but haven't had time to actually use/integrate it with my own stuff.

What are your thoughts on XMP sidecar files? I'm torn right now between digital negative + external metadata versus all-in-one image with mutable properties. Portability vs. Durability etc.

alanbernstein 4 hours ago

Thanks for sharing, I might have too much NIH syndrome to use it but I'd love to check it out.

hmokiguess 6 hours ago

Notable mention: Plan 9 from Bell Labs.

https://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs

mieubrisse 3 hours ago

I'm building an agent orchestrator (plug: https://github.com/mieubrisse/agenc) and asked Claude what prior art exists.

It pulled back Plan 9, and I was shocked: this is exactly what we need today, as I'm convinced we need to think about minimizing agent permissions the exact same way companies do. Plan 9 was just too early.

packetlost 3 hours ago

We once again discover that Plan9 and UNIX were right. The most powerful, lowest common denominator interface is text files exposed over a file system. Now to get back to making 9p2026.

The article gets some fundamentals completely wrong, though: file systems are full graphs, not strict trees, and are definitely not acyclic.

andai 12 minutes ago

So what are Plan 9's killer features, and can they be bolted on with FUSE or is there a deeper magic at play?

largbae 3 hours ago

I think this article just speaks to the immaturity of our use of AI at this "moment."

Production grade systems might be written by agents running on filesystem skills, but the production systems themselves will run on consistent and scalable data structures.

Meanwhile the UI of AI agents will almost certainly evolve away from desktop computers and toward audio/visual interfaces. An agent might get more context from a zoom call with you, once tone and body language can be used to increase the bandwidth between you.

andai 10 minutes ago

https://www.youtube.com/watch?v=GH9-EmgtABw

Saw this video recently, by an AI company working to get contextual cues from tone and body language. I think they're converting it to text and feeding it into a LLM, so not natively multimodal, but I still thought it was really cool.

fragmede 34 minutes ago

I don't think written prompting will ever go away. Writing helps you organize your thoughts in a way that speaking, umm, ah, wait no, hang on, does not. With writing I can go back and change what I've already written before I hit send. Anybody who's prompted with speech for any length of time has been through "wait no nevermind start over". So STT will get better, sure; it's already quite good. I just don't see text entry entirely going away, because Human Intelligence (HI) just doesn't work in a way where speech would be the only interface.

MarkMarine 4 hours ago

Over a number of files similar to a codebase, that are well organized (like a codebase) the coding agents and harnesses are quite good at finding information, they clearly train on them so they will only improve.

The challenge is how to structure messy data as a filesystem the agent can use. That is a lot harder than querying a vector db for a semantic query.

The code bases we've been using agents in have been pruned and maintained over years; we've got principles like DRY that pushed us to put the answer in one place… implicitly building and maintaining that graph, with all the actors in the system invested in maintaining it. This is not the case for messy data, so while I see the author's point and agree that a filesystem is a better structure for context over time, we haven't supplanted search yet for non-code data.

dzello 7 hours ago

Resonates deeply with me. I’ve moved personal data out of ~10 SaaS systems into a single directory structure in the last year. Agents pay a higher price for fragmentation than humans. A well-organized system of files eliminates that fragmentation. It’s enough for single player. I suspect we’ll see new databases emerge that enable low multi-player (safe writes etc) scenarios without making the filesystem data more opaque. Not unlike what QMD is for search.

_pdp_ an hour ago

In other words, file systems are an excellent way to organise information. I mean, yeah - we've been using them forever.

File systems are not a good abstraction mechanism for remote procedure calls, though. I think it's important to distinguish between the two, since I find there are a lot of articles conflating both - comparing MCPs to SKILLs, which are completely different things.

I think the confusion comes from the fact that MCP came before SKILLs, and there's a mental model where SKILLs are somehow "better than" MCPs. This is like saying local Word documents are better than a fully integrated collaborative office suite. It's just not the same thing.

The reason SKILLs work so well is because there's 50 years of accumulated knowledge of how to run rudimentary Unix tools.

The TL;DR:

File systems - organising information
MCP/APIs - remote procedure calls

JoeAltmaier 3 hours ago

Digression: a file system is a terrible abstraction. The ceremonial file tree, where branches are directories and you have to hang your file on a particular branch like a Christmas ornament.

Relational is better. Hell, any kind of unique identifier would be nice. So many better ways to organize data stores.

zarzavat 3 hours ago

Filesystems have a property that changes preserve locality. A change made to one branch of the tree doesn't affect other branches (except for links). Databases lack this property: any UPDATE or DELETE can potentially affect any row depending on the condition. This makes them powerful but also scary. I don't want that every time I delete a file it potentially does a rm -rf / if I mistype the query.

The best compromise is what modern OSs have: a tree-like structure to store files but a database index on top for queries.

JoeAltmaier 2 hours ago

You can create the tree structure from a relation. It's not a primitive data store operation at all. Just add a "parent directory" attribute and voila.

So often we want to look up 'the last file I printed' or 'that message I got from Bob'. Instead of just creating that lookup, we have to go spelunking.

Hell, every major app creates its own abstractions because the OS/filesystem doesn't have anything useful. Email systems organize messages and tags; document editors have collections of document aspects they store in a structured blob. Instead of asking the OS to do that.
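The "tree from a relation" idea above can be sketched in a few lines: a hypothetical files table with a parent column is enough to recover the familiar path hierarchy (table and column names are made up for illustration).

```python
import sqlite3

# Sketch of "the tree is just a relation with a parent attribute".
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE files (id INTEGER PRIMARY KEY, name TEXT, parent INTEGER)")
db.executemany("INSERT INTO files VALUES (?, ?, ?)", [
    (1, "home", None),
    (2, "bob", 1),
    (3, "letter.txt", 2),
])

def path_of(file_id):
    # Walk up the parent chain with a recursive CTE; the path is
    # derived from the relation, not primitive to the store.
    rows = db.execute("""
        WITH RECURSIVE ancestry(id, name, parent) AS (
            SELECT id, name, parent FROM files WHERE id = ?
            UNION ALL
            SELECT f.id, f.name, f.parent
            FROM files f JOIN ancestry a ON f.id = a.parent
        )
        SELECT name FROM ancestry
    """, (file_id,)).fetchall()
    return "/" + "/".join(name for (name,) in reversed(rows))

print(path_of(3))  # /home/bob/letter.txt
```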

p_ing 3 hours ago

NTFS has a database, the MFT. It can index attributes, such as file names, which are a b+tree. A file's $DATA is also placed into the MFT, unless it doesn't fit, then NTFS allocates virtual cluster numbers (more MFT attributes) which point to the on-disk data structure of the file.

All files are represented in a table with rows and columns. "Directories" simply have a special "directory = true" attribute in a row (simplified).

The hierarchy is for you, the human.

Like many file systems, NTFS also contains a log for recoverability/rollback purposes.

It's not quite relational, but it doesn't make sense for it to be relational. Why would you need more than one 'table' to contain everything you need to know about a file? Microsoft experimented with WinFS, which wasn't a traditional file system (it was an MSSQL database with BLOB storage which sat on top of a regular NTFS volume). Performance was bad, and SkyDrive replaced the need for it (in the view of MSFT).

dist-epoch 2 hours ago

The newest Microsoft filesystem, ReFS, removes the MFT, because it created a lot of problems.

packetlost 3 hours ago

Files in most file systems are uniquely identified by inode and can be referenced by multiple names. Why does everyone forget links?
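The inode/link point is easy to demonstrate: a hard link is just a second directory entry for the same file object. A small POSIX-flavored sketch (it assumes the temp directory's filesystem supports hard links):

```python
import os
import tempfile

# Two directory entries, one inode.
d = tempfile.mkdtemp()
a = os.path.join(d, "original.txt")
b = os.path.join(d, "alias.txt")

with open(a, "w") as f:
    f.write("same bytes, two names\n")

os.link(a, b)  # create a second name for the same inode

# Both names resolve to one file object on disk.
assert os.stat(a).st_ino == os.stat(b).st_ino
assert os.stat(a).st_nlink == 2
```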

JoeAltmaier 33 minutes ago

A dataset can persist across multiple file systems. A UUID is a way to know that one dataset is equivalent (identical) to another. Now you can cache, store-and-forward, archive and retrieve and know what you have.

mieubrisse 3 hours ago

I've been wondering this too: for us, UUIDs are super opaque, but for an agent, two UUIDs are as distinct as day and night. Is the best filesystem just blob storage, S3-style, with good indexes and a bit of context on where everything lives?

One thing directories solve: they're great grouping mechanisms. "All the Q3 stuff lives in this directory"

I bet we move towards a world where files are just UUIDs, then directory structures get created on demand, like tags.

para_parolu 3 hours ago

A filepath is just a unique name that the model can identify easily, and it conveys grouping. A UUID solves nothing but requires another mapping from file to short description.

JoeAltmaier 2 hours ago

Or, have a "Q" attribute and ask the file store for "Q=3".

All good.

zmmmmm an hour ago

I don't think there's a lot that's magical about files beyond (a) they are native for LLMs and coding, because both process text, and (b) when things are rapidly in flux, unstructured formats prosper because flexibility is king. Literally any fixed format you try to describe becomes rapidly outdated and fails to serve the purpose. For example, it feels like MCP is already ageing like milk.

Which is mainly to say, trust me, this is a temporary state; the god of complexity is coming. It is utterly inevitable. The people who created React, Kubernetes, all those Java frameworks you hated, etc. didn't go away. They are right now thinking about how amazing it would be if you stacked ten different tools together with brand-new structured file formats and databases. We already have "beads" and "gastown", where this is starting. Enjoy these times, because a couple of years from now it will already be the end of the "fun" part, I think.

leonflexo 5 hours ago

I wonder how much of a lost-in-the-middle effect there is, and whether there could be (or are) tools that specifically optimize post-compaction "seeding". One problem I've run into with open spec is that after a compaction, or when kicking off a new session, it is easy to start out already ~50k tokens in, and I assume somewhat more vulnerable to lost-in-the-middle effects before any actual coding has taken place.

ramoz 6 hours ago

I think the real impact behind the scenes here is Bash(). Filesystem relevance is a bit coincidental to placing an agent on an operating system and giving it full capability over it.

stephbook an hour ago

I'm not too deep into agentic coding, but I haven't understood why people write `SOUL.md` files like there's no tomorrow. Does anyone think these will still be called the same three years from now?

If you've got a coding convention, enforce it using a linter. Have the LLM write the rules and integrate it into the local build and CI tool.

Has no one ever thought about how – gasp – a future human collaborator would be onboarded?

0xbadcafebee 4 hours ago

Can we bring back Plan9 architecture now? It had what was essentially MCP. You make a custom device driver, and anything really can be a file. Not only that, but you network them, so a file on local disk could be a display on a remote host (or whatever). Just tell the agent to read/write files and it doesn't need to figure out either MCP or tool calls.

bnjms 3 hours ago

This seems like the place to ask: what other big ideas have there been since everything-is-a-file? I'm not aware of any. And it seems like we want another layer of permissions on device & data access that we didn't have before.

jmclnx 8 hours ago

Funny, decades ago (mid-80s), I had to write a one-time fix on what would now be a very low-memory system; the data in question had a unique key of 8 7-bit ASCII characters.

Instead of reading multi-meg data into memory to determine what to do, I used the file system: the program would store data related to the key in subdirectories instead. The older people saw what I did and thought it was interesting. With development time factored in, doing it this way ended up being much faster and avoided memory issues that would otherwise have occurred.

So with AI, back to the old ways I guess :)
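The trick described above, letting directories act as the index instead of loading the data into memory, might look something like this today. The two-level fan-out and key format below are my assumptions; the original layout isn't specified.

```python
import os
import tempfile

root = tempfile.mkdtemp()

def key_path(key):
    # e.g. "AB12CD34" -> <root>/AB/12/AB12CD34 (fan-out is an assumption)
    return os.path.join(root, key[:2], key[2:4], key)

def put(key, value):
    # The filesystem itself is the index: the key determines the path.
    path = key_path(key)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        f.write(value)

def get(key):
    with open(key_path(key)) as f:
        return f.read()

put("AB12CD34", "record for AB12CD34")
print(get("AB12CD34"))
```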

bsenftner 6 hours ago

Reminds me of early data-driven approaches. Early CD-based game consoles had memory constraints, which I sidestepped by writing the most ridiculously simple game engine: the game loop was all data-driven, and "going somewhere new" in the game simply triggered a disc read given a raw sector offset and a number of sectors. That read was then a repeated series of bytes to be written at the memory address given by the first 4 bytes read, with the next 4 bytes giving how many bytes to copy. That simple mechanism, paired with a data organizer for creating the disc images, enabled some well-known successful games to have "huge worlds" with an executable under 100K, leaving the rest of the console's memory for content assets, animations, whatever.

alexjplant 5 hours ago

Which games were these out of interest? I enjoy reading about game dev from the nascent era of 3D on home consoles (on the Saturn in particular) and would love to hear more.

TacticalCoder 7 hours ago

As TFA basically says: files on a filesystem are a DB. Just a very crude one. There aren't nice indexes for a variety of things, and "views" are not really there (arguably you can create different views with links, but that's, once again, very crude). But it's definitely a DB, represented as a tree, as TFA mentions.

My life's data, including all the official stuff (bank statements, notary acts, statements made to the police [witness, etc.], insurance, property titles), all my coding projects, all the family pictures (not just the ones I took), and all the stuff I forgot, is in files, not in a dedicated DB. But these files are definitely a database.

And because I don't want to deal with data corruption, and even less with syncing now-corrupted data, many of my files contain, in their filename, a partial cryptographic checksum. E.g. "dsc239879879.jpg" becomes "dsc239879879-b3-6f338201b7.jpg" (meaning the Blake3 hash of that file has to begin with 6f338201b7 or the file is corrupted).
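A rough sketch of that naming scheme in Python. The commenter uses Blake3; since blake3 is a third-party package, the stdlib's blake2b stands in here (hence a hypothetical "-b2-" tag instead of "-b3-"):

```python
import hashlib
import re

def tag(name, data):
    # Embed the first 10 hex chars of the hash in the filename.
    stem, _, ext = name.rpartition(".")
    digest = hashlib.blake2b(data).hexdigest()[:10]
    return f"{stem}-b2-{digest}.{ext}"

def check(name, data):
    # The file is intact iff its hash still starts with the embedded tag.
    m = re.search(r"-b2-([0-9a-f]{10})\.", name)
    return bool(m) and hashlib.blake2b(data).hexdigest().startswith(m.group(1))

name = tag("dsc239879879.jpg", b"...jpeg bytes...")
assert check(name, b"...jpeg bytes...")        # intact
assert not check(name, b"...flipped bits...")  # corrupted
```

Because the tag travels with the filename, the check works anywhere the file is copied to, with no sidecar metadata to lose.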

At any time, if I want to, I can import these into "real" dedicated DBs. For example I can pass my pictures read-only to "I'm Mich" (Immich) and then query them: "Find me all the pictures of Eliza" or "Find me all the pictures taken in 2016 on the French Riviera".

But the real database of all my life's data is, and shall always be, files on a filesystem.

With a "real" database, a backup can be as simple as a dump. With files, backing up involves... making sure you keep a proper version of all your files.

I'd say files are even more important than the filesystem: a backup on a Blu-ray disc, an ext4-formatted SSD, an exFAT-formatted SSD, or a tape... it doesn't matter: the files are the data.

A filesystem is the first "database" with these data: a crude one, with only simple queries. But a filesystem is definitely a database.

The main advantage of this very simple database is that as long as the data are accessible, you know they are safe and can always be used to populate more advanced databases if needed.

euroderf 4 hours ago

It's not "crude" if you get hierarchical organization without having to screw around with RECURSIVE, or "closure this" and "closure that". It just works.

rzerowan 5 hours ago

Were it more portable, BeOS/Haiku's BeFS would have been a perfect fit in this instance, seeing that it is a filesystem that has database properties via extended attributes[1] and indexing.

Were Haiku more mature/stable, it would have been a nice fit as the OS for the LLM/AI personal use cases.

[1] https://arstechnica.com/information-technology/2018/07/the-b...

ciupicri 5 hours ago

Why Blake3 and not say XXH3 64/128 bits (https://xxhash.com/)?

heavyset_go 5 hours ago

You can get views by using namespaces/cgroups

istillwritecode 5 hours ago

Except Android and iOS are both trying to keep you away from your own files.

Gigachad 16 minutes ago

Kind of? iOS does have a file manager which explicitly shows you your own files; they just made a separation between OS/program files and the user's own files. What really killed files was cloud programs where multiple users can edit at the same time, which required a system more sophisticated than syncing a file.

jnsaff2 4 hours ago

Here’s me getting excited that a new file system is being developed but alas, just talk about text files.

fogzen 3 hours ago

Does this really have to do with file systems? Replacing RAG/context stuffing with tool calls for data access seems like the actual change. Whether the tool call is backed by a file system or DB or whatever shouldn’t matter, right?

galsapir 7 hours ago

nice, esp. liked - "our memories, our thoughts, our designs should outlive the software we used to create them"

SoftTalker 3 hours ago

Weird. My memories and thoughts are not created by software.

jonstewart 6 hours ago

It reminds me a lot of Hans Reiser’s original white paper, which can be found at https://web.archive.org/web/20070927003401/http://www.namesy.... Add some embeddings and boom.

naaqq 7 hours ago

This article said some things I couldn’t put into words about different AI tools. Thanks for sharing.

BoredPositron 6 hours ago

I revived my Johnny Decimal system as my single source of truth for almost anything and couldn't be happier. The filing is done mostly by agents now but I still have the overview myself.

ciupicri 5 hours ago

Could you give us more details about your system?

rafaepta 6 hours ago

Great read. Thanks for sharing

bsenftner 5 hours ago

I don't think this paradigm will last, or become the more common structure in the future. It still suffers from conflicts of persona and objective, plus it has the issue that individual apps will need protected file hierarchies to prevent malicious injections. I don't see this as a solution, just a deck-chair shuffle.

I've been researching and building with a different paradigm, an inversion of the tool calling concept that creates contextual agents of limited scope, but pipelines of them, with the user in triplicate control of agent as author, operator of an application with a clear goal, and conversationally cooperating on a task with one or more agents.

I create agents that are inside open source software, making that application "intelligent", and the user has control to make the agent an expert in the type of work that human uses the software for. Imagine a word processor that, when used by a documentation author, has multiple documentation agents that co-work with the author, while that same word processor, when used by, for example, a romance novelist, has similar agents that are expert in a different literary / document goal. Then do this with spreadsheets and project management software, and you get an intelligent office suite with amazing levels of user assistance.

In this structure, context/task specific knowledge is placed inside other software, providing complex processes to the user they can conversationally request and compose on the fly, use and save as a new agent for repeated use, or discard as something built for the moment. The agents are inside other software, with full knowledge of that application in addition to task knowledge related to why the user is using that software. It's a unified agent creation and use and chain-of-thought live editing environment, in context with what one is doing in other software.

I wrap the entire structure into a permission hierarchy that mirrors departments, projects, and project staff, creating an application suite structure more secure than this Filesystems approach, with substantially more user controls that do not expose the potential for malicious application. The agents are each for a specific purpose, which limits their reach and potential for damage. Being purpose built, the users (who are task focused, not developers) easily edit and enhance the agents they use because that is the job/career they already know and continue to do, just with agent help.

visarga 3 hours ago

Your project, while interesting as an approach, is orders of magnitude more complex than the proposition here, which is to rely on agent skills with file systems, bash, python, sed, grep, and other CLI tools to find and organize data, but also to maintain their own skills and memories. LLMs have gained excellent capabilities with files and can generate code on the fly to process them. It's people realizing that you can use a coding agent for any cognitive work, and it's better since you own the file system while easily swapping the model or harness.

I personally use a graph-like format organized as a simple text file: each node is prefixed with [id] and references other nodes inline by [id]. This works well with replace, diff, and git, and is navigable at larger scales without reading everything. Every time I start work I have the agent read it, and at the end update it. This ensures continuity over weeks and months of work. This is my take on the file system as memory: make it a graph of nodes, but keep it simple, a flat text file; don't prescribe structure, just node size. It grows organically as needed; I once got one to 500 nodes.
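A minimal sketch of that flat-file graph: nodes prefixed with [id], edges recovered from inline [id] references. The sample text and helper names below are made up; the comment only prescribes the [id] prefix and inline references.

```python
import re

# Hypothetical sample in the described format: one node per line.
text = """\
[roadmap] Q3 goals; depends on [auth] and [storage].
[auth] Token refresh flow; see [storage] for key handling.
[storage] Where keys and blobs live.
"""

# Parse each line into a node: leading [id], then the node body.
nodes = {}
for line in text.splitlines():
    m = re.match(r"\[([^\]]+)\]\s*(.*)", line)
    if m:
        nodes[m.group(1)] = m.group(2)

# Edges fall out of the inline [id] references in each node's body.
edges = {nid: re.findall(r"\[([^\]]+)\]", body) for nid, body in nodes.items()}
print(edges["roadmap"])  # ['auth', 'storage']
```

Because it stays a flat text file, the usual tools (grep, diff, git) keep working, which is the point of the approach.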