Publishing Likewise
For the last six weeks I’ve been working on something I wasn’t entirely sure I had the right to publish. It’s called Likewise, and tonight I’m putting it out for review.
Likewise is a draft protocol for decentralised personal knowledge graphs. It’s a wire-level standard that lets the things AI systems infer about you (you’re a frequent grocery shopper, you’re close to Sarah, you commute on Tuesdays) live in your hands rather than with the platform that derived them. The spec is at getlikewise.ai/spec. It’s a v0.1, and it’s a draft.
I want to talk about why I built it, and then about why publishing it has been so much harder than building it.
The shift I couldn’t stop thinking about
Twenty-five years of consumer software has agreed on the same default: the party providing the service keeps the record of the user. Cookies were an implementation detail. Free accounts were an implementation detail. The personalised feed, the loyalty card. Implementation details on top of the same underlying contract. The party doing the work also kept the work’s record, and the record was not the user’s.
That arrangement persisted because the record was inert. A click stream, on its own, can’t describe you to yourself. It powers ad targeting and recommendation rankings, but it doesn’t model who you are.
That changed. The same logs that were inert raw material a decade ago are now training data and prompt context for systems that can describe you to yourself with uncomfortable accuracy. The economic value of being the party that holds the record has risen by an order of magnitude. So has the asymmetry between you and that party.
Personal AI is on its way. Whatever your phone is going to do for you in the next few years, the interesting layer is the data substrate underneath it: the graph of evidence and claims about you, and the permissions that govern who sees what. That substrate determines whether a personal AI is a product the user owns or a product the user is.
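To make that substrate concrete: the spec defines the actual data model, but as a purely illustrative sketch (every field name here is my shorthand, not anything from Likewise), the shape I mean is roughly a claim about a person, the evidence it was derived from, and the permissions governing who may read it:

```python
from dataclasses import dataclass, field

# Hypothetical sketch only — these names are illustrative shorthand,
# not the Likewise wire format. The point is the separation between
# the inference, its provenance, and its audience.

@dataclass(frozen=True)
class Evidence:
    source: str       # where the raw signal came from, e.g. "messages"
    observed_at: str  # ISO 8601 timestamp of the observation

@dataclass
class Claim:
    subject: str       # the person the claim is about
    statement: str     # the inference itself
    confidence: float  # how sure the deriving system is
    evidence: list[Evidence] = field(default_factory=list)
    readable_by: set[str] = field(default_factory=set)  # permission grants

    def visible_to(self, party: str) -> bool:
        # The subject can always read claims about themselves;
        # anyone else needs an explicit grant.
        return party == self.subject or party in self.readable_by

claim = Claim(
    subject="alice",
    statement="close to Sarah",
    confidence=0.8,
    evidence=[Evidence(source="messages", observed_at="2024-05-01T09:00:00Z")],
    readable_by={"alice-phone-assistant"},
)
assert claim.visible_to("alice")
assert not claim.visible_to("ad-network")
```

The design question the sketch gestures at is exactly the one above: whether the party holding `Claim` records is the user or the platform.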
Likewise is an attempt to specify that substrate. The standard is what makes any of this adoptable by anyone, including parties I’ll never meet. Publishing the standard before the implementation is, to me, the only sequence that makes sense.
The part that has been hard
I don’t have a research role. I have no academic affiliation, no lab behind me, no co-authors, no list of published papers with my name on them. I’m a developer who couldn’t stop thinking about this and who decided, eventually, to write it down.
Publishing a protocol from that position is uncomfortable. Most of the protocols I respect (Solid, AT Protocol, the Willow Protocol) come out of teams that had institutional credibility long before they had the artefact. I don’t have that, and I’m aware of it most days.
What I’ve been most afraid of is straightforward: that the protocol shape is wrong, and that someone who knows more than I do is going to say so in public. I’ve spent more than my share of the last six weeks rehearsing that conversation in my head. A few friends and one founder, in slightly different words, kept saying the same thing: just ship it. They’re probably right. Perfectionism can be craft, and it can also be a way of avoiding finding out you’re wrong.
What I’m asking for
Please read the spec, especially the motivation and comparison chapters. Tell me what doesn’t hold up. Tell me what does.
If you’ve worked on Solid, AT Protocol, Iroh, the Willow Protocol, or anything in that family, your eyes on this would be especially welcome. If you’re working in personal AI and thinking about the data substrate, your reaction matters to me. If you’ve been on the receiving end of an “isn’t this just X?” critique you wish someone had landed on you sooner, I’m asking for that.
You can email me at daniel@danielmay.co.uk, reply on the [Hacker News thread]([ADD HN URL AFTER POSTING]), or DM me on Twitter (@danielrmay). The repository is at github.com/danielrmay/likewise.