Writing is more valuable with AI
As knowledge workers, we write things down all the time. We write to think through problems, we write to express ideas to a machine, and we write to communicate our thoughts and understanding to others. At Equals, our writing is spread out over Slack, email, Google docs, knowledge bases, and tickets. It’s hard to search across all of these to find something, and much of it is impermanent. As a result, a large portion of our company’s store of knowledge is contained within the heads of the people who work here. This has been mostly fine for us - we are a small team, we onboard new hires rarely, and people know who to ask for help answering a question.
That set of assumptions around having a small team and being able to do rapid in-person knowledge transfer is quickly becoming outdated. With the rise of LLM-powered AI over the last few years, the value of written text has dramatically gone up, and this changes the value proposition significantly when deciding whether to write something down, where to write it down, and who to target the writing at.
LLMs need text
LLMs are predominantly text-based: they take in text, and they produce text. The first wave of text-based LLM interfaces is being transformed into tools that can generate many types of data - images, music, and video. They can perform deep research, plan vacations, write code, and analyze data. As the software world catches up to the potential of AI, more and more tools will be built that leverage this text-based interface without the user needing to be deliberate about invoking an AI via text.
There are already tools that let you ask questions of a corpus of text - like Reflect or Glean - but they’re only scratching the surface of what you can do with an LLM and text.
People will be able to build bespoke small tools quickly to solve individual problems. The cost of building something that is only useful in a narrow context is dropping dramatically, and I expect the rate of bespoke tool creation from software teams to increase hugely over the next few years.
Further to that, powerful tools that are deeply integrated with AI will have a level of customization that has not been seen before in software. Because LLMs are amenable to customization, the best tools built on top of them will be tunable as well - Cursor’s project rules, for example. LLMs do not have our specific context; we will need to provide it. We might provide this textual customization directly right now, but in the future there might be some fun chain-of-AI capabilities here - one AI could summarize your written texts into a form suitable for use in another AI tool, for example.
The way we will build bespoke software with LLMs, and customize them, will be through text-based interfaces with LLMs that do not inherently have the context of your own work, the knowledge of how your code is put together, or how you think about building and selling a product. Having all of your thoughts and ideas in text that you control is a huge boon: you can turn that text into good answers to questions, build new tools, and tune existing ones.
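As a concrete illustration of how owned text becomes raw material for a bespoke tool, here is a minimal sketch that gathers local markdown notes into the context of an LLM prompt. The function name, directory layout, and character budget are all hypothetical, and a real tool would rank or summarize notes rather than truncate them:

```python
from pathlib import Path


def build_prompt(notes_dir: str, question: str, max_chars: int = 8000) -> str:
    """Concatenate local markdown notes into a context block for an LLM prompt.

    Hypothetical helper: the paths, limits, and prompt shape are illustrative.
    """
    chunks = []
    total = 0
    for path in sorted(Path(notes_dir).glob("**/*.md")):
        text = path.read_text(encoding="utf-8")
        if total + len(text) > max_chars:
            break  # naive budget; a real tool would rank or summarize instead
        chunks.append(f"## {path.name}\n{text}")
        total += len(text)
    context = "\n\n".join(chunks)
    return f"Context from my notes:\n{context}\n\nQuestion: {question}"
```

The resulting string would then be sent to whatever model or tool you use; the point is that none of this is possible unless the notes exist as text you can read programmatically.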
Owning your writing
Given that we will need to capture all this valuable text to feed into LLMs, it’s becoming more and more important to own your own writing - to have it all available to you.1
It’s always been important to own your own data if you care about permanence - in the same way that owning your own house, or buying a favorite movie on Blu-ray instead of renting it, can be important. But now it’s also important to have all your text somewhere accessible, because it’s an immensely valuable resource for increasing the velocity of you and your team’s work.
I can only see it being a severe disadvantage to neglect owning all of your knowledge in written form, now that it’s more valuable than ever before.
Write more
With the increase in the value of writing, we should do more of it. We should aim that writing at someone with no context whatsoever on who we are and what we’re doing (because the LLM knows neither of those things). We can build up layers of understanding over time, in much the same way we all learn and grow over time - we need to capture that in text in order to grow the LLM’s understanding.
For example, I am being more deliberate about writing in the following ways:
- Explaining the context around what I’m doing to someone entirely new to this specific work (“We’re building a spreadsheet that…”). This will be useful grounding for the AI in general, for almost all applications.
- Describing the most important concepts within our systems; both what they are and why they are important (explaining the why in plain language turns out to be really important!).
- Communicating important decisions - I’m doing X because of Y.
- Capturing feedback on things I’m doing (“this didn’t work out well because of Z”).
Fundamentally, written text is massively more valuable now than it has ever been, and we should be mindful of and embrace that change.
1. Ownership here can be as simple as having the ability to download a full copy of the raw data. ↩︎