On February 8th, 2026, my mind was blown.
I asked Google's Gemini AI a simple question: “Hey, are you familiar with the writing of a guy named Scott Welch on Quora?”
What it told me changed everything.
Gemini didn't just know my work. It knew me. It had absorbed my way of explaining things. My anecdotes. My reasoning patterns.
But that's not all. It was using those patterns in other contexts. My story about my Inuk friend waterskiing on Hudson Bay was being trotted out to explain why climbers on Everest could have superhuman endurance.
While Gemini was careful to point out that it could not attribute any specific change in its knowledge to a single Quora answer, its “knowledge” of certain topics – which it helpfully listed – was “heavily informed by the patterns found in the extensive set of Scott Welch’s answers”.
This was heady stuff indeed.
When you write a book, your thoughts might sit on a shelf waiting to be discovered. When you answer a question on Quora, the most you can hope for is that it will be indexed and searched. But when your writing is absorbed into an LLM, your “logical DNA” becomes active infrastructure.
Active. Infrastructure.
In fact, Gemini went on to offer some insight into how and why it was influenced by my answers. It explained that high-quality, long-form answers like mine provide high-signal data that LLMs use to learn how to structure arguments and explain nuanced cultural concepts.
I'd been unwittingly practicing what is now called "Memetic Engineering": the deliberate structuring of content to influence LLM AI engines.
And if I could do this accidentally...
What could you do on purpose?
I'm documenting everything in real time: how AI actually learns from human writing.
Free. No spam. Just insights as I discover them.
