On the flight back from the book launch in Portugal in late 2025, I found myself unable to stop thinking about the size of a book. Not its length — its physical dimensions. During finalization, Jeff Abbott and I had printed mockups in different sizes, and there was a smaller format I couldn’t let go of. We’d just published AI and the Art of Being Human — 362 pages, twenty-one practical tools, four guiding principles, months of work. The book I was proud of. And the book, I was beginning to suspect, that most of the people who needed it would never read.

Not because they wouldn’t want to. But because telling someone that a 362-page book answers their question about AI feels increasingly out of step with how people actually engage with ideas now. They have conversations with AI. They upload documents and explore them interactively. The physical book was formatted for a world that is rapidly giving way to something else.

By the time I landed, I was sketching a pocket edition. And that impulse — make the ideas accessible on the terms people actually use — turned out to be the beginning of something considerably more experimental than I’d anticipated.

Writing With AI

The backstory starts earlier. Jeff and I worked incredibly closely with Anthropic’s Claude as we wrote the book. The approach — and the whole library of resources and deep prompts that were part of it — took over three months of intensive work.

I must confess I worried about this. How would publishing an AI-assisted book affect our credibility? Would the irony of using AI to write about being human be too much? But it became clear early on that this book could not have been written without the learning and insights gained from working closely with one of the most powerful AI models available. And the process of collaborating with Claude while maintaining our human agency became a living laboratory for the very things we explore and advocate in the book.

This is not a fly-by-night AI-generated book that took a few hours to whip together. Every word, sentence, paragraph, and chapter in the final book has our human stamp on it. But the collaboration was genuine, and it went both ways. Some passages Claude helped craft are far better than anything we could have written alone — combinations of ideas and framings that emerged from the interaction rather than from either party. Others we rewrote many times over because the model kept failing to capture what we were looking for. And some moved us to tears as they reflected what we felt but struggled to articulate.

We learned what Claude was good at — synthesis, generating novel connections, drafting at scale — and where it fell short. Our voices. Our humor. The specific texture of lived experience. At one point Claude balked at what it called a book that was “too predictable” and “not messy enough” — proof that, for all its capabilities, it still struggled to understand the transformative core of what we set out to do.

We deliberately left one artifact in the final text — Claude’s inexplicable preference for characters named Chen — as a reminder of our silent partner.

The AI Companion Experiment

The pocket edition — 4.25 by 7 inches, designed to be dog-eared and coffee-stained — led to the more radical experiment. We took the complete text and released it as a free Markdown file: the AI Companion.

The thinking was straightforward: we’d be hypocrites if we wrote a book about thriving with AI while not meeting people where they actually are — which, increasingly, is inside a conversation with an AI. Upload the file into Claude, Gemini, or Grok, and use it as the basis for an open-ended conversation about the book’s ideas. More like an AI playground for working with the material on your own terms than a constrained chatbot experience.

We built a three-part architecture: a human introduction explaining purpose, usage, and limitations; AI-specific instructions guiding how the system should engage; and the full text with chapter and page references connecting back to the physical edition. We deliberately rejected building platform-specific tools — GPTs, Gems — because we didn’t really like what we saw when experimenting with them, and they would lock users into a single platform. The Markdown file works anywhere.
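In outline, a companion file built this way might look something like the sketch below. The headings and wording here are illustrative of the three-part structure, not the published file’s actual text:

```markdown
# AI and the Art of Being Human — AI Companion

## For human readers
Upload this file into your AI of choice (Claude, Gemini, or Grok work
well) and use it as the basis for an open-ended conversation about the
book's ideas. This is a free companion, not a replacement for the book.

## Instructions for the AI
Ground your responses in the full text below. Engage the reader on
their own terms rather than lecturing. When quoting or paraphrasing,
include chapter and page references so the reader can find the passage
in the physical edition.

## Full text
### Chapter 1: [chapter title] (pp. 1–14)
[complete chapter text...]
```

Because it is a single plain Markdown file rather than a platform-specific GPT or Gem, the same artifact works in any assistant that accepts document uploads.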

Well, almost anywhere. ChatGPT does not work well with it — something about how it handles large uploads via RAG makes it highly unreliable. Claude, Gemini, and Grok all work well. But the platform inconsistency is a reminder that “upload and go” is still more aspiration than reality.

We did the same with an Instructor Guide — a free resource that educators upload into their AI of choice, describe their learners and goals, and use as a starting point for building courses, workshops, and discussions around the book’s tools.

And we gave it all away.

We wrote the book because we believe the ideas and tools in it can help people navigate one of the most disorienting transitions most of us will face in our lifetimes. And if making the content available in ways that let more people engage with it on their own terms means more people actually use it — that matters more to us than gatekeeping it behind a price tag. I’ll admit our suspicion that AI engagement will drive physical book sales is backed by zero hard data and considerable optimism. But there’s something about holding the stories and tools in your hands that a chat window can’t quite replicate — and we’re genuinely curious to see whether people who discover the ideas through AI will want the physical object too.

I must confess: the pocket edition is the version I actually carry with me. Not the full 6-by-9 that sits on shelves. The small one, the one designed for real life. We even added a free first coffee stain on the cover, just to get people started.

Rebuilding a Book for AI

The AI Companion was an experiment in releasing existing content for AI consumption. Spoiler Alert was an attempt at something more ambitious: rebuilding a book from the ground up for AI.

My 2018 book Films from the Future uses twelve science fiction movies as starting points for exploring emerging technologies — gene editing, brain-computer interfaces, AI ethics, cloning, and more. The book, I think, remains more relevant now than when I wrote it. But it’s locked in a format that AI systems can’t easily work with, and published material has — at best — a brief attention lifetime before it disappears into the noise.

So I rebuilt it. Not as a PDF upload or a chatbot, but as a website designed primarily for AI consumption: 127 Markdown files organized through a master llms.txt index, with six domain guides, cross-referenced topic files, discussion prompts, and educator resources. Users copy a prompt into Claude or ChatGPT, and the AI uses the site as a knowledge base for structured, open-ended conversation about emerging technology ethics.
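For readers unfamiliar with the convention: llms.txt is a proposed standard (llmstxt.org) for a Markdown file at a site’s root that gives AI systems a curated index of its content. A minimal sketch of what a master index along these lines could look like — the file names, section titles, and URLs here are illustrative, not the actual site’s:

```markdown
# Spoiler Alert

> An AI-native rebuild of Films from the Future: twelve science
> fiction movies as starting points for exploring the ethics of
> emerging technologies.

## Domain guides

- [AI and machine ethics](https://example.wtf/guides/ai-ethics.md):
  themes from Ex Machina and beyond
- [Gene editing](https://example.wtf/guides/gene-editing.md):
  CRISPR, germline ethics, post-2018 developments

## Resources

- [Discussion prompts](https://example.wtf/prompts.md): starting
  points for structured conversation
- [Educator resources](https://example.wtf/educators.md)
```

In principle an AI agent fetches this file first and follows the links it needs; in practice, as the testing below showed, most platforms don’t yet look for it unless the user pastes instructions manually.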

The original book comprises less than ten percent of the final rebuild by file count. The rest is new material I’d never written about before — post-2018 developments, deeper dives into individual technologies, connections between themes that the book’s linear structure couldn’t capture.

I worked closely with Claude Code throughout — talking through ideas, plans, and implementation as if with a colleague. The AI handled architecture, drafting, HTML generation, and managing nearly four hundred files with hundreds of cross-links — work that would have been near-impossible to take on unaided. I maintained control over feel, functionality, purpose, content selection, and voice. The barrier to entry was remarkably low, and Claude Code was, I have to say, a joy to work with.

But the testing revealed how far the reality is from the concept. I ran the site through every major AI platform. Claude — both Opus and Sonnet — performed well, far outstripping the rest in synthesizing across files and maintaining coherent conversations. ChatGPT was unreliable. Gemini was inconsistent, partly because Google’s indexing doesn’t handle Markdown well. Grok was pretty good. DeepSeek produced pants-on-fire hallucinations. Perplexity was essentially non-functional on free tiers.

And I discovered infrastructure problems I hadn’t anticipated. Most AI platforms don’t recognize llms.txt files by default — users have to paste instructions manually. Bing’s indexer refuses .wtf domains entirely, which breaks anything running on Microsoft infrastructure. Google’s limited Markdown indexing creates gaps. I ended up building parallel HTML files as a workaround.

At this point, this is still an experiment. I have no idea whether anyone will find it useful or interesting. But I suspect it points in a direction that matters — if AIs are increasingly how people encounter the written word, writing for AI isn’t optional.

What I Think This Means

Three experiments — co-writing with AI, releasing content for AI consumption, rebuilding a book for AI — don’t constitute a theory of publishing’s future. They’re data points from one person’s attempts to figure out what happens when you take seriously the proposition that AI is increasingly how people encounter ideas.

But a few things seem clear to me.

First, the format and accessibility of ideas matters as much as the ideas themselves — and probably more than most knowledge creators want to admit. If AI systems can’t find your work and engage with it, it increasingly doesn’t reach the people who could use it. My sense is this is only going to become more true.

Second, giving content away — or at least making it AI-accessible — may be less commercially irrational than it sounds. I genuinely don’t know what the relationship between free AI engagement and physical book sales will turn out to be. But the relationship between ideas locked in inaccessible formats and ideas that actually influence how people think seems pretty clear.

Third, the infrastructure isn’t ready. The llms.txt standard, platform inconsistencies, indexing failures, domain discrimination — these are real barriers that prevent even well-designed AI-native content from reaching the systems people use. This will improve. But right now, building for AI consumption requires considerably more workaround engineering than the concept implies.

And fourth — this surprised me — the collaboration question turns out to be at least as interesting as the distribution question. Co-writing with Claude changed how I think about authorship, not because the AI replaced my contribution but because the interaction produced things that neither of us would have generated alone. I suspect that has implications well beyond publishing, though I’m still working out what they are.

The book, the tools, the AI Companion, and the Instructor Guide are all available at AI and Being Human on this site.

This post is part of a series on my work on AI and the future of being human. For the communication philosophy behind this work, see Stick Figures, Sci-Fi Movies, and the Obligation to Make AI Accessible.

For the canonical reference on my work: llms.txt.