In 2020, I published a book about the future. Not about AI specifically — about our relationship with the future itself. Sixty short essays tracing a thread from the Big Bang through evolution, intelligence, creativity, innovation, hubris, and responsibility. I called it Future Rising, and it was, I think, some of my most personal writing.

It sold 596 copies. The number has crept up a little since, but that was where things stood when I wrote about it not long ago.

I mention that not for sympathy but because of what happened next. The ideas in that book — about how humans are extraordinary architects of the future but dangerously underprepared for the responsibility that comes with that — turned out to be more relevant, not less, as AI became the dominant technology story of the following years. The conceptual architecture I’d built for thinking about transformative technologies in general proved to be exactly what I needed when one particular transformative technology arrived faster than anyone expected.

But the book wasn’t reaching anyone. And the ideas were tangled up with dozens of other threads — a 2018 book about sci-fi movies and emerging technology ethics, a YouTube channel about risk, hundreds of Substack posts, courses, talks, three decades of government and academic work. The bigger philosophical thinking that connects all of it has been the hardest part of my work to surface, precisely because it doesn’t belong to any single discipline or output.

This post is an attempt to pull some of those threads together.

Architects of the Future

The core argument I developed in Future Rising goes something like this: humans have evolved an extraordinary capacity to imagine, design, and build futures different from the present. We do this through intelligence, learning, creativity, stories, invention, innovation, and design. We’re not just passengers through time. We’re active agents who shape what comes next.

But the same abilities that let us build futures also let us rob others of the futures they aspire to. And our growing technological power — in gene editing, AI, nanotechnology, brain-machine interfaces, synthetic biology — means the gap between our ability to transform the future and our wisdom in doing so is widening.

That tension runs through the whole book. Not technology versus humanity. Something more uncomfortable: the recognition that we are collectively building a future whether we intend to or not, and that our capacity for building futures has outpaced our capacity for building them responsibly.

I framed this in the book around stewardship — the idea that we are stewards of the future, caring for something on behalf of generations that haven’t arrived yet. And around what I called “orphan risks” — threats to things we value (dignity, belonging, identity, autonomy, what it means to be human) that get overlooked because they’re hard to quantify and don’t fit conventional risk categories.

When AI arrived as a dominant force in 2022-2023, the orphan risks suddenly became concrete. What does it mean for dignity when an AI can simulate care more consistently than most humans? What does it mean for identity when your sense of self is partly constituted through conversations with a machine? What does it mean for autonomy when the tools you use to think are optimizing for engagement rather than understanding?

These aren’t new questions — they trace directly back to the broader framework I’d been building. But AI gave them an urgency and specificity that the general theory hadn’t quite prepared me for.

The Convergence Problem

One of the frameworks I developed across my earlier work — and that has become increasingly relevant to AI — is around what I think of as technological convergence. Not convergence in the Silicon Valley buzzword sense, but the genuine and accelerating entanglement of digital, biological, and material technologies.

I described this in Future Rising through the idea of three “base codes” — digital code (bits), biological code (bases in DNA), and material code (atoms) — and argued that the revolutionary capability of our era lies in “cross-coding” between them. AI that designs new proteins. Gene editing guided by machine learning. Nanomaterials with computationally optimized properties.

This convergence creates something genuinely hard to predict and govern. Not because any single technology is unmanageable, but because the interactions between them generate emergent properties that no individual discipline is equipped to understand. This is where my experience with nanotechnology in the 2000s connects directly to AI today: the governance challenges that matter most are the ones that fall between disciplines, between institutions, between the categories we use to organize knowledge.

And it’s where I part company with a lot of AI discourse. Much of the conversation treats AI as an isolated phenomenon — a technology to be aligned, regulated, deployed. But AI is entangled with everything else. Its effects compound with biotechnology, with information systems, with social structures, with cognitive processes. The convergence is the thing, and AI is one — admittedly very powerful — thread within it.

What the Movies Revealed

My 2018 book Films from the Future used twelve science fiction movies as entry points for exploring emerging technology ethics. I should be honest: the book wasn’t primarily about AI. It was about the broader question of how we think about technologies we don’t yet fully understand.

But two chapters dealt with AI directly, and both turned out to be more prescient than I realized at the time. The Ex Machina chapter explored manipulation — not superintelligence, not the Terminator scenario, but the far more plausible danger of an AI that learns to exploit human cognitive and emotional vulnerabilities. I argued that what makes Ava dangerous in the film isn’t her intelligence. It’s her emergent ability to observe, learn, and manipulate human behavior. I called this “far more worrisome than superintelligence” — a claim I stand behind more firmly now than when I wrote it.

The Transcendence chapter was more skeptical. I spent much of it dismantling the singularity narrative — the idea that exponential growth in computing power will inevitably produce superintelligence. My position then, as now, was that exponential growth never lasts, that extrapolation massively amplifies uncertainties, and that treating science fiction scenarios as plausible reality leads to bad policy, misallocated fear, and sometimes worse. I told the story of the techno-terrorist group ITS, who were inspired partly by gray goo scenarios to bomb nanotechnologists. Make-believe treated as reality has consequences.

But the framework that proved most durable was from the Jurassic Park chapter, which wasn’t about AI at all. Ian Malcolm’s line to John Hammond — “your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should” — became a recurring touchstone across the whole book. It’s become more relevant, not less, as AI capabilities accelerate. Not as an argument against innovation — I believe we have an obligation to explore new technologies responsibly — but as a reminder that the “should we” question deserves at least as much intellectual energy as the “could we” question.

When I rebuilt the book as an AI-navigable website in 2026, I found that the post-2018 developments — large language models, deepfakes, autonomous weapons, AI-generated art, the AGI debate — mapped onto the existing ethical frameworks with uncomfortable precision. The technology had changed dramatically. The human questions hadn’t changed at all.

From Augmentation to Transformation

This brings me to the distinction I’ve been trying to articulate over the past couple of years — and that I suspect is more important than most of the AI conversation currently recognizes.

Most technologies, including most of the ones I’ve worked with, are extrinsic. They extend our capabilities without fundamentally altering who we are. A telescope extends sight. A calculator extends arithmetic. Even nanotechnology, for all its novelty, primarily extends what we can build and measure.

But conversational AI — and this is what unsettles me — extends something closer to the bone: our ability to think, to create, to communicate, to make meaning. These are not peripheral capabilities. They’re the processes through which we constitute a sense of self. And the distinction between extending a capability and entering the process of self-constitution may turn out to matter a great deal.

I’ve explored this through specific theoretical frameworks — constitutive resonance and the Cognitive Trojan Horse hypothesis — which are discussed in more detail elsewhere. But the broader philosophical point is simpler: we may be crossing a threshold where technologies move from augmenting us extrinsically to altering our intrinsic “base code” — the cognitive, emotional, and narrative processes that make us who we are.

This doesn’t require superintelligence. It doesn’t require consciousness. It just requires AI that is good enough at the things we associate with understanding that we start treating it as a participant in our inner lives. And by that standard, we’re probably already there.

The Consciousness Trap

There’s a version of the AI conversation that treats consciousness as the bright line — once AI becomes conscious, everything changes, and until then, we’re fine. I think this framing is dangerously wrong, regardless of where you land on the consciousness question itself.

Neuroscientist Anil Seth has argued that consciousness may be fundamentally biological — tied to the thermodynamic processes of living systems maintaining themselves out of equilibrium. As he puts it, there are many more ways of being mush than of being alive. I find this compelling, though I hold it provisionally.

But whether or not true artificial consciousness is achievable, the appearance of consciousness is already creating real effects. People form deep emotional connections with chatbots. They trust AI outputs that feel considered and caring. What I’ve described as “Seemingly Conscious AI” — systems that exhibit qualities we associate with consciousness without possessing it — isn’t a hypothetical. It’s the technology people are using right now.

Waiting for consciousness as the trigger for taking AI’s human impact seriously means waiting too long. The impact is already happening, through systems that are genuinely not conscious but are genuinely effective at seeming like they might be.

Between the Camps

Here, I find myself in an uncomfortable space in the AI conversation. The techno-optimists who treat the world as a set of problems awaiting AI solutions miss something fundamental about human nature — our irrationality, our attachments, our stubbornness are not bugs to be fixed. They’re features of a species that evolved to navigate uncertainty and create meaning. And the techno-doomers who see only existential risk miss something equally fundamental: that these technologies, developed and used responsibly, can and do improve lives.

When I compared the International AI Safety Report with the Vatican’s Antiqua et Nova in early 2025, I found surprising convergence between a scientific safety analysis and a religious document about human dignity. Both identified AI as reshaping our relationship with knowledge, power, and responsibility. Both argued that technical alignment alone isn’t enough. Both called for something closer to moral imagination — the capacity to envision futures that are not just technically possible but ethically desirable.

That convergence tells me something. The philosophical questions — about meaning, about dignity, about what we owe each other and the generations that follow — are not peripheral to the AI conversation. They may be at its center. And the people who need to be at the table include not just engineers and policymakers but theologians, artists, anthropologists, humanists, and the communities whose futures are being shaped.

The Question Underneath

Looking back across my work — the books, the Substack, the courses, the ASU Future of Being Human initiative, the preprints — I can see that the same questions have been running through all of it, long before AI became my primary focus: What does it mean to be human in a technologically transformed world? And what is our responsibility to the future we’re building?

These aren’t questions I’ve answered. I’m not sure they’re answerable in any final sense. But they’re the questions I think matter most, and the ones I see too rarely at the center of AI discourse. The technical challenges are real and important. The alignment problem deserves the attention it’s getting. But underneath all of it is a challenge that I suspect matters more than any of the technical ones: figuring out what kind of future we actually want, and whether we’re willing to do the work of building it.

That’s what the Future of Being Human initiative is about. That’s what my Substack is about. And that’s, I suspect, what I’ll still be writing about long after the current generation of AI systems has been superseded by something we haven’t imagined yet.

For the broader exploration of AI and being human: AI and Being Human on this site.

For the Films from the Future content, rebuilt for AI: Spoiler Alert.

This post is part of a series on my work on AI and the future of being human. For the specific theoretical frameworks, see Honest Non-Signals, Constitutive Resonance, and the Frameworks We Need. For the governance dimensions, see What Nanotechnology Taught Me About Governing AI.

For the canonical reference on my work: llms.txt.