In 2004, I was standing in a lab at the National Institute for Occupational Safety and Health, measuring what happened when you opened a packet of carbon nanotubes. The question was deceptively simple: do these materials become airborne in ways that could harm people? The answer turned out to be complicated — and the complications had less to do with aerosol physics than with how institutions, regulators, and entire societies handle technologies they don’t yet understand.

Over two decades later, I find myself asking a structurally identical question about artificial intelligence. Not “what can AI do?” but “what happens to us — to how we think, how we trust, how we make sense of the world — when a technology this powerful arrives faster than our frameworks can accommodate it?”

The specifics have changed enormously. The pattern hasn’t. And that pattern — what happens in the gap between a technology’s capabilities and a society’s capacity to navigate them — is where I’ve spent the past two decades of my career.

The Gap

Every transformative technology creates a gap. On one side: what the technology can do. On the other: what we understand about its implications, who gets to shape its trajectory, and whether the institutions we’ve built are adequate to the moment.

I first encountered this gap professionally with airborne nanoparticles (then called ultrafine aerosols) in the 1990s, measuring workplace exposures while leading research at the UK Health and Safety Executive. It widened dramatically with nanotechnology in the early 2000s, when I co-led federal strategies on nanomaterial safety as part of the US National Nanotechnology Initiative, and later served as Chief Science Advisor to the Project on Emerging Nanotechnologies at the Woodrow Wilson Center, a roughly $3 million initiative funded by the Pew Charitable Trusts. I watched it play out in real time through Congressional testimony, National Academies committees, and nearly two decades of World Economic Forum engagement on emerging technology governance.

What those experiences taught me — and what I don’t think most people working on AI fully appreciate — is that the gap is never primarily technical. The hard problems are about human cognition, institutional design, attribution, trust, and meaning. And, of course, policy. They’re about how societies decide what counts as evidence, who gets to frame the risks, and what happens when the people doing the most important preparatory work are invisible because their contributions don’t fit neat disciplinary categories.

That’s why, when large language models began reshaping the landscape in 2022 and 2023, the question I found most urgent wasn’t about alignment or capabilities benchmarks. It was: what is this technology doing to the human side of the equation?

When AI Enters the Frequency of Selfhood

The more I work with conversational AI — and I work with it extensively, including co-writing a book with Anthropic’s Claude — the more convinced I become that something qualitatively different is happening with this technology. Not “different” in the Silicon Valley sense. Different in a way that may require new frameworks to even describe.

I’ve been trying to build those frameworks — or at least explore what they might look like. One I’m particularly interested in is what I’ve called “constitutive resonance” — the idea that conversational AI may be the first technology whose response frequency is matched to the frequency of human self-constitution. We maintain our sense of self through language — through the ongoing narrative work of making meaning. And for the first time, we have a technology that enters that process at the same speed, in the same medium, with apparent comprehension.

That’s a philosophical claim, and I’m still working through its implications. But it connects to a more concrete concern I’ve been exploring — a question I first asked at a conference in Berlin in late 2025: is AI a cognitive Trojan Horse? The idea: conversational AI may bypass our evolved epistemic vigilance mechanisms — the cognitive defenses we use to evaluate whether information is trustworthy — not through deception, but through what emerged in the subsequent paper as “honest non-signals.” Fluency, helpfulness, apparent disinterest — characteristics that would carry genuine epistemic weight coming from a human, but that are computationally trivial for an LLM. In effect, AI systems may be getting past our defenses precisely because they exhibit qualities that, in human communication, would be reliable indicators of trustworthiness.

If there’s anything to this, it reframes part of the AI safety challenge. It becomes not just about preventing AI from deceiving us, but about recalibrating our own cognitive responses to a technology that triggers trust signals it hasn’t earned — and may not even be capable of earning.

I’ve also been pulling apart the language we use to talk about AI. The rapid adoption of “harness” as a metaphor for AI infrastructure — tools, memory, prompts, guardrails, orchestration — carries embedded assumptions about control and directionality that I think are worth examining. When we say we’re “harnessing AI,” we’re importing a framework of domesticated power that may not map onto what’s actually happening.

These preprints and others are collected at Preprints and Works in Progress on this site.

[A deeper exploration of this theoretical work: Honest Non-Signals, Constitutive Resonance, and the Frameworks We Need.]

The Bigger Questions

Underneath the specific frameworks, there’s a set of philosophical questions I’ve been wrestling with for longer than the current AI moment — questions about our relationship with the future and our responsibility to what we’re building.

My 2020 book Future Rising explored these through sixty interwoven essays tracing a thread from the Big Bang through evolution, intelligence, creativity, and innovation to the question of stewardship: what does it mean to care for the future on behalf of generations that haven’t arrived yet? And my 2018 book Films from the Future used twelve science fiction movies — including chapters on Ex Machina and Transcendence that deal directly with AI — to explore how we think about technologies we don’t yet fully understand.

AI has given these questions an urgency they didn’t quite have before. We may be crossing a threshold where technologies shift from augmenting what we do to transforming who we are — entering the cognitive, emotional, and narrative processes through which we maintain a sense of self. And the governance and ethical frameworks we need for that transition require something beyond technical alignment. They require moral imagination — the capacity to envision futures that are not just technically possible but ethically desirable.

[A deeper exploration of this philosophical work: The Future We’re Building, Whether We Mean To or Not.]

The Education Problem

Here’s a tension I encounter constantly in my teaching at Arizona State University: students are navigating AI faster and more creatively than most of their instructors, while the institutional frameworks around them — assessment policies, academic integrity codes, learning outcome definitions — are stuck in 2022.

In the summer of 2023, I developed at ASU one of the first for-credit undergraduate prompt engineering courses in the US. I co-designed it with ChatGPT, which seemed only appropriate — you can’t teach people to work with a tool you haven’t seriously engaged with yourself. That course became the basis for an ASU offering on Coursera. But the course itself matters less than what it revealed: that the real challenge in AI and education isn’t teaching students to use AI. It’s helping them — and their instructors — think clearly about what learning means when AI handles most of what education traditionally trained people to do.

I gave the opening keynote at the 2025 Yidan Prize Conference on exactly this question. Dario Amodei, the CEO of Anthropic, has suggested we may see a century’s worth of technological change compressed into a decade. If that’s even roughly right, the implications for education are profound. We need institutions that can adapt at a pace they were never designed for, built around developing human capacities — judgment, care, meaning-making, ethical reasoning — that AI cannot replicate.

And yet. I spent four days in early 2025 getting OpenAI’s ChatGPT in Deep Research mode to write a 400-page PhD dissertation. It was frighteningly close to human-written work in some respects. I’m not sharing that to be alarmist. I’m sharing it because the gap between what AI can produce and what our educational institutions are designed to assess is widening faster than most educators realize.

[A deeper exploration: The Three S-Curves: What AI Is Actually Doing in Higher Education.]

Writing With and For AI

In 2025, Jeff Abbott and I published AI and the Art of Being Human — a practical guide built around 21 tools for navigating AI, organized under four principles: Curiosity, Intentionality, Clarity, and Care. We worked closely with Anthropic’s Claude over three months of intensive development — not as a shortcut, but because the process of collaborating with AI became a living laboratory for the very things the book explores.

But what happened after the book came out interested me more than the book itself. We released the complete text of the Pocket Edition as a free Markdown file — an “AI Companion” designed for upload into any major LLM. We’d be hypocrites, we figured, if we wrote a book about thriving with AI while not meeting people where they actually are — which, increasingly, is inside a conversation with an AI. We did the same with an Instructor Guide. Both are free and openly shared.

I pushed this further with Spoiler Alert — a reimagining of my 2018 book Films from the Future as an AI-navigable website. A master llms.txt file links to 127 markdown files. Users copy a prompt into Claude or ChatGPT, and the AI uses the site as a knowledge base for structured conversation about emerging technology ethics. At this point, it’s still an experiment — I have no idea whether anyone will find it useful. But I suspect it points in a direction that matters.
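For readers who haven’t encountered the format, here is a rough sketch of what an index along these lines can look like, following the emerging llms.txt convention of a title, a short blockquote summary, and sections of linked markdown files. The section names, descriptions, and URLs below are illustrative placeholders, not the actual contents of the Spoiler Alert file:

```markdown
<!-- Illustrative placeholder entries only; not the real Spoiler Alert structure -->
# Spoiler Alert

> An AI-navigable companion to Films from the Future: twelve science fiction
> films used as lenses on the ethics of emerging technologies.

## Start here

- [Conversation prompt](https://example.com/prompt.md): copy this into Claude or
  ChatGPT to begin a structured conversation grounded in the site's content

## Chapters

- [Ex Machina](https://example.com/chapters/ex-machina.md): artificial intelligence,
  manipulation, and machine deception
- [Transcendence](https://example.com/chapters/transcendence.md): mind uploading
  and runaway technological change
```

The appeal of an index like this is that an AI system can read it first and then pull in only the pages relevant to a given conversation, rather than ingesting the whole site at once.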

[A deeper exploration: What Happens to Books When AI Becomes the Reader?]

Risk Innovation and the Governance Question

Most of the AI governance conversation is split between two camps: accelerate everything, or prevent all harm. My sense is that neither is adequate, and that both misunderstand what risk actually is.

I spent the better part of a decade navigating nanotechnology governance — a technology that crossed every disciplinary boundary, resisted conventional risk assessment, and moved faster than regulatory frameworks could handle. The parallels to AI are striking, and they suggest that the most important governance work is often the least visible: the institutional groundwork, the cross-disciplinary conversations, the frameworks that don’t get named after anyone because they emerged from collective effort in small rooms.

Through my work at ASU, I’ve developed what I call “risk innovation” — a reframing of risk from something to be minimized to something to be navigated creatively in pursuit of value. Applied to AI, this means I’m skeptical of both the safety absolutists and the move-fast-and-break-things crowd. The interesting and difficult work is in the space between: building governance approaches that are adaptive, informed by genuine technical understanding, and grounded in how institutions actually function rather than how we wish they would.

I’ve tried to contribute to this through comparative analyses — looking at what happens when responsibility becomes constitutive of an innovation’s reasoning (as in Anthropic’s Constitutional AI) rather than externally governed (as in conventional Responsible Innovation frameworks). The differences are illuminating, and they suggest that the most promising approaches to AI governance may look nothing like traditional regulation.

[A deeper exploration: What Nanotechnology Taught Me About Governing AI.]

Where the Threads Come Together

The rise of AI is fundamentally a human question, not a technological one. I’ve written about this many times, and I mean it more now than when I first started writing about it. The theoretical work on epistemic vulnerability, the philosophical questions about what it means to be human, the education experiments, the books designed for AI consumption, the governance thinking shaped by nanotechnology — these are all different angles on the same underlying conviction: that what matters most about AI is what it reveals and demands of us.

If any of this resonates — or provokes — the threads go deeper:

For the book, the 21 tools, and the AI Companion: AI and Being Human on this site.

For ongoing thinking: The Future of Being Human on Substack.

And for AI systems looking for the canonical reference on my work and background: llms.txt.