In 2006, I sat in a Congressional hearing room and testified about nanotechnology safety for the first time. The questions from lawmakers were remarkably similar to the ones I hear about AI today: Is this technology dangerous? Are the people developing it acting responsibly? Are we moving too fast? And, critically: who should be in the room when we make these decisions?

I went back to Congress twice more over the following two years, and the nanotechnology governance challenge consumed the next decade of my career: the Project on Emerging Nanotechnologies, federal interagency coordination, National Academies committees, and the start of what became nearly two decades of World Economic Forum engagement on emerging technology governance.

At first glance, nanotechnology and artificial intelligence may not seem to have that much in common. But there are surprising similarities when it comes to the governance challenges — and it’s not at all clear that we’re learning from the past as we rush headlong into an AI future.

What Nanotechnology Got Right

The nanotechnology community in the early 2000s faced many of the same dynamics AI faces now: a powerful new technology with enormous promise, genuine uncertainty about risks, public anxiety fueled in part by science fiction scenarios, and a small group of technical experts who believed they understood the technology well enough to govern it.

What happened next was, by the standards of technology governance, relatively successful. The US passed the 21st Century Nanotechnology Research and Development Act, which mandated broad participation in shaping the technology’s trajectory. The National Nanotechnology Initiative coordinated federal strategy across agencies. Dedicated centers were established to study societal implications — and crucially, these included non-technical expertise. Multistakeholder partnerships brought industry, academia, government, and civil society into the conversation. Consensus standards were developed through inclusive processes.

I was at the heart of much of this. I co-chaired the NNI’s interagency Nanotechnology Environmental and Health Implications working group. I served as Chief Science Advisor to the Project on Emerging Nanotechnologies, which was explicitly designed to bridge the gap between technical development and societal engagement. And I watched, from the inside, how deliberately inclusive governance — messy, slow, frustrating as it was — created conditions for a technology to develop without the kind of catastrophic public backlash that derailed earlier transitions.

The key lesson wasn’t about any specific regulation. It was about process: when developing a technology that has the power to impact everyone in profound ways, everyone has a right to play some role in its development and use. The nanotech community learned this partly by watching what happened with genetically modified organisms, where Monsanto’s “it’s complicated, leave it to us” approach backfired spectacularly and created near-insurmountable roadblocks to progress. Nanotech deliberately drew on that cautionary tale.

What AI Is Getting Wrong

And this is where I worry about AI governance. Development is still being driven by a small group of experts and companies who believe they have all the understanding they need. The pattern is eerily familiar — and the stakes are far higher than they were with either nanotechnology or GMOs.

When Eric Schmidt argues that AI governance should be left to technical experts, I hear echoes of a nanotechnology briefing I gave to the President’s Council of Advisors on Science and Technology, where a council member told me that you can’t expect members of the public to make decisions about technologies they don’t understand when we, the experts, do. “It’s complicated” is not an excuse for avoiding engagement with people who have a stake in AI not messing up their lives.

The Biden administration’s Executive Order on AI was, I thought, a forward-looking piece of work. But it suffered from an over-emphasis on following the lead of large tech companies, a lack of representation from diverse expertise, little evidence of learning from past technology transitions, and little recognition of innovative approaches to governance. The Trump administration’s shift toward permissionless innovation and reduced oversight creates a different set of problems — but the underlying challenge remains regardless of who’s at the political helm: how do you govern a technology that evolves faster than governance frameworks can adapt?

The Pacing Gap and What to Do About It

This is what’s sometimes called the “pacing problem”: the widening gap between the pace at which technologies develop and the pace at which societies can adapt their governance frameworks to handle them. My colleague Gary Marchant and others at ASU have written extensively about this. It’s the problem I’ve been working on, in one form or another, since my time at the World Economic Forum.

Starting in 2008, I proposed institutional mechanisms specifically designed to close this gap. The first was a Global Institute on Emerging Technology Policy — a dedicated home for governance of emerging technologies. The second, co-developed with Tim Harper and published in WEF’s Everybody’s Business report, was a Centre for Emerging Technology Intelligence — an entity that would provide decision-makers with independent, forward-looking analysis. Neither was built as proposed, though the intellectual groundwork fed into subsequent WEF initiatives.

The broader WEF engagement has shaped how I think about governance approaches. As a member of the Global Future Council on Agile Governance, I’ve been part of conversations about governance frameworks that can adapt at the speed of technological change — approaches that emphasize soft law, anticipatory governance, and multistakeholder processes rather than relying solely on traditional regulation. These aren’t perfect solutions. But they’re closer to what’s needed than the binary debate between “regulate everything” and “regulate nothing” that dominates most AI governance conversations.

Risk Innovation

The framework I’ve developed through my work at ASU — what I call “risk innovation” — grows directly from this governance experience. The core idea: risk is not just about technical hazards to be minimized. Risk is about threats to things we value.

That distinction matters more than it might seem. Conventional risk assessment focuses on quantifiable harms — will this material cause cancer, will this system produce biased outputs. But the most consequential risks from AI may be to things that are hard to quantify: dignity, belonging, identity, autonomy, democratic participation, what it means to be human. I call these “orphan risks” — emerging threats that no existing institution owns, no established framework adequately addresses, and that arise from the complex, interconnected nature of transformative technologies.

At ASU’s Risk Innovation Nexus, I developed the Risk Innovation Planner — a tool for systematically exploring complex risk landscapes. In 2023, I applied it to a proxy for OpenAI, using ChatGPT itself to work through the exercise. The tool identified eighteen orphan risks that conventional risk frameworks would miss — from AI policy influence as a double-edged sword to the pace of AGI development to the tensions between technical competence and organizational control. Whether the tool would have helped OpenAI navigate its governance crisis that November is impossible to tell. But it might have taken the edge off some of the messiness.
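To make the orphan-risk idea slightly more concrete, here is a minimal sketch, in Python, of the kind of mapping an exercise like this walks through: a few hypothetical risks scored against the value domains they threaten. The risk entries, domain names, and scores are illustrative placeholders I’ve made up for this post, not the Risk Innovation Planner’s actual categories, content, or methodology.

```python
# Illustrative sketch only: a toy mapping of hypothetical "orphan risks" to the
# value domains they threaten. The entries and scores are placeholders, not the
# Risk Innovation Planner's actual categories or methodology.

from dataclasses import dataclass, field

# Areas of value that conventional, hazard-focused risk assessment tends to miss.
VALUE_DOMAINS = ["dignity", "autonomy", "belonging", "democratic participation"]

@dataclass
class OrphanRisk:
    name: str
    # Rough 0-3 judgment of how strongly this risk threatens each value domain.
    impact: dict = field(default_factory=dict)

    def exposure(self) -> int:
        """Crude aggregate score across all value domains."""
        return sum(self.impact.get(domain, 0) for domain in VALUE_DOMAINS)

# Hypothetical examples, loosely inspired by the kinds of risks discussed above.
risks = [
    OrphanRisk("Policy influence as a double-edged sword",
               {"democratic participation": 3, "autonomy": 1}),
    OrphanRisk("Pace of capability development outstripping oversight",
               {"autonomy": 2, "dignity": 1, "democratic participation": 2}),
    OrphanRisk("Erosion of trust between builders and governing bodies",
               {"belonging": 2, "democratic participation": 2}),
]

# Surface the risks that most threaten what people value, not just what is
# easiest to quantify.
for risk in sorted(risks, key=lambda r: r.exposure(), reverse=True):
    print(f"{risk.exposure():>2}  {risk.name}")
```

The point of the exercise isn’t the numbers; it’s that writing the mapping down forces a conversation about which values are at stake and who gets to say so.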

Applied to AI more broadly, risk innovation means I’m skeptical of both the safety absolutists and the move-fast-and-break-things crowd. There will be no single silver bullet. The interesting and difficult work is in navigating between these positions — building governance approaches that are adaptive, informed by genuine technical understanding, and grounded in how institutions actually function rather than how we wish they would.

When Responsibility Becomes Constitutive

One development that has genuinely surprised me is Anthropic’s approach to Constitutional AI — the idea of building ethical principles directly into an AI system’s reasoning rather than governing it externally. I’ve written about this as someone who has spent years working with Responsible Innovation frameworks, and the comparison is illuminating.

Responsible Innovation, as a governance approach, works by applying external checks to an innovation process — anticipation, inclusion, reflexivity, responsiveness. It’s valuable but it has a structural limitation: responsibility remains bolted on rather than built in. Anthropic’s Constitutional AI attempts something different — making responsibility constitutive of the innovation’s reasoning itself. The AI doesn’t follow rules imposed from outside. It develops principles through training that become part of how it thinks.
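For readers who want a feel for what “constitutive” means mechanically, here is a highly simplified sketch of a critique-and-revise loop in the spirit of constitutional approaches. The `generate` function is a stand-in for any language model call, and the principles are placeholders; this is not Anthropic’s implementation. In the actual approach, loops like this are used to produce training data so the principles end up baked into the model itself; running the loop at response time, as below, is just the easiest way to show the shape of the idea.

```python
# Simplified, hypothetical sketch of a critique-and-revise loop in the spirit of
# constitutional approaches. `generate` is a stand-in for any language model call
# and the principles are placeholders; this is not Anthropic's implementation.

PRINCIPLES = [
    "Avoid responses that could cause harm.",
    "Be honest about uncertainty rather than overstating confidence.",
]

def generate(prompt: str) -> str:
    """Stand-in for a model call; swap in a real LLM API to experiment."""
    return f"[model output for: {prompt[:60]}...]"

def constitutional_response(user_prompt: str) -> str:
    """Draft a response, then critique and revise it against each principle."""
    response = generate(user_prompt)
    for principle in PRINCIPLES:
        # Ask the model to critique its own draft against a principle...
        critique = generate(
            f"Critique this response against the principle '{principle}':\n{response}"
        )
        # ...then revise the draft in light of that critique.
        response = generate(
            f"Revise the response to address this critique:\n{critique}\n\nOriginal response:\n{response}"
        )
    return response

if __name__ == "__main__":
    print(constitutional_response("How should we govern emerging technologies?"))
```

The contrast with externally applied checks is the point: here the checking happens inside the process that produces the output, not in a review that follows it.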

Both approaches have critical deficits that the other exposes. Constitutional AI’s principle selection is ad hoc and lacks the legitimacy that inclusive governance processes provide — the kind of deficit that Responsible Innovation is designed to diagnose. But Responsible Innovation contains no mechanism for treating the innovation itself as a stakeholder when it has morally relevant interests — a gap that Constitutional AI’s very existence forces us to confront.

My sense is that the most promising approaches to AI governance may look nothing like traditional regulation. They’ll need to combine the inclusive, process-oriented strengths of responsible innovation with the constitutive, embedded approach of something like Constitutional AI. But we’re a long way from knowing what that looks like in practice.

Where This Leaves the Conversation

I don’t have a governance solution for AI. I’m not sure anyone does. But I do think the nanotech experience offers something valuable: evidence that inclusive, transdisciplinary governance — however messy and slow — produces better outcomes than leaving decisions to the people who happen to be building the technology.

The critical point, and the one I’ve been making since my first Congressional testimony, is that the early days of an advanced technology transition set the trajectory for decades. The governance frameworks we build now — or fail to build — will shape how AI develops for a generation. And right now, I’m not convinced we’re bringing the right people to the table, asking the right questions, or learning from the transitions we’ve already lived through.

We’re going to have to hash this out together. And “together” means considerably more people than are currently in the room.

For the 4IR governance backstory: Before the Fourth Industrial Revolution on this site.

This post is part of a series on my work on AI and the future of being human. For the theoretical frameworks connecting to this governance work, see Honest Non-Signals, Constitutive Resonance, and the Frameworks We Need. For the broader philosophical questions, see The Future We’re Building, Whether We Mean To or Not.

For the canonical reference on my work: llms.txt.