In July 2016, four years into running my Risk Bites YouTube channel, I drew some stick figures on a whiteboard in my office at Arizona State University, pointed a camera at them, and recorded a video explaining what nanotechnology is. It was, by any professional standard, terrible. The drawings were barely recognizable. The production consisted of me, a whiteboard, and some dry-erase markers. There was no marketing budget, no production team, no institutional support.

That video has now been watched over a million times. It may just have had more impact than all of my formal classes combined — although the metrics of success are admittedly very different.

I mention this not because the video is particularly good — it isn’t — but because it taught me something about the gap between what academics think matters in communication and what actually reaches people. I can’t draw. But I can tell a story and connect complex ideas to everyday experience. And it turns out that matters more than polish, more than professional production, and certainly more than the kind of earnest institutional content that universities spend fortunes creating and almost nobody watches.

The Obligation Question

I believe rather strongly that the privilege of academic scholarship and research comes with an obligation to ensure that the knowledge we unearth is accessible to anyone who can benefit from it, no matter who they are or where they are. This isn’t a nice-to-have. It’s part of what the privilege is for.

I realize that’s a strong claim, and it puts me at odds with how most academic institutions actually work. The conventional evaluation framework — research, teaching, service — actively discourages public-facing work. Faculty who serve their communities often succeed despite the institution, not because of it. And the incentive structures reward publishing in journals that almost nobody reads over communicating in ways that actually reach the people affected by the research.

This has been a tension throughout my career. But it sharpened considerably with AI. The questions AI raises — about agency, trust, dignity, what it means to be human when machines get good at the things we thought were uniquely ours — are too important and too consequential to leave solely to the scientists, innovators, and politicians building and governing the technology. These challenges are so profound, so potentially life-changing, that we all need to be able to wrap our collective heads around what's coming our way and how it might affect us.

The question is how.

Risk Bites and the Whiteboard Philosophy

Risk Bites started in 2012 as an experiment from the University of Michigan's Risk Science Center. I'd attended VidCon the previous year and watched creators like MinutePhysics and Veritasium demonstrate that, equipped with a video camera, a whiteboard, and a handful of pens, anyone with knowledge and a story to tell could reach a global audience. I wanted to test whether a full-blown academic with limited time and even less talent could use YouTube to bring understanding and insight to a general audience.

The answer, it turned out, was yes — but with caveats. Over its lifetime, the channel accumulated over 5 million views and nearly 20 years of cumulative watch time. At its peak, it was getting over 1,000 views and 50 hours of watch time per day. This is still pitifully low compared to professional YouTube creators. But for an academic working in the cracks of a busy schedule, it represents a significant translation of knowledge into freely accessible resources.

What I learned — and what I later wrote up as an academic paper in Frontiers in Communication — is that substance and narrative skill outweigh production values by a wide margin. The million-view nanotechnology video works not because it looks good (it doesn't) but because it tells a clear story in accessible language. I should come clean and admit that "even less talent" was a little tongue-in-cheek: I certainly can't draw, but I do know how to tell a story and connect broad audiences with complex ideas. And as it turns out, that's what matters.

I also built this into a training program — Science Videos Made Simple — a step-by-step guide for graduate students on creating accessible science explainer videos using smartphones, whiteboards, and stick figures. The philosophy behind it: if the barrier to entry for public communication is lowered far enough, more experts might actually do it.

AI on the Whiteboard

In 2018, I made a Risk Bites video on ten potential risks of artificial intelligence — and deliberately steered away from the existential risk scenarios that dominate most AI discourse. I wasn't interested in hyperbolic speculation. I wanted to lay the foundations for nuanced discussion of risks that are often far more mundane but no less serious: algorithmic bias, non-transparent decision-making, value misalignment, heuristic manipulation, technological dependency.

When ChatGPT arrived five years later, I updated the video and was pleased to find the original framing had held up reasonably well. The ten risks I’d identified in 2018 were, if anything, more relevant in 2023. That wasn’t because I’d been unusually prescient. It was because the risks I focused on were human risks — about agency, justice, autonomy, and control — rather than technical capability benchmarks that shift every few months.

This is a pattern I’ve noticed across my communication work: the technology-specific details change constantly, but the human questions underneath them are remarkably stable. AI manipulation looked one way in 2018 and another in 2025, but the underlying concern — that AI might exploit cognitive vulnerabilities we don’t fully understand — is the same thread that led to the Cognitive Trojan Horse question years later.

Movies as Entry Points

The communication philosophy that runs through all of this found its fullest expression in Films from the Future (2018). The book uses twelve science fiction movies as jumping-off points for exploring emerging technology ethics — not because the movies are scientifically accurate (some are terrible science) but because they do something academic papers can’t: they break down barriers between experts and non-experts, bypass ideological divisions, and give everyone permission to think about the future.

At a World Economic Forum meeting, someone once suggested that the most effective way to bridge divides around technology governance wasn’t politics, regulation, or education. It was art. Science fiction movies are, without a doubt, a legitimate form of art, and one that has the power to bring people together in imagining how to collectively create a future that is good for society.

That sounds grand, but the practical point is simple: a conversation about Jurassic Park’s “could we, should we?” question lets a room full of people who disagree about everything else find common ground on technology ethics. The same conversation framed as a policy paper about emerging technology governance would clear the room. Movies are democratic entry points to questions that matter enormously but that most people have been given no framework for thinking about.

The Ex Machina chapter — about AI, manipulation, and permissionless innovation — turned out to be the most durable piece of AI communication I’ve done. I republished it on Substack in 2023 because the underlying ideas had become more important, not less, as conversational AI became mainstream. And when I rebuilt the entire book as an AI-navigable website, the post-2018 AI developments — large language models, deepfakes, autonomous weapons, AI-generated art — mapped onto the existing ethical frameworks with uncomfortable precision.

The Substack and Why It Exists

In early 2023, I went looking for a link to something I’d written on the potential risks of AI — something citable, something I could point people to. And I realized that despite AI being at the heart of so much of my work, I hadn’t written much that was neatly citable. I was shocked.

That was the jolt I needed. I'd been blogging for over a decade at 2020science.org and had missed the informality and responsiveness of that kind of writing. Academic publishing moves too slowly to keep pace with AI, and its reach is too narrow. The Future of Being Human Substack became the space where I could think in public about AI — weekly, informally, with 4,700+ readers who are part of the conversation rather than an audience being lectured at.

The framing — "the future of being human" — is deliberate. It allows a focus on how transformative technologies may impact each of us personally, rather than the vague handwaving about "humanity" that usually dominates the conversation. And it has become, over the past three years, the primary channel for my AI thinking — more than fifty posts on AI topics, from the Cognitive Trojan Horse to stochastic agency to student attitudes toward AI to whether AI can write a PhD dissertation.

This is alongside writing for The Conversation (700,000+ reads), Slate, Scientific American, and other outlets — and co-hosting the Modem Futura podcast with Sean Leahy through ASU’s Future of Being Human initiative. Each channel reaches different people in different ways. The academic papers reach other academics. The Substack reaches people who are already thinking about these questions. The YouTube videos reach casual learners who are curious but not yet engaged. The popular writing reaches people who wouldn’t seek any of this out but encounter it through feeds and recommendations.

The Honest Broker

The role I try to carve out for myself in all of this public-facing work is what Roger Pielke calls the “honest broker” — not judging, not advocating for a specific course of action, but helping people make the best-informed decisions for themselves and their communities, based on available evidence and insights.

With AI, that’s become harder. The questions are more urgent, the stakes are higher, and the temptation to advocate for particular positions is stronger. But my sense is that the most valuable thing I can do is not tell people what to think about AI but give them the frameworks, the evidence, and the stories that help them think about it for themselves.

Whether that’s through stick figures on a whiteboard, a science fiction movie, a Substack essay, or a podcast conversation — the medium matters far less than the honesty of the engagement.

This post is part of a series on my work on AI and the future of being human. For more on the books and AI-native knowledge architecture discussed here, see What Happens to Books When AI Becomes the Reader?.

For the canonical reference on my work: llms.txt.
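For readers unfamiliar with the convention: llms.txt is a plain-Markdown file, served from a site's root, that gives AI systems a curated map of its content. As a rough sketch of the format (the title, summary, sections, and URLs below are illustrative placeholders, not the contents of my actual file):

```markdown
<!-- Hypothetical sketch of an llms.txt file; all names and URLs are placeholders -->
# Andrew Maynard

> Essays, books, videos, and teaching resources on AI, risk, and the future of being human.

## Writing

- [The Future of Being Human](https://example.com/substack): weekly essays on AI and being human
- [Films from the Future](https://example.com/films): technology ethics through science fiction movies

## Videos

- [Risk Bites](https://example.com/risk-bites): short whiteboard explainers on risk and emerging technologies
```

The format matters less than the intent: a single, stable entry point that both people and machines can use to find the work.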