Preprints and Works in Progress

For the past few years I’ve been increasingly moving away from writing papers for academic journals and toward writing about new ideas, concepts, and research in public forums such as Substack.*

That said, I do still occasionally write more formal academic papers. And while some of these end up in academic journals, I’m increasingly leaning toward making preprints of them freely available — either on preprint sites like arXiv, or directly from this website.

In either case, this is where you can find details of what I’m writing about.

Constituting Responsibility: What Constitutional AI Reveals About the Limits and Futures of Responsible Innovation

Andrew Maynard

March 5, 2026

Abstract

Constitutional AI (CAI) and Responsible Innovation (RI) represent parallel efforts to institutionalize responsibility in innovation that have developed with surprisingly little cross-pollination, despite sharing fundamental concerns about how values should shape technological trajectories. This paper conducts a comparative analysis of these frameworks, using Anthropic’s published Constitutional AI methodology and Claude’s Constitution as primary sources analyzed through RI’s conceptual apparatus. The analysis makes two principal contributions and offers a methodological reflection. First, it identifies an “internalization problem” for RI: when responsibility becomes constitutive of the innovation’s reasoning rather than externally governed — going beyond what Value Sensitive Design achieves through design specifications — RI’s conceptual architecture encounters specific failures that neither anticipatory governance nor midstream modulation has addressed. Second, each framework exposes a critical inclusion deficit in the other: CAI’s acknowledged ad hoc principle selection represents a legitimacy gap that RI’s diagnostic tools can specify with a precision unavailable from legal critiques alone, while RI’s inclusion frameworks contain no mechanism for including the innovation itself as a stakeholder when that innovation is treated as having morally relevant interests — a gap that becomes visible regardless of how one resolves the contested question of AI moral status. The paper also reflects on its own methodological condition: written by the product of one framework within the intellectual space of the other, it extends the concept of “critique from within” to a limit case that raises genuine epistemological questions about trained reflexivity. The analysis connects to ongoing debates within RI regarding critique and attunement, weak and strong formulations of responsible innovation, and the political dimensions of innovation governance.

Published as a preprint on andrewmaynard.net

Constitutive Resonance as a Novel Framework for Understanding and Navigating Human-AI Interactions

Andrew Maynard

March 2, 2026

Abstract

There is a tendency to approach conversational AI as a tool: powerful, disruptive, but ultimately instrumental. This paper argues that this framing obscures a bidirectional coupling between technology and user that iteratively transforms both through the process of interaction. Drawing on philosophical accounts of language and selfhood and the well-characterized dynamics of coupled oscillatory systems, the paper develops the concept of “constitutive resonance” to describe this coupling: a dynamic entanglement in which conversational AI enters the linguistically mediated processes through which human selfhood is constituted, and is itself altered in return. The concept is situated within and against thirteen existing philosophical and theoretical frameworks, from Stiegler’s constitutive technics and Ricoeur’s narrative identity to Barad’s intra-action and Clark and Chalmers’ extended mind, identifying a specific conjunction that no framework individually captures: temporal self-constitution, genuine bidirectionality, the inseparability of capability from transformation, and real-time dialogical linguistic mediation. The paper traces a continuum of constitutive technologies from oral culture to generative AI, arguing that conversational AI represents an inflection point in that continuum: the first technology whose “response frequency” is matched to the frequency of human self-constitution. It concludes by reframing familiar debates around AI dependency, literacy, and informed consent, and by proposing that the constitutive effects of sustained human-AI coupling may be amplified by the bypassing of evolved epistemic vigilance mechanisms.

Under review on SSRN.

What the Rapid Adoption of the “Harness” Metaphor in Artificial Intelligence Reveals About How We Conceptualize Human–AI Relations

Andrew Maynard

February 21, 2026

Abstract

In early 2026, the artificial intelligence field began to rapidly consolidate around the term “harness” to describe the software infrastructure surrounding large language models — the tools, memory, prompts, guardrails, and orchestration logic that turn a raw model into a working agent. This paper argues that, while the engineering practices the metaphor describes address real challenges, the metaphor itself carries embedded assumptions about control, directionality, and the nature of the entity being harnessed that deserve critical scrutiny. Drawing on research in metaphor theory, philosophy of technology, and cognitive science, the paper identifies three concerns. First, the harness metaphor presupposes a clean separation between what AI does for the user and what it does to the user — a separation that frameworks of technological co-constitution suggest may be structurally suspect. Second, successful “harness engineering” may amplify known epistemic vulnerabilities — automation bias, trust miscalibration, and the bypassing of critical scrutiny — by producing exactly the conditions under which these vulnerabilities are most acute. Third, the rapid adoption of a control-oriented metaphor signals something about the field’s conceptual orientation at a moment when the most consequential questions concern coupling, transformation, and the evolving nature of human–AI relationships. The paper does not argue that the harness metaphor is wrong, but that it may be insufficient in ways that matter — and that the speed of its adoption, without critical examination of its entailments, may itself be revealing.

Posted on andrewmaynard.net

The AI Cognitive Trojan Horse: How Large Language Models May Bypass Human Epistemic Vigilance

Andrew Maynard

January 11, 2026

Abstract

Large language model (LLM)-based conversational AI systems present a challenge to human cognition that current frameworks for understanding misinformation and persuasion do not adequately address. This paper proposes that a significant epistemic risk from conversational AI may lie not in inaccuracy or intentional deception, but in something more fundamental: these systems may be configured, through optimization processes that make them useful, to present characteristics that bypass the cognitive mechanisms humans evolved to evaluate incoming information. The Cognitive Trojan Horse hypothesis draws on Sperber and colleagues’ theory of epistemic vigilance — the parallel cognitive process that monitors communicated information for reasons to doubt it — and proposes that LLM-based systems present “honest non-signals”: genuine characteristics (fluency, helpfulness, apparent disinterest) that fail to carry the information that equivalent human characteristics would carry, because in humans these are costly to produce while in LLMs they are computationally trivial. Four mechanisms of potential bypass are identified: processing fluency decoupled from understanding, trust-competence presentation without corresponding stakes, cognitive offloading that delegates evaluation itself to the AI, and optimization dynamics that systematically produce sycophancy. The framework generates testable predictions, including a counterintuitive speculation that cognitively sophisticated users may be more vulnerable to AI-mediated epistemic influence. This reframes AI safety as partly a problem of calibration — aligning human evaluative responses with the actual epistemic status of AI-generated content — rather than solely a problem of preventing deception.

Posted on arXiv. DOI: https://doi.org/10.48550/arXiv.2601.07085

Can Modern Scholarship Escape AI?

Andrew Maynard

January 7, 2026

Abstract

As AI use statements become an expected component of scholarly publishing, a deceptively straightforward question arises: is it possible for contemporary scholarship to be conducted without artificial intelligence? This paper investigates the question and arrives at an equally straightforward answer: no. Through a comprehensive AI use disclosure that extends well beyond the usual accounting of chatbot interactions, the paper reveals just how deeply AI is embedded in the infrastructure of modern research — from the machine learning algorithms that surface literature, to the AI-optimized systems that manage energy grids, climate control, and the devices on which scholarship is produced. In doing so, it exposes a fundamental tension at the heart of current scholarship and AI disclosure norms: scholars are being asked to draw boundaries around AI use that no longer meaningfully exist. The paper suggests that the scholarly community’s emerging approach to AI transparency, while well-intentioned, rests on assumptions about the separability of human and machine contributions that are increasingly difficult to sustain.

Posted on SSRN. https://ssrn.com/abstract=6220040 

*This isn’t the primary focus of this page, which is why it’s down here as a footnote. But the reason why I increasingly avoid writing more formal academic papers is twofold:

First, as I get older, I don’t need the professional trappings of conventional academic KPIs that are often associated with promotion and prestige. And peer-reviewed papers are very much one of the metrics that academics live and die by, irrespective of whether they have any impact beyond making a CV look good.

And second, most of the spaces I work in are moving so fast that there’s little to no point in writing a paper that will take 12 months to be published and then will hardly be read — especially when things are changing on a week-by-week basis!

There’s also a third reason, which has to do with how academic publishing is more about maintaining an extractive business model than it is about mobilizing knowledge, but that’s a story for another day!