The Model Dignity Check: Five Questions Before Any AI System Goes Live
The Model Dignity Check is five questions. It takes ten minutes. And it catches things that months of development can miss.
The questions: Who becomes invisible when we optimize? What “normal” is baked into the training data? How does this perform for our most vulnerable users? Can affected humans understand and contest decisions? Does this strengthen or erode human agency?
These aren’t compliance questions — they’re dignity questions. The difference matters. Compliance asks “is this legal?” Dignity asks “does this treat people as people?” Hana’s triage AI was probably compliant. It wasn’t dignified.
The key to making this tool work is specificity. “Who becomes invisible” needs names, not categories: “elderly residents in walk-ups” rather than “some users.” Vague answers hide real problems. If any answer troubles you, the instruction is simple: redesign before deploying.
This is also a tool that needs to be run more than once. Every time you update or retrain a model, the answers may change. The biases shift. The vulnerable populations shift. The dignity check is a discipline, not a one-time gate.
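The discipline described above — five fixed questions, no vague answers accepted, re-run on every retrain — can be sketched as a simple pre-deployment gate. This is an illustrative sketch only; the names and the vague-answer heuristic are assumptions, not part of the book's tool.

```python
# Illustrative sketch of the Model Dignity Check as a repeatable gate.
# All identifiers here are hypothetical, not from the book itself.

DIGNITY_QUESTIONS = [
    "Who becomes invisible when we optimize?",
    "What 'normal' is baked into the training data?",
    "How does this perform for our most vulnerable users?",
    "Can affected humans understand and contest decisions?",
    "Does this strengthen or erode human agency?",
]

# Placeholder answers that hide real problems ("some users" instead of names).
VAGUE_ANSWERS = {"", "n/a", "unknown", "some users"}

def dignity_check(answers: dict[str, str]) -> list[str]:
    """Return the questions still blocking deployment: missing or vague answers."""
    blockers = []
    for question in DIGNITY_QUESTIONS:
        answer = answers.get(question, "").strip().lower()
        if answer in VAGUE_ANSWERS:
            blockers.append(question)
    return blockers

# Run on every retrain, not just once. Specific answers pass; vague ones don't.
answers = {DIGNITY_QUESTIONS[0]: "elderly residents in walk-ups"}
print(dignity_check(answers))  # the four unanswered questions remain blockers
```

If the returned list is non-empty, the instruction from the check applies: redesign before deploying.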
Download the Model Dignity Check from the book’s website, or explore it in full in AI and the Art of Being Human.
The Model Dignity Check is one of 21 practical tools from AI and the Art of Being Human by Jeffrey Abbott and Andrew Maynard. The characters and narratives in the book are fictional — designed to reveal truths about AI and being human that only stories can capture.
