Aaryan Salman
On the Unoccupied Chair

Who Claims Authority When the Robot Teaches?

In 2016, Georgia Tech researchers asked participants to follow a robot labelled 'Emergency Guide Robot' through a building. Half had watched it navigate poorly just minutes earlier. When a simulated fire alarm sounded and artificial smoke filled the room, all 26 participants followed it anyway. In some conditions, participants even followed it into a darkened room blocked by furniture.

The lead researcher, Paul Robinette, called it overtrust. The condition underneath it, I think, is this: when no one has been designated to make the override decision, following is simply the lower-cost move. The robot does not earn authority. The absence of a clear human with authority creates the space the robot fills.

I completed DataCamp's AI Ethics course this week, as part of my AI specialisation. It sat alongside something I have been carrying since I audited Joanna Bryson's course at Hertie during my MPP. Her provocative 2009 paper, Robots Should Be Slaves, makes an argument I find myself returning to often: humanising AI systems leads to the misallocation of both resources and responsibility. Robots are fully owned by us. We determine their goals. The moment an individual or organisation treats an AI tool as a decision-maker rather than a designed instrument of human choice, the humans in the chain do not disappear. They just become harder to locate.

I work across the education and nonprofit sectors, and I have been speaking with school leaders who have already integrated AI tools into classrooms, including humanoid robots. When I ask who is responsible if the system produces a harmful outcome, the conversation does not collapse because people do not care. It collapses because the structures they are operating inside were not built to hold that question.

The vendor has limited liability in the contract. The funder is measuring adoption. The deploying organisation does not have the technical capacity to audit what it procured. Each actor's individually rational move is to treat accountability as someone else's domain. Dennis Thompson named this dynamic in the American Political Science Review in 1980: the problem of many hands. When responsibility is distributed across enough actors, it belongs to no one. The equilibrium holds until something goes wrong, and then it is very hard to find the person who decided anything.

The DataCamp course references a study I looked into further, published in Lancet Digital Health in 2022 by researchers at MIT and Harvard. They trained AI models on chest X-rays, CT scans, and mammograms and found the models could accurately predict patients' self-reported race across all imaging types, even after the researchers systematically tested and filtered out every feature they could hypothesise as the source, including bone density, anatomy, and image resolution. The signal did not disappear. The researchers could not identify where it was coming from, and neither could the clinicians. What troubled me most was not the finding itself but what it means for education organisations that deploy tools trained on historically unequal data and describe the outputs as neutral. You cannot remove what you cannot locate. And if you do not know you need to look, you will not look.

We have done a version of this before. A 2015 randomised evaluation of the One Laptop Per Child programme in Peru, published in the American Economic Journal, found no measurable impact on academic achievement. The laptops worked. Children used them. What was absent was the governance architecture that would have made the technology serve learning rather than simply arrive inside it. AI is now moving into decisions that are considerably harder to reverse than whether a child has a laptop: which students get flagged for intervention, how their performance is assessed, what they are shown.

Robinette's explanation stayed with me. In an emergency, under pressure, people look for an authority figure. The robot filled that role not because it had earned it but because no human had claimed it. That question of who claims it, and what the system around them makes possible or impossible, is one I am looking to explore with visionary education leaders.

If it is something you have been sitting with, I would like to hear from you.


Some of what I have been building toward on this sits in a framework I published on Teacher Development Goals for global citizenship and competence, which you can find on ResearchGate. It is not about AI specifically, but it is about the kind of leadership capacity that makes the difference between technology arriving in a school and technology serving one.

How to Cite: Salman, A. (2026, May 4). Who Claims Authority When the Robot Teaches? Aaryan Notes. https://aaryan.work/notes/who-claims-authority-when-the-robot-teaches
Thank you for reading. If something here resonated with you, I’d love to hear your thoughts, reflections, or suggestions. Drop me a note at hi@aaryan.work.
The Fineprint

The ideas, reflections, and opinions shared here are my own. They do not necessarily represent or commit any organization, institution, or network with which I am or have been affiliated. I may refer to books, tools, platforms, or other products that I personally find useful or that have informed my thinking. Such mentions are made in the spirit of sharing; I do not receive compensation, sponsorship, or endorsement in return, unless explicitly noted. This note is intended to invite dialogue and reflection, and should not be construed as personal, professional, legal, or policy advice. You are encouraged to think critically, consult diverse sources, and form your own views. In short, these are my evolving thoughts, shared in good faith, shaped by my curiosity, learning, and experience, and not official positions unless stated otherwise.



Aaryan Salman 2025 © All rights reserved. Impressum