
Envision this scenario: in the middle of a packed crowd, a stranger suddenly and forcibly poisons you with a drug that causes unpredictable, unstoppable movements. You would not be blameworthy for harming others while the drug is in effect: although you are causally responsible, you are not morally responsible. Why? You could not reasonably have done anything to prevent the harm.

Some philosophers call this the ‘control condition.’ It is also why animals are often not treated as bearers of moral responsibility: they act on instinct and cannot control their actions.

One might resist this condition by pointing out that many non-human entities are treated as persons by both people and the law. After all, some species of great apes, as well as the Ganges river in India, have been granted the legal status of personhood. But treatment as a moral patient is significantly different from treatment as a moral agent (the receiver of a moral action versus the doer of one). All moral agents must satisfy the control condition, but the same requirement may not apply to moral patients.

The baseline is that moral patients have rights and can be harmed. We can harm the river by polluting it. But when there is a disaster, it is not the river harming someone. No one would hold the river accountable (say, by suing it). It is not that the river killed ten people, but rather the flood (the event). Put in more structured terms: since rivers have no intentions or goals and no sense of right or wrong, the ‘control condition’ does not apply. The entity in question is simply not in control.

What does this have to do with AI? The motivation is twofold. When it comes to the moral status of robots, we must ask the same questions ethicists have always asked. Here are some initial thoughts. If robots can be harmed just as I can be harmed (or can feel pain the way intelligent animals do), that gives plausible reason to treat them as moral patients. It is harder to say when robots should be treated as moral agents. The nature of selfhood and the sufficient conditions for moral responsibility are hotly contested and intimately related issues; if we cannot settle these questions even for humans, progress in the AI context seems unlikely.

The second reason has to do with why we care about this question in the first place. A good way to anchor any philosophical conversation (especially in the Continental tradition) is with the question: so what? The most direct answer is that we should care because we might create artificial systems that warrant human treatment, which raises a human rights concern about potential violations. A couple of years ago, this might have been dismissed as irrelevant (the objection being that such technologies lie far in the future), but now the possibility of even more sophisticated artificial systems is real and closer than ever.

When we examine the ‘control condition’ in the context of robots, there are many tricky puzzles to either answer or dispel. This duality mirrors the general task of metaphysics, which also falls into two modes: showing that there really is a puzzle about a given issue (and trying to solve it or propose ways to address it), or showing that, once the issue is put in clearer terms, there is no puzzle at all, although there appears to be one.

I will now focus on the latter method, known as deflation. GPT-4o has caught headlines for sounding like Scarlett Johansson, but beyond that, this OpenAI model arguably passes the Turing test (it is hard to distinguish from a real human in conversation). The issue, however, is that LLMs have no knowledge of underlying reality, as they are trained largely on text. This also implies that LLMs have no moral agency, since they cannot understand right or wrong in the sense that people do. And because LLMs have no self through which to experience sensations, they also lack the control humans have over the consequences of their outputs. On this deflationary view, although there seems to be a puzzle about LLM consciousness, there is not.

A potential counterargument appeals to multimodal LLMs, which are trained on images, videos, and text. But although they are trained on richer forms of data such as video, RGB values are, at the end of the day, a numeric way of representing the colors we see. Multimodal models therefore do not solve the issue of phenomenal experience. GPT-4o can give you examples of videos containing blue objects, but it still does not know blue the way humans do. Swapping text for numeric pixel data does not solve the sensory issue: neither numbers nor text constitute the building blocks of phenomenal experience.
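To make the point concrete, here is a minimal Python sketch of my own (an illustration, not any particular model's input pipeline) showing what a vision model actually receives when it "sees" a blue image: an array of integers, later normalized into floats, and nothing more.

```python
import numpy as np

# A hypothetical 2x2 "image" of a pure blue patch, represented the way
# image data is commonly stored: one RGB triple of integers per pixel.
blue_patch = np.array([
    [[0, 0, 255], [0, 0, 255]],
    [[0, 0, 255], [0, 0, 255]],
], dtype=np.uint8)  # channels: red = 0, green = 0, blue = 255

# Models typically consume a normalized float tensor; still just numbers.
normalized = blue_patch.astype(np.float32) / 255.0

print(blue_patch.shape)   # (2, 2, 3): height, width, RGB channels
print(normalized[0, 0])   # [0. 0. 1.] -- the "blueness" is only this vector
```

Whatever else is true of such systems, the training signal here is a grid of numbers, which is the author's point: numeric pixel data is no closer to the phenomenal experience of blue than text is.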

This is an important distinction to make, because if robots or AI do one day gain phenomenal experience, that would qualify them for moral patienthood and agenthood. If robots have phenomenal experience, then they should have a sense of self, since subjective experience is inherently perspectival and experienced through a self. And if they have a self, then they should satisfy the control condition, qualifying them as moral agents and opening the door to moral responsibility and blame.

But some are quick to jump to conclusions, swayed by how human-like these systems behave. The observable phenomenon of human-likeness is the pop-culture route to concluding that AI is conscious (I take consciousness to mean the phenomenal sort: subjective experience and a first-person sense of self). The idea is that if it talks like a human, it must be human-like in other ways. Philosophers, however, have shifted toward internal structure, debating which structures and properties are necessary and sufficient for consciousness to obtain. I take the latter approach to be more rigorous, as it is not based on appearances. This matters because to determine whether an entity is in control (and thus eligible for moral agency), its sensory experience and self-consciousness must first be established.

I do, however, see conscious robots as a genuine possibility. Just as with different animals, there are multiple ways, or structures, in which consciousness can be realized (dog brains differ from human brains, yet most would concede dogs have some degree of consciousness). This is the multiple realizability argument. General features (internal structures) are shared across all conscious beings, which is why finding the minimal set of sufficient conditions for consciousness is so important (it lets us generalize beyond humans) and why this is such an active area in neuroscience and philosophy.

Although there are many pressing moral questions and challenges surrounding AI systems, questions that further stoke debates about the nature of consciousness itself, I want to end on a positive note. Jackson (1982) once wrote: “It is to be expected that there should be matters which fall quite outside our comprehension . . . the wonder is that we understand as much as we do” (135). While we are constrained in many ways, human understanding has advanced in ways that breathe new life into the discipline of philosophy, and I anticipate AI will make us confront these questions with added urgency.