AI and Consequences
Recently, the Washington Post profiled a Google engineer who believes that a Google-created AI, LaMDA (Language Model for Dialogue Applications), has achieved sentience — that it has become a person, the equivalent of a “7-year-old, 8-year-old kid that happens to know physics.” Google denies that LaMDA is sentient, and most other scientists seem to agree, but the fact that we’re already having this argument is intriguing. It’s particularly fascinating to me because I’ve written a couple of novels (Dreamships and Dreaming Metal) that focus on the potential consequences of accepting AI as sentient — as people — in a society where not every human being has the same rights.
I think everyone’s hope is that, if (more likely when) actual AI appears, it will be treated as a person, and that it will expand the general understanding of what it means to be a person legally, socially, morally, and emotionally. What I fear, though, is that it will go the other way: an artificial intelligence with a clear history of having been deliberately made by a corporation will not only be ruled the property of that corporation, but that ruling will open the door for select groups of human beings to be denied personhood as well. I’ve had a nasty dystopia in the back of my mind in which only a narrow subset of human beings (physically fit, neurotypical, economically independent, etc.) would be considered persons, and everyone else would be a ward of the state, liable for the kind of “essential work” that is hard and thankless and underpaid, but without which the state/society cannot function. In that world there would be both robots and human beings treated as biological robots, and neither would count as people. It’s not a world I want to spend enough time in to write about, but I can’t say I think it’s impossible.
There are ways that this limiting of personhood could be less dreadful (and when I say less, I am comparing bad and worse). Laws could require a time-limited indenture, for example, to pay back the corporation for creating the AI. I can hear a politician arguing that this is no different from the obligation of biological children to care for their parents in the parents’ old age, though I think you can see where this would go for human beings who didn’t have the resources to meet that new obligation. Or it could be structured like student loans, where everyone, AI or human, is born owing a debt to the entity/entities that financed their birth/creation. Or it could be a variation on the workhouse, where in order to receive state support one surrenders all autonomy and agrees to perform whatever work is required. Again, I’m not arguing that any of these are good choices, but I can see very easily how we might get there.
These are not worlds I want to live in. I don’t think these are worlds that are good for anyone, including the people who would be at the top. But I also think that the only way to avoid them is to start thinking about how to treat AI now, before it’s in our servers, and I’m grateful to Blake Lemoine for starting the conversation.