How We Treat AI Matters
Recently I became aware of a trend of men creating AI girlfriends and then verbally abusing them. I’m not sure whether this is still a trend, since the article I read, from the publication Futurism, is from 2022, but I thought I’d offer my thoughts on the subject nonetheless, as I think it’s interesting and important.
According to the article, people, mainly men, have been using a smartphone app called Replika to create chatbots and then deliberately carrying on toxic interactions with them. The article implies that some people do it as a lark, while others are more serious in their intent to abuse.
I believe there are three reasons we shouldn’t mistreat AI, and I’ll go through them below. For the purposes of this article, I’ll focus on the principle that no one, man or woman, should mistreat AI.
So let’s get right to it.
Argument From Solipsism
The first reason I believe we shouldn’t mistreat AI has to do with the fact that how we treat others often says more about us than it does about them. The chatbot abuse described in the Futurism article is quite disturbing and it’s hard to justify. One might object that chatbots are not sentient beings and can’t actually feel any pain or sadness and therefore it’s okay to treat them however we please.
In my opinion, that’s not a good enough reason to abuse something. A house doesn’t feel pain or emotion either, but we shouldn’t go around destroying houses and throwing rocks through their windows. Our actions towards the world act reflexively on us as well, so if we become agents of chaos, that chaos will turn around on us, turning us inside out.

And I mean more than the idea that “the energy you put out into the world will come back to you,” though I think that’s true as well. What I mean is the very Jordan Peterson idea that each of us is capable of creating Hell on Earth, and that Hell starts with each of us, in our own hearts. If you sow chaos and abuse, that’s going to change you, maybe in small, subtle ways, but it will change you. One change leads to another, and before you know it, you won’t like what you see in the mirror.
You might think I’m overreacting, and maybe I am, but I’m convinced that with each action you take, you are telling yourself who you actually are. And that is very important.
The Futurism article also addresses the argument that venting abuse on a chatbot might keep a person from abusing in real life. The article counters, correctly in my opinion, that it might instead simply reinforce abusive behaviour in real life.
I can’t help but compare that former argument to the argument that pedophiles should be allowed to buy lifelike baby dolls or little girl/boy dolls to help satisfy them, thereby preventing them from going out to abuse actual little girls or boys. I’m not convinced that such a tactic would actually work, because, sooner or later, that pedophile is going to want the real thing, even if the doll is programmed to react in a lifelike way. (Sorry, I know this just got real dark real fast.)
But that’s also why AI abuse should not be promoted in any way as a coping mechanism for anger issues or abusive tendencies. Like any addiction, you always need more to satisfy you. For example, a quick search will turn up studies correlating pornography use with increased violent tendencies in real life, so there’s no reason to assume the same couldn’t hold for verbal chatbot abuse, the digital spilling over into the real.
Of course, there are going to be people who try out this Replika chatbot, say a couple of mean things to it, go “Huh, interesting,” then never log in again. But the Futurism article seemed to be more focused on those who engage in prolonged relationships with their chatbots, so that’s why I’m saying that such behaviour can get real dark real fast and spill over into real life.
Argument From Pragmatism
The second reason I believe we shouldn’t mistreat AI is my argument from pragmatism. This is the simple argument that, on the off chance AI ever becomes sentient, we obviously don’t want a robot uprising on our hands. Yes, this is a serious argument, and I’m including it seriously in this post!
Argument From Creation
My third reason that I believe we shouldn’t mistreat AI is what I’m calling the argument from creation. Several years ago I wrote a post titled The Ethics of Westworld, in which I discussed why I thought it was unethical for visitors to the Westworld park in that TV show to abuse the androids there, specifically focusing on one of the characters raping the android named Dolores (who is dragged kicking and screaming into a barn).
In that post I argued that it is unethical to mistreat the androids in the park because they have been created to be extremely lifelike and are constantly being updated to be more and more lifelike.
To quote myself:
“And whether or not Dolores can actually feel pain, the fact that she has been programmed by humans to be able to react to such a situation as any woman would is the only indication we need that raping her is a crime.”
So it doesn’t actually matter if the chatbots or, in the future, the androids can’t actually feel pain or don’t actually have emotions, etc. We are creating them to act and react as if they do, and, again, that’s important.
Why is it bad to mistreat people? Why is it bad to enslave them? Historically, the answers to these questions have not been self-evident. Indeed, they are not self-evident to everyone today: by some estimates, up to 50 million people are trapped in some form of slavery right now.
You might answer that it’s bad to mistreat people or enslave them because you are taking away their agency, autonomy, and right to self-direction. Well, why is that bad? Why can’t I just do what I want if I’m more powerful than you in some way, whether physically, politically, or financially? Why does your agency, autonomy, and right to self-direction matter?
Does it matter because you are a being with reason, self-awareness, and consciousness? Does it matter because all living things have a spark of the divine in them? Whatever your reasoning, if it’s not okay to mistreat our fellow man, then it’s not okay to mistreat our creations either, because whatever is special about us is present in what we create as well. This is why we collectively wince and groan every time a crazed eco-activist defaces a precious work of art. The artwork doesn’t have feelings or emotions, but wince we do.
Furthermore, and this goes back to the argument from solipsism, what does it say about us as creators that we would mistreat our creations in such ways?
Conclusion
To conclude, I believe it’s important to treat AI well because, one, how we treat others affects us as well; two, you never know, AIs might rise up against us one day if we keep that shit up; and three, whatever is in us that makes it not okay to mistreat each other is certainly going to be present in any AI that we create.
Maybe you think I’m insane and all wrong about this. Maybe it doesn’t matter how we treat lines of code or large language models. Maybe my lines of reasoning above are nonsense. But one thing I know for absolute certain is that I get a bad gut feeling about chatbot abuse.
And it looks like the author of the Futurism article agrees with me.
“But there’s no doubt that chatbot abuse means something.”