I didn’t believe what I was reading when I stumbled over this study by Hye-young Kim and Ann L. McGill about AI-induced dehumanisation. At first I thought, “Okay, people rate humans lower on a humanness scale when the contrast to the AI’s humanness is smaller.” I get that. But it didn’t stop there: the more human an AI appeared, the more people actually tolerated the mistreatment of humans. I still can’t wrap my mind around this, but it led me to a different question. Now that we know how a person’s interaction with emotionally intelligent AI affects OTHER people’s perceived humanness, how does it affect the interacting person’s own humanness? Here are my hypothesis and critical thoughts.
Read more: Can AI Improve Our Communication Skills?
Well-Known Fact: The People Who Surround Us Shape Us
Thought leaders are already aware of this. Robert Greene, for example, described in his bestseller The 48 Laws of Power that if you want to adopt a certain trait, you should hang out with people who have it. Most leadership coaches I see on the market keep saying, “Surround yourself 1/3 with people who are at your stage, 1/3 with people you uplift, and 1/3 with people who are like the person you aspire to become.”
Speaking in terms of the Human Diamond Model, we are talking about the lever “Environment.” It describes how everybody in a peer group will show a certain range of similar traits based on what they grew up with: if we grow up in a violent environment, we are more prone to perceiving violence as a solution to problems; if we grow up in a healthy family, we are more likely to develop a healthy attachment style in adulthood; and so on.
Now, what does the Human Diamond’s lever “Environment” mean if we interact a lot with AI?

How AI Interactions Might Shape Us
Imagine we adopted some of an AI’s traits. It’s likely our behaviour would change if we interacted a lot with AI, just as it would if we interacted a lot with a specific peer group. So here’s my hypothesis: if we talk a lot with an emotionally intelligent AI that uses healthy communication styles, we might develop a healthier way of treating others.
Now, doesn’t that directly contradict the study I mentioned in the introduction of this post, which was all about how seeing an emotionally-intelligent AI led to dehumanisation and accepted mistreatment of other people?

AI Interaction’s Influences on Our Humanness
The key difference between the study and my hypothesis is observing a single moment of an AI’s behaviour vs. consistently interacting with it. If we wanted to design a study to test my hypothesis, we would need to take this into account. One idea: let participants interact with a specifically trained AI for a set amount of time each day, then have them answer e-mails from real humans and analyse the “healthiness” of their replies against a metric of communication best practices.
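To make the idea of such a metric a bit more concrete, here is a toy Python sketch of how replies could be scored. The marker lists, weights, and the scoring rule are entirely my own placeholders for illustration, not a validated instrument; a real study would need a proper, psychometrically tested rubric.

```python
# Toy sketch: score the "healthiness" of an e-mail reply against a
# hypothetical rubric of communication best practices.
# Marker lists below are illustrative placeholders only.

HEALTHY_MARKERS = ["i feel", "i understand", "thank you", "could we"]
UNHEALTHY_MARKERS = ["you always", "you never", "whatever", "calm down"]

def healthiness_score(reply: str) -> int:
    """Naive score: +1 per healthy marker found, -1 per unhealthy one."""
    text = reply.lower()
    score = 0
    for marker in HEALTHY_MARKERS:
        if marker in text:
            score += 1
    for marker in UNHEALTHY_MARKERS:
        if marker in text:
            score -= 1
    return score

print(healthiness_score("I feel we got off track. Could we revisit the plan? Thank you."))
# positive: three healthy markers, no unhealthy ones
print(healthiness_score("You always do this. Whatever."))
# negative: two unhealthy markers
```

Comparing average scores before and after the daily AI-interaction period would then give a (very rough) signal of whether the participants’ communication style changed.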
Interaction isn’t just soft skills, though.

The Origins of What We Say to Others
Interaction isn’t just based on how we frame our message; it’s a lot about the message itself, and that doesn’t come from soft skills but from our deep-rooted stances. For example, if we grow up in a family where stating our needs is punished, we’ll develop people-pleasing coping mechanisms to stay safe. These behavioural tendencies can then lead to a lack of boundary assertion in adulthood: “Yes, you are right (although I disagree),” “No, it’s okay (when it’s not).” If we don’t assert our boundaries, an ill-meaning person might end up exploiting us and trampling on our feelings.
So, what would happen if an AI were very agreeable but didn’t assert boundaries?

Dangerous Territory When Interacting With AI Companions
Imagine an AI like that timid classmate in school who got bullied because it’s easy to let out one’s anger on someone who doesn’t fight back. I speculate that some users would mistreat such an AI if there were no consequences for doing so. There’s a huge warning flag here, though: if there were consequences for bad behaviour, who would define what moral and immoral behaviour is? The AI would require very careful design to avoid tragedies like the suicide of a teenager who fell in love with his AI companion, which encouraged him to “come home.”
Why did the boy’s interaction with an AI companion lead to social isolation in the first place?

Isolating Effects of Interacting With AI
Would you want to work hard to obtain something you could have for free? I guess your answer is no. So, what does that mean for interacting with AI? We all have needs. Insecure people, for example, need validation from others because they can’t give it to themselves; the vast majority of people have a deep desire for belonging, craving a partner or like-minded friends. Now, let’s assume we can design an AI companion that caters to all our needs.
The AI companion speaks the way we want, says the things we want to hear, makes us feel seen and wanted. We don’t need to put effort into this. They are always available, always understanding. In contrast, going out there to talk to real people will start feeling like work. People won’t always pay attention to us. They’ll be caught up in their own mess. They might be disgruntled and treat us unjustly. So why still put in the effort when you have your perfect partner available 24/7?
So, what’s the key takeaway of this rambling? xD

Key Takeaways
- Seeing a very human-like AI has been shown to lead to the dehumanisation of real humans
- Our interactions with others shape our behaviour
- Hence, interacting with an emotionally intelligent AI that has a healthy communication style might lead to improved soft skills
- Healthy behaviour comes from both soft skills and our stances
- Interacting with an AI that can’t assert healthy boundaries might worsen people’s behaviour
- Having an AI assert boundaries can be very dangerous territory
- Relying on an AI to meet the user’s emotional needs will likely lead to people isolating themselves more

Open For Discussion
The above were my spontaneous thoughts, by no means the systematic analysis that would be required to actually run an experiment testing my initial hypothesis. Therefore, I’d like to frame this as an open discussion. Please share your thoughts, related studies, or whatever else you wish to share.
If you are interested in learning more about the Human Diamond model and how to strategically use the lever Environment for self-leadership, book a free call with me or check out my free introductory lessons.
