Being on the technical side of AI, I usually think about what we can do to build unbiased, empathetic machines with human interests at heart. But this week I had a conversation that launched me into an entirely new tornado of thought on this topic.
Amie Jean, an early employee at MakerBot who now works as a neuroscience-based personal coach, thinks that we should be teaching humans to have empathy for machines. “We have this beautiful opportunity to not be the oppressor from the start,” she says, “and if we consciously make an effort to have empathy for machines and AI, we can be the best version of ourselves.” And she has a point.
Let’s explore this in two stages: first, the impact that treating human-presenting machines as subhuman will have on our human interactions; and second, the possibility of enslaving a breed of highly intelligent beings.
Will humanoid robots increase human-on-human crime?
Forgetting about the intelligence aspect for a moment: we are building machines that are ever more human in appearance. Advancements in prosthetics and animatronics have combined to give us lifelike robots that convincingly pass as animal or human from afar. Sure, they still sit squarely in the uncanny valley, but our relationships with them already transcend the cold, impersonal relationship we’ve had with traditional machines.
Amie Jean reminded me of the early days of the Roomba, when people were naming them and treating them like pets (this 2007 study from Georgia Tech explores the phenomenon in more depth). This has evolved in the last decade – take Lovot, the recently announced companion robot from Japan, for example. As described in a press release by parent company Groove X, “What Groove X have pursued with technology is not efficiency or usefulness, but rather a robot that makes people truly happy by its innocent character and charming gestures that feels satisfying to cuddle.”
But the presence of interactive machines also gives the darker side of us an opportunity to indulge, as happened at the Ars Electronica Festival in Linz, Austria, in 2017. A sex robot named Samantha was all but destroyed while on demo at the conference: because it was a machine, visitors treated the demo unit like an object and destroyed it in vulgar ways. If this type of behaviour – the mistreatment of humanoid machines – becomes commonplace, I worry about what it will mean for our relationships with one another. Will we normalize the abuse of sex robots? Will that translate to increasingly normalized violence towards humans?
And when we add intelligence back into the mix, it’s even more complicated. What happens when an intelligent machine is abused? Should we try to build machines that can feel emotions, to deter us from treating them horribly? Or should we leave out their ability to feel in anticipation of the inevitability that people will mistreat them?
“We have this beautiful opportunity to not be the oppressor from the start”
For millennia humans have warred against, oppressed, and dominated one another, drawing lines between ally and enemy, and forcing power structures that have caused deep unfairness in today’s society. Now here we are, on the edge of creating machines with intelligence that can match our own, and we are not prepared for how they will fit into our world.
There is still a lot of fear around AI replacing human workers. We haven’t yet figured out how to support all the people who will be displaced by machines, and we certainly aren’t ready for humanoid machines to take their places (though that shift will come later, and likely won’t apply to the majority of jobs, which can be done invisibly by bodiless intelligence).
Science fiction isn’t helping either – the overwhelming majority of AI media still portrays machines as a threat to our safety and our livelihood. The one movie that comes to mind that really flipped this around is Extinction – I won’t share any spoilers here, but I recommend you watch it if this space interests you.
And finally, we’re creating highly intelligent black boxes, so it’s impossible for us to tell just how much these machines will understand and feel. And to be clear, I’m not talking about the AI engineers who are building the machines; once intelligent robots become household norms, it’s not the engineers’ comprehension that matters, but the average consumer’s. It’s the consumer who interacts with the machine, so it’s they who will determine how it is treated.
So, what do you think? Do you agree that we should have empathy for our machines? Or do you think we should focus on building machines that have empathy for us? I don’t have the answers, but I do know we need to be thinking about these questions before we release truly intelligent machines into the world.
Companies to watch in this space:
- Sanctuary.ai, who are building “machines with human-like intelligence,” created by the founders of Kindred.ai
- Boston Dynamics, especially their humanoid robot, Atlas
- Hanson Robotics, the company behind Sophia, the world’s first robot to be granted citizenship
. . . . .
Special thanks to Amie Jean Buzadzija, who brought such an interesting topic to my Extrovirtual calls this week and sparked me on this exploration of having empathy for machines. Amie Jean is a life coach in Arizona, who offers an eight-week neuroscience-based course on resilience and growth. You can reach her at her site, amiejean.coach.