No one seems to find Star Wars’ C-3PO eerie, spooky, or creepy. Build a humanoid social robot, however, and the world goes crazy. That’s what happened with the humanoid doppelganger robot Nadine, which has made a lot of people uneasy.
Whether it’s their appearance or the ethical questions about how they may be used, robots that look a lot like us provoke sharply divided reactions.
The latest social robot to make waves and headlines is Nadine, created in the likeness of Professor Nadia Thalmann of Nanyang Technological University (NTU Singapore). The professor, the robot’s creator, has programmed Nadine with intelligent “assistant” software, much like Apple’s Siri or Microsoft’s Cortana.
But Nadine is more than an intelligent personal assistant: she can also express a range of emotions and moods, and she has been given the ability to remember previous conversations and the people she had them with.
However, the controversy isn’t so much about the human-like appearance of such robots as about the ethical and moral questions that arise over how they will be used. For instance, is it right to encourage an elderly person to share their personal concerns with a robot wearing a human face?
Given that humanoid robots like Nadine can be programmed to assist children with certain applications, would it be right to let them interact with our young ones? Is it ethical to use robots to fill personal and social needs at all?
Robotics enthusiasts have had nothing but praise for Nadine so far, but the blowback on Twitter was rather harsh. According to Sherry Turkle, professor of the social studies of science and technology at MIT, there’s a psychological reason why anthropomorphized robots make us shiver.
Sigmund Freud described the unsettling feeling of seeing something not quite human as “the uncanny”; roboticists took the idea a step further and named the “Uncanny Valley” – the gap between feeling comfortable with something and feeling eerie unease.
This is why, she argues, it isn’t ethical to market these robots to children and the elderly as viable new companions: we humans look into eyes and expect to be able to read the other person’s face.
But with doppelganger social robots, our brains fail to detect any emotion, and that failure sends a warning shiver up the spine. According to Turkle, just because we can create something like this doesn’t necessarily mean that we should.