I Can’t Watch The Polar Express
Have you ever watched the movie The Polar Express? Roger Ebert gave it 4 out of 4 stars for many good reasons, but I personally couldn’t get past the creepy visuals. The computer-animated characters had zero life and made my skin crawl. I’m not alone in this response. In a post entitled 10 Things That Still Bother Me About Polar Express, Nadia Osman correctly observes, “Everyone, even the children, look as though they were injected with too much Botox. It’s beyond creepy, it’s lifeless. It might as well be a holiday themed episode of The Walking Dead.” Exactly. As Erika Marchant explains, the movie is one of many that pushes its viewers into an uncanny valley.
The Uncanny Valley and its Mechanisms
The Uncanny Valley hypothesis was put forward by Masahiro Mori in 1970 (Mori, MacDorman, and Kageki 2012). He proposed that entities that lie close to human in their properties but that aren’t human will provoke aversion in observers. It’s called a valley because of the visual depicted above, the idea that some entities fall into a crevice of aversion lying next to humans on the continuum of human similarity. In a recent paper in press at Frontiers in Psychology (see this preprint here) headed up by Megan Strait, we sought to address two unanswered questions about the uncanny valley:
- Does the uncanny valley prompt behavioral withdrawal?
- Why is there an uncanny valley?
We addressed these questions using a simple but compelling task in which participants viewed a set of 60 pictures, each of which depicted a robot or a human falling into one of three categories.
As you can see to the right, pictures of robots (top row) were prototypically robotic (i.e., low in human similarity; left) or high in human similarity (middle and right). Half of the robots that were high in human similarity had atypical features, meaning features representing a blend of different kinds of entities (for example, a robot with a human-looking head atop a mechanical body); the other half exhibited category ambiguity, meaning there was some confusion about what kind of entity they were (for example, an android designed to look human).
Pictures of humans (bottom row) were prototypically human (right) or depicted a person with a unique trait that made them less prototypically human (left and middle). Half of the humans with a unique trait paralleled the robots with atypical features (for example, a human with a prosthetic limb); the other half paralleled the robots exhibiting category ambiguity (for example, a human wearing black scleral contacts).
1. Does the uncanny valley prompt behavioral withdrawal?
To get at the question of whether the uncanny valley prompts behavioral withdrawal, we told our participants (72 adults) that they could press a button to remove each picture from the screen. If they pressed the button, the picture would disappear and they’d see a blank screen for the rest of the 12-second viewing time; if they didn’t press the button, the picture would stay on screen for the full 12 seconds. After 12 seconds elapsed, we asked participants to indicate why they did or didn’t press the button, and how eerie they found each agent.
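The trial logic can be summarized in a small sketch. This is not the study’s actual task code (which isn’t published here); it’s a hypothetical Python illustration of the rule described above, where `press_time` stands in for whenever (if ever) the participant pressed the removal button.

```python
VIEW_TIME = 12.0  # seconds each picture may remain on screen

def trial_outcome(press_time, view_time=VIEW_TIME):
    """Classify a single viewing trial.

    press_time: seconds until the participant pressed the removal
    button, or None if they never pressed it. (Hypothetical helper,
    for illustration only.)
    """
    if press_time is not None and press_time < view_time:
        # Picture removed early; a blank screen fills the remainder
        # of the 12-second viewing window.
        return {"removed": True, "picture_visible_for": press_time}
    # No press: the picture stays up for the full window.
    return {"removed": False, "picture_visible_for": view_time}
```

Pressing the button thus shortens exposure to the picture but not the trial itself, which is what lets removal rates serve as a behavioral index of withdrawal.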
If the uncanny valley prompts behavioral withdrawal, we would expect that entities that are high in human similarity should be perceived as eerier than other types of entities and that people would press the button to remove them from the screen more frequently, primarily because they’re unnerved. And that’s basically what we found when collapsing across the robot and human stimuli in the high human similarity category.
Entities that were high in human similarity were perceived as most eerie. Importantly, people pressed the button more frequently to make them go away (at least compared to the rate at which they did so in response to prototypical robots low in human similarity), and they did so due to being unnerved.
2. Why is there an uncanny valley?
We examined two reasons that may contribute to the uncanny valley, namely the presence of atypical features and the presence of category ambiguity. If best explained by atypical features, uncanny valley effects should be more pronounced for entities with atypical features than for those with ambiguous features. But, if best explained by ambiguous features, uncanny valley effects should be more pronounced for entities with ambiguous than atypical features. What we discovered is that both mechanisms are in play depending on the type of entity, robot or human.
For robots, shown on the left side of the figure, atypical entities (like the robot with a human-looking head atop a robot body) were perceived as most eerie and prompted the greatest behavioral withdrawal, significantly more than ambiguous entities (like the android designed to look human).
For humans, on the other hand, shown on the right side of the figure, ambiguous entities (like the person with black scleral contacts) were perceived as most eerie and prompted the greatest behavioral withdrawal, significantly more than atypical entities (like the person with a prosthetic limb).
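The comparison logic behind this pattern of results can be sketched in a few lines. The ratings below are invented illustrative numbers, not the study’s data; the point is only the structure of the inference: for each entity type, ask which feature type draws the higher mean eeriness.

```python
from statistics import mean

# Hypothetical eeriness ratings by condition -- illustrative values
# only, not the published results.
ratings = {
    ("robot", "atypical"):  [8, 7, 9, 8],   # e.g., human head on mechanical body
    ("robot", "ambiguous"): [6, 5, 6, 7],   # e.g., human-looking android
    ("human", "atypical"):  [4, 5, 4, 5],   # e.g., person with prosthetic limb
    ("human", "ambiguous"): [8, 7, 8, 9],   # e.g., black scleral contacts
}

def dominant_mechanism(entity):
    """Return which feature type yields higher mean eeriness for an entity type."""
    atypical = mean(ratings[(entity, "atypical")])
    ambiguous = mean(ratings[(entity, "ambiguous")])
    return "atypical features" if atypical > ambiguous else "category ambiguity"
```

With numbers shaped like the reported pattern, the comparison flips across entity types: atypicality dominates for robots, ambiguity for humans.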
Based on the results of this study, it appears that the uncanny valley isn’t just a state of mind; it’s also reflected in behavior. And it seems to be driven by both atypical features and category ambiguity, at least for the entities we studied. If these results bear out in future replications, they have some interesting implications for the design of artificial entities. Consider The Polar Express. Would those creepy computer-animated characters be less creepy if they fell farther away from the boundary that visually separates them from prototypical humans? Maybe! In general, robotics engineers, game designers, and computer animators who attend to the qualities that steer clear of the uncanny valley may be more likely to produce characters that consumers find palatable.
Got a good uncanny valley story or a link to great examples? Share them in the comments below or hit me up on Twitter (@HeatherUrry).
Mori, Masahiro, Karl F. MacDorman, and Norri Kageki. 2012. “The Uncanny Valley [from the Field].” IEEE Robotics & Automation Magazine 19 (2): 98–100.