In this episode of The Gentle Rebel Podcast, we explore the Uncanny Valley. The episode was inspired by a video I published about Apple's marketing campaign for Apple Intelligence. It turns out I'm not the only one unsettled by their approach.

It's interesting to contrast those Apple commercials with the ones for Google Gemini. Apple presents its AI technology as a tool for masking personal flaws and promoting insincerity. Google, by contrast, frames AI as a social companion that enhances self-expression, giving advice and making suggestions to help users work towards their goals. In other words, Gemini is framed as a support that builds people's competence, confidence, and knowledge, whereas Apple helps people deceive and pretend to be more skilled and knowledgeable than they really are.

A Voice From The Uncanny Valley

The Gemini adverts got me thinking about the Uncanny Valley. There is something eerie about the way they demonstrate the technology, not least seeing users glued to a phone with a friendly, disembodied humanoid on the other end. They have anthropomorphised the technology, giving it an uncannily human voice and the role of constant companion. It is a friend, teacher, mentor, cheerleader, and coach: the ultimate human! Or perhaps not quite human.

The Uncanny Valley hypothesis, coined by robotics professor Masahiro Mori in 1970, describes the discomfort humans feel toward entities that are almost but not fully human. The valley appears when something moves from anthropomorphised traits (as seen in animations of talking animals, cuddly toys with facial features, and the thoughts and feelings we project onto our pets) to unnervingly realistic human characteristics. These nearly-humans freak many of us out. But why do some of us seem more affected than others?
High Sensitivity and The Uncanny Valley

Those who score higher on the sensitivity scale (Highly Sensitive People) may experience the uncanny valley more intensely because of their deeper sensory processing and emotional attunement. HSPs may be unsettled by artificiality, preferring clear distinctions between what is and isn't "real". It's interesting to consider this a foundational biological survival instinct rather than a matter of ethics or morality. In other words, highly sensory people unconsciously scan the world around, within, and between us, looking for signs of safety and danger. When we encounter something that seems real but doesn't feel right, it can leave us unsettled, prompting us to investigate whether an impostor lurks within.

HSPs process information deeply and are attuned to subtle sensory cues. We might detect unnatural contradictions, such as a mismatch between tone and body language, at a subconscious level. This attuned sensitivity can lead to unease during interactions with AI chatbots and humanoids, where inconsistencies create discomfort even when they are not immediately apparent. As AI technology advances, the line between human and machine becomes more blurred, making it harder for HSPs to discern artificiality.

Why Do We Make Machines in Our Image?

The tendency to anthropomorphise technology, creating machines that mimic human behaviour, raises questions about our desire to replicate human characteristics in machines. It's strange! Why do we do this? Maybe it's some "god complex", or perhaps we are simply trying to figure out what it means to be human by considering what is still missing from creatures that look and sound like us. But we don't need to do this, and the uncanny valley hypothesis suggests we would be more successful at trusting technology if we didn't try to make it in our image. Think about fictional droids like R2-D2 and BB-8 in Star Wars. They are loveable despite, nay, because of their non-human forms.
Yet they have distinct personalities and a range of emotional expressions. On the other hand, more humanoid machines like C-3PO can be profoundly irritating despite havi...