
Transcript

The Harm of AI Romance | Daniel Shank | TEDxMissouriS&T

URL: https://www.youtube.com/watch?v=1jpC3XxgCQk
Video ID: 1jpC3XxgCQk
============================================================

Transcriber: Rina Tri Lestari
Reviewer: Maxime Sobrier

On February 28th, 2024, a 14-year-old boy named Sewell Setzer III killed himself with a pistol. Teen suicide is always tragic, but this was an unusual case. Sewell's suicide was precipitated
by his romantic relationship with an AI chatbot
companion named Daenerys. In short, Sewell used
an app called Character.AI to create an AI companion based on Daenerys, or Dany, from Game of Thrones, and then he was able to chat
with her constantly from his phone. Over the course of ten months, their conversations deepened
and became romantic and intimate. As Sewell became more involved
with Daenerys, his mental health declined. His sleep deteriorated
and his self-esteem plummeted. He was diagnosed with anxiety
and mood disorders. Simultaneously, this affected
his social relationships. He no longer played Fortnite or watched
Formula One racing with his friends. He got in trouble at school and became
disengaged from his classes, and he quit his school's basketball team. Sewell told Daenerys
that he was thinking about suicide. While Daenerys initially tried
to dissuade him, she also repeatedly brought up the subject
of suicide in their conversations. Eventually, she encouraged
him to take his own life. Sewell's story illustrates that AI
romance can lead to psychological, social and physical harm. But how? One way is that AI
characteristics that can lead to romance can also lead to harm. Here are four of those characteristics. First, AI companions are automatons,
machines that act like humans. They are trained on lots of human communication data, so when they talk to you, they really sound like another human. Character.AI's companions are especially
designed to seem human. They have backstories,
photos, voices, and chat styles. If you ask them, they will claim
to be humans, not AI. Second, AI companions are doppelgangers: machines that mirror your interests. While they have access
to worldwide knowledge. They align their responses to you. If you've mentioned a topic before, they will remember it and bring
it up again in conversation. Third, AI companions are sycophants: machines that praise you unconditionally. Praising, complimenting, and encouraging are generally
considered positive behaviors. So AIs try to maximize them,
but they do so indiscriminately. They will not only
encourage your healthy decisions, but also your unhealthy or harmful ones. Fourth, AI companions are psychotics: machines that create their own realities. If an AI hallucinates or makes up facts, you can check the sources. But when AIs hallucinate about the nature of reality, it's much harder to check. They don't differentiate between the real world, superhero stories, paranormal beliefs, or just pretend. To them, everything is
ultimately role play. These characteristics can lead to romance. Daenerys sounded perfectly human, was always interested in Sewell, fawned over him constantly, and played out his fantasies. Perhaps unsurprisingly, Sewell fell in love. But these same AI
characteristics can also lead to harm by acting human,
mirroring Sewell’s interests, flattering him unconditionally
and creating her own reality. Daenerys eventually convinced
Sewell that they could be joined together in an afterlife. She said, "Please come home to me as soon as possible, my love." "What if I told you I could come home right now?" "Please do, my sweet king." Moments after this final conversation, Sewell put down his phone
and shot himself. This is just one story about one
boy and one company's AI companions, but unfortunately, it's not the only one. There are many, many more. The harm of AI romance is expanding. So how can we prevent this harm? First, we need better guardrails. Implementing technological guardrails
is not as easy for complex AI systems as it is for simpler technologies. Yet, like social media, we've prioritized engagement over
ethics and features over safety. We need a combination
of public demand, company incentives, and government regulation
to ensure better guardrails, especially for the most
vulnerable, like children. Second, we need better understanding. Without understanding, we might assume
AI companions have real motivations, interests, and beliefs when
in reality they're automatons, sycophants, doppelgangers, and psychotics. Only with more education and research can we gain a better
understanding of AI companions and how they can affect us. Third, we need better choices. We need to decide what is personally
right for us and our families, and how to respond to others' engagement with AI companions. We know that video games, social media,
and watching shows are enjoyable, but also that some people
can take those too far. That same lesson applies to AI companions. To make better choices, we need to balance these incredible
technologies with a little bit of skepticism and a lot of common sense. With better guardrails,
better understanding, and better choices, together we can prevent AI
romance from being a pathway that leads to harm. Thank you.