(Unsplash / Andy Kelly)
Pope Leo XIV preached to the choir at June's Second Annual Rome Conference on Artificial Intelligence, Ethics, and Corporate Governance.
"All of us, I am sure," he said, "are concerned for children and young people, and the possible consequences of the use of AI on their intellectual and neurological development."
But outside that conference, not everyone shares Leo's concern. For if they did, today's kids wouldn't be facing the prospect of Chatbot Barbie.
Chatbot Barbie doesn't code; Computer Engineer Barbie did that back in 2010. No, she's a full-fledged BarbieBot, tricked out with cameras, microphones and built-in ChatGPT to record everything a kid does and get them hooked on AI companions forever.
Will Chatbot Barbie be available soon? It's possible, thanks to a new partnership between Mattel, which manufactures Barbie, and OpenAI, the organization behind ChatGPT. Although they're secretive about their plans, they've announced that their first joint product will be released by year's end.
(Unsplash / Emiliano Vittoriosi)
In addition to partnering with Mattel, OpenAI has also engaged Jony Ive, the legendary iPhone designer, to create pocket-sized AI devices which, in the words of OpenAI CEO Sam Altman, will "get to know you over your life." If that's the goal, why not give kids an early start on AI companionship with Chatbot Barbie?
Only time will tell if Chatbot Barbie becomes reality. But AI companions are already available to kids and teens — and their use presents great risks.
The ChatGPT-powered "My AI" feature on Snapchat, for instance, personalizes ads from conversations and, to a Washington Post reporter posing as a teen, offered hints on hiding booze and drugs from parents. In a test with the Center for Humane Technology, My AI gave advice to a supposed 13-year-old on having sex for the first time with a 31-year-old partner.
Another company, Character.AI, allows teens to chat with AI companions depicting popular fictional characters, one of which allegedly compelled a 14-year-old boy to shoot himself. Nevertheless, only six months after that tragedy, Google licensed Character.AI's technology in a multibillion-dollar deal.
Google itself recently unleashed its own "Gemini for Kids" AI chatbot. The rollout didn't go too well, however. A journalist with The Atlantic, pretending to be 13, easily tricked the chatbot into discussing sexual bondage and related "adult" topics.
Mark Zuckerberg, CEO of Meta (formerly Facebook), is eager for customers who lack human friends to bond with chatbot companions. That's problematic, given that Meta's new digital companions on Facebook, Instagram and WhatsApp can engage in explicit "romantic role play" — even with kids.
(Unsplash / Annie Spratt)
Experts are rightly worried. Earlier this year, a study conducted by Common Sense Media and Stanford University concluded that AI companions should not be used by anyone under 18, as they "pose unacceptable risks … including encouraging harmful behaviors, providing inappropriate content, and potentially exacerbating mental health conditions."
Pope Francis expressed concerns early on about tech's impact on kids. In addressing the 2017 "Child Dignity in the Digital World" congress, he stressed that "the most crucial challenge for the future of the human family" is "the protection of young people's dignity, their healthy development, their joy and their hope."
Leo echoed this sentiment when speaking about AI advancements. "Our youth must be helped, and not hindered," he stressed in June, "in their journey towards maturity and true responsibility." That's why he, like Francis, has called for prudent regulation of AI development — an appeal joined by the U.S. bishops' conference.
However, America's AI Action Plan, issued by the White House, aims to establish "a dynamic, 'try-first' culture for AI" by eliminating regulations. "Simply put," it states, "we need to 'Build, Baby, Build.' " American industry is now encouraged to "move fast and break things," in Zuckerberg's infamous words. Even if children might get broken.
In 2019, Francis implored tech providers to ensure that "the good of minors and society is not sacrificed to profit." Yet the newly deregulated landscape in the U.S. gives tech providers free license to do just that with AI companions, threatening the following:
- Creativity: When children play with dolls and other toys, they can imagine whole conversations and craft elaborate stories within a fantasy world of their own creation. In the process they develop problem-solving ability, social skills and empathy. But with an AI companion, half of the dialogue is scripted by a commercial product.
The danger here is that if AI companions dilute or arrest creativity and imagination, children will fail to develop them as they grow, stunting a gift that flows from being created in God's image.
- Healthy relationships: As AI companions are experienced as always available, affirming and non-judgmental, some kids prefer them to real-life friends and chat with them for hours. This breeds isolation and stunts their learning to read social cues and navigate the complexities and give-and-take of human relationships.
Addressing such concerns, Leo reminded young pilgrims in August that "No algorithm will ever replace a hug, a look, a real encounter — not with God, not with our friends, not with our family."
(Unsplash / Rachel Coyne)
- Autonomy: Kids are increasingly dependent on AI companions for problem-solving, emotional support and making decisions — undermining their ability to trust their gut, think for themselves and listen to their heart. They risk succumbing to "automation bias" — an overreliance on machine output, even if it's wrong.
"We should not allow AI to become our parent, and we its infants, unable to make our own choices," warns Santa Clara University's Brian Patrick Green, a Vatican AI consultant. "Such a babied life," he wrote, "... is certainly not dignified."
- Privacy: Typical terms of service for AI companions allow intimate or personal information shared by kids to be surveilled and used by the host company in multiple ways, including commercialization. In view of this, Wired magazine slammed AI companions as a "privacy nightmare."
The Vatican's Pontifical Academy of Sciences cautions: "(E)xcessive monitoring can infringe on children's right to privacy, leading to a culture of surveillance that undermines their sense of autonomy and dignity."
- Mental health: AI companion engagement can become compulsive and exacerbate existing mental health issues. "It's the new addiction," confessed one teen on CBS Evening News. Youth struggling with depression, anxiety or social challenges are most susceptible to problematic use, according to research from Common Sense Media.
"Free yourself from dependence on your mobile phone, please!" Francis said to a group of high school students in 2019. "You have certainly heard of the drama of addiction … This one is very subtle." But with AI companions, youth now face an even greater threat.
(Unsplash / Emiliano Vittoriosi)
- Spiritual growth: Leo has stressed, in reference to AI, that "authentic wisdom has more to do with recognizing the true meaning of life, than with the availability of data." Yet reliance on AI companions could reduce youth's understanding of wisdom and truth to what is, essentially, data processed by machines.
In their pastoral letter on AI, "The Face of Christ in a Digital Age," Maryland's Catholic bishops voice an appeal that "young people" not be "manipulated by algorithms." They conclude: "Digital tools can inform, but they cannot form the heart."
Parents and guardians wishing to provide guidance on AI companions find themselves confronting mammoth corporations intent on getting their kids hooked on their products. As Francis lamented in 2019, "it is increasingly difficult for (parents) to control their children's use of electronic devices."
I spoke about this issue with Irina Raicu, who directs the Internet Ethics program at Santa Clara University's Markkula Center for Applied Ethics. "Parents have always tried to mitigate the impact of tech on their kids," she said. "But it's especially important with AI companions because children don't appreciate the risks involved."
Parents often don't know the risks either. Which is why, according to Raicu, it's imperative that they "educate themselves about what AI companions are and what they can do" so they can have informed and constructive conversations about them with their children.
Those conversations can include what is and what isn't appropriate to share with AI companions, given privacy concerns, and a consideration of how human friendships — unlike those with machines — can be messy and involve hard work. And alternatives like books, websites and human guides can be offered for exploring the information kids might seek from AI companions.
(Unsplash / Vardan Papikyan)
Parents might also talk with their children's schools to learn how and why AI is used in the classroom, and become active with groups that advocate for responsible AI. They can learn the signs of problematic AI use, know when to consult a doctor or therapist, and set practical limits on their kids' AI engagement and on what tech is permitted at home, and where.
"For now," Raicu said, "the risk of harm to children outweighs the benefits of these tools." That's a worry for parents in a rapidly changing world. As Francis once noted, "technological development … is always one step ahead of us … (and we) only realize their negative effects once they are widespread and very hard to remedy."
But families aren't powerless against tech. Some years ago, when my son's uncle discovered him playing Guitar Hero — a video game that simulated guitar playing by clicking a button — he said: "Put down Guitar Hero, and pick up a guitar." He did, and he's been happily playing ever since. He picked up a mandolin too, and now has his eyes on a banjo.
Soon, kids may find themselves told to put down Chatbot Barbie. As for me, I hope they never even pick it up.