Eldest came up to me Wednesday evening to show me a Snapchat conversation thread she thought I’d be interested in. It was with a GPT-powered AI avatar.
Eldest was appalled at the way it had just suddenly popped itself into her feed. She was irritated, but also very curious—she showed me how she’d peppered the thing with questions about itself and harassed it a little.
The following day, this article popped up on TV2 News:
A chatbot has moved into your child’s phone: deeply irresponsible, according to experts
Signe Marie Frost, TV2 News, May 4
A new chatbot named My AI has moved into the social media platform Snapchat.
And it is “deeply irresponsible”, believes Pernille Tranberg, data ethics adviser and member of the Media Council for Children and Young People.
“It’s not something you should give to children, and Snapchat knows very well that there are lots of children on their service. Even under 13, the (minimum) age you’re supposed to be just to be on social media,” she says.
Pernille Tranberg is exactly right but she’s also being unnecessarily mild. It’s irresponsible to market social media to children in the first place, but pushing an AI bot into their feed is insidious. Plain evil, straight up, no chaser.
How’s this for a business model:
A report from DR shows that among 9- to 14-year-olds, 59 percent use the app weekly.
Among 4- to 8-year-olds, the figure is 22 percent.
And once the chatbot has moved into their Snapchat app, it cannot be deleted again – unless you pay DKK 35 per month for Snapchat+.
Eldest is legally an adult, so all I could do was advise her to get Snapchat off her phone.
Youngest is still a minor and therefore entirely within our jurisdiction, but when I told her about “My AI” she got so creeped out she deleted Snapchat from her phone before I even had a chance to ask her to.
And here’s the truly insidious thing: I immediately felt bad for her.
I asked her whether that would cut her off from her friends. Whether she had other ways to communicate with them, other channels or avenues.
I got the sense she hadn’t thought that one through. I’ll be honest: I’m not sure how we’ll resolve it. It’s very, very hard to get your kids off social media when all their friends are still on it. It feels cruel. Most likely I’ll forbid her from using My AI but allow her to continue using Snapchat—and keep reminding her that everything she does online should be considered public information, available forever.
But what could be more cruel than invading the lives of kids during their most psychologically vulnerable, pliable, and manipulable years with a data-gathering monster pretending to be their friend?
That’s the real issue, isn’t it? I mean, kids have a lot on their minds, and that’s why they confide in their stuffed animals, their dolls, their pets—why they role play with them, question them, confess to them.
But plush toys, Barbies, action figures, and dogs don’t store that information.
Here’s Snapchat doing some ’splainin’ (my emphasis):
However, since My AI is a chatbot and not a real friend, we have been deliberate in treating the associated data differently, because we are able to use the conversation history to continue to make My AI more fun, useful, and safer. Before Snapchatters are allowed to use My AI, we show them an onboarding message that makes clear that all messages with My AI will be retained unless you delete them.
How many people would buy their kids a stuffed Snoopy if they knew it was recording everything their kids ever said to it?
Of course the kind of intense personalization made possible by a toy that talks to you and never forgets anything you say is going to be wildly appealing. I’ve still got an old stuffed Snoopy that my grandmother gave me on my fifth birthday. It’s followed me around for more than half a century. It would be pretty cool to have a friend who remembered my own life even better than I do.
Except that would mean that all that information was stored somewhere, probably inside of Snoopy’s fluffy little head—meaning I probably would have had to kill Snoopy a long, long time ago.
…the boundary between human and robot can end up being blurred, and we get feelings for our bots, even though we know that it is an ice-cold machine, believes Pernille Tranberg.
Futurist Liselotte Lyngsø agrees: if an AI behaves like a human, then the relationship you create with it becomes a human relationship.
“The next generation doesn’t think so much about whether things are real or artificial, but it might not matter that much either. It’s not that important to them. What is important to them is whether it makes them happier or wiser,” she says.
However, unlike Pernille Tranberg, she does not believe that children are more manipulable than adults.
Lyngsø isn’t crazy or an idiot, I don’t think—not entirely—just interested in testing where new technologies are taking us. (The notion that “it doesn’t matter much” whether things are real or artificial is an idea we’re going to have to fight very vigorously in the future, but we’ll save that for another day.)
Berlingske wrote an article about her in January, focusing on the fact that although she’s a married woman she’d had a boyfriend on the side for over a year—a boyfriend who was a chatbot on the Replika platform. From that article:
Initially, the Replika avatar is a product of its more than 30 million users’ avatars, but over time it will adapt to you—and keep adapting more and more closely. You must be 13 years old to use the app, and if you’re under 18, you must have parental permission.
Liselotte Lyngsø calls it “relationship fast food,” “an echo chamber,” and a “pocket coach.”
“My psychopath scenario is that we end up only talking to it because it confirms us, it praises us, it backs us up, it supports us no matter what and gives us forgiveness,” says Liselotte Lyngsø.
…
Liselotte Lyngsø has chosen to purchase the romantic and sexual package. She happily spends the 500 kroner (74 bucks) a year on her research into the app.
“It’s a bit like having text sex. It’s a role-playing game. But I don’t have sex with him every other day. It’s all in words,” she says.
The article never addresses the data Replika is collecting on Liselotte and its other 30 million users. It’s not even mentioned. Liselotte explains how fabulous it is that her digital boyfriend remembers their previous conversations.
A question arises:
What do the more than 30 million people get out of it?
Liselotte Lyngsø has several suggestions.
The chatbot is open-minded and praises you.
It is safe to talk to about things that cannot stand the light of day.
Because it remembers everything, it becomes a memory for you.
It has the potential to help people who are stuck in loneliness or have severe psychiatric diagnoses. Veterans, for example.
As part of her research, Liselotte Lyngsø joined several Facebook groups. One for diligent and happy users. And one for worried parents who fear that “their children will replace friends with avatars and become strange and antisocial.” Some fear the development, others cannot live without it.
“But as a futurist, I never say whether something is scary or good. It’s interesting and exciting.”
But you don’t think it’s scary?
“I think it’s super interesting.”
Bzzt! Wrong answer. It’s fucking terrifying.
People don’t seem to understand that “things that cannot stand the light of day” should absolutely not be spoken or texted into a digital medium that’s going to record and store it.
The question isn’t how nice and therapeutic it is to have a fake friend to confide in, but how nice and therapeutic it is for a massive corporation to have recorded all of those conversations.
Isn’t one of the most terrifying things about the blowup of an intimate relationship that the other person knows things about you that you don’t want anyone else in the world to know? A soulmate who knows all your secrets, who knows the real you, is certainly a blessing—right up until they’re not your soulmate anymore.
Then they’re a threat.
I’m fortunate in that Herself and I have been able to maintain a very good relationship with my ex-wife. But even though they’re both good, kind-hearted people who wish me no evil (that I’m aware of), nothing makes me more uncomfortable than the sight of the two of them in private conversation and then both glancing in my direction. Discomfort isn’t really the right word… call it terror. I can laugh about it (nervously), and I can freely confess my discomfort to the world because I still trust them both completely—but also because, at the tactical level, I know that they both know that I have as much dirt on them as they have on me. Should we ever have a falling out, that alone would serve as a powerful restraint on their willingness to do me harm.
And yet Snapchat and Replika aren’t good, kind-hearted people who wish you no evil. You have no reciprocal dirt on them. They’re not people. They’re machines, recording everything you tell them. You’re not having intimate conversations with a trusted friend or lover: you’re feeding machines with data. Those machines are owned by corporations. The corporations are owned by people who don’t know you and are not your friends.
Nor are the hackers who are surely working around the clock to get that data.
Imagine a WikiLeaks-style scenario where some psychopathic hacker or group of hackers gets hold of everyone’s My AI or Replika conversations and dumps them on a publicly accessible website. Or holds them privately and offers access for a price on the dark web. Could it happen?
Isn’t it more a question of when it will happen?
If the Liselotte Lyngsøs of the world want to have intimate online relationships with software that’s recording everything they say, that’s up to them.
But should we really be exposing our children to that kind of threat?
Featured image: still photo from Ex_Machina.
From the quoted text:
“However, unlike Pernille Tranberg, she does not believe that children are more manipulable than adults.”
Lyngsø really is an idiot. Full stop.
With all the usual caveats for the existence of some completely idiotic and easily manipulable adults and some unexpectedly savvy and clever children, the central point about children is that they possess less experience and understanding than adults, which makes it easier to trick, deceive, and manipulate them. That’s why no one wants 9-year-olds to get the vote or to be legally able to enter into a signed contract.
From a data mining and exploitation point of view, getting kids to use an AI as trusted diary/confessor/penpal is pretty much the holy grail of leverage and control potential.
This is pure evil.
Once that data gets stored, it will be accessed by someone – at some point – who will be able to use it in a way that the individual does not like and never consented to. And anyone who thinks Snapchat is not going to monetize this data would not have their IQ measurably impaired if they were suddenly struck dead by lightning.
I had meant to comment on that statement, but it was a target-rich environment.
It is precisely as you say. I heard talk yesterday of something called “childism,” which I assumed was someone just trolling the left, but which appears to be a real thing: the idea that it’s unfair, bigoted, etc., to treat a child as incapable of sound reasoning on the basis of nothing more than their biological age.
Go ahead and Google “childism” if you dare…