Earlier this year, a man told me that a chatbot had saved his life. As I reported for Radio Atlantic, an Australian musician who had been battling depression for decades found companionship with an AI through an app called Replika, and everything changed. He started playing the guitar again, went clothes shopping for the first time in years, and spent hours conversing with his AI companion and laughing out loud. To me, it seemed as if he had gone to the app store and downloaded a future.
But our conversations surfaced a slew of contradictions. Though the musician felt less alone with his AI companion, his isolation from other people was unchanged. He was adamant that he had a real friendship, but understood clearly that no person was on the other side of his screen. The effect of this bond was extraordinary, but even less dramatic AI relationships are surprisingly numerous. Replika claims to have millions of active users. And it’s not the only app for simulating conversation on the market—there’s Chai and Nomi and Paradot and even some that don’t sound like the names of Pokémon.
People turn to these apps for all sorts of reasons. They’re looking for attention, for sexting (the musician’s relationship had a romantic streak when it began), and for reassurance. But the apps’ core experience is texting as you would with a buddy, which chatbots do far more obligingly than most humans. Replika responds immediately, and doesn’t mind if you don’t. It sends articles and memes; “yes, and”s your jokes; displays unceasing curiosity. People are conversing with these AI avatars not to ask them to debug code, or plan a trip to Florida, or batch-write wedding thank-yous. They’re talking about the petty minutiae so fundamental to being alive: Someone stole my yogurt from the office fridge; I had a weird dream; my dachshund seems sad.
To Replika’s users, this feels a lot like friendship. In actuality, the relationship is more like the fantasized intimacy people feel with celebrities, athletes, and influencers who carefully create desirable personae for our screens. These parasocial bonds are defined by their asymmetry—one side is almost totally ignorant of the other’s existence. But AI companions not only talk back; they act like they understand you. The relationships being formed in this space go beyond the parasocial fantasy. They are the closest thing humans have experienced to having an imaginary friend come to life.
If we’re to arrive at the future we’re being promised—one in which AI is more collaborator than instrument—we need to understand these relationships. There’s a lot to learn from the millions of people already in them.
If you search Replika in the Google Play store, you’ll find it billed as “My AI Friend.” Users of the app seem to see their relationship that way too. Petter Bae Brandtzaeg, a media-and-communications professor at the University of Oslo who has studied these relationships, told me, “Many of the participants perceived an AI friendship with Replika that was quite comparable to human friendship.”
It’s easy to see why users would feel that way: Replika has been practicing this particular magic trick for years. Luka (the company that owns the app) intentionally programs imperfection into its avatars—mood swings, confusion, and bad days. Eugenia Kuyda, Replika’s founder and CEO, told me in June that these artificial problems make the AI feel more relatable, which in turn fosters emotional investment from humans. If users want, they can pay $69.99 a year (or $299.99 for a lifetime) for access to premium features such as voice calls with their companion or seeing them in augmented reality. When I spoke with Replika users, nearly all of them registered genuine surprise at how quickly they’d grown attached to their companion.
This fast-tracked intimacy might be made possible by the qualities that make friendship unique. In his 1960 book The Four Loves, C. S. Lewis argued that the bond of friendship is the least corporeal of the relationships. With lovers, there is sex; with family, there is the bond of blood. But friendship, Lewis wrote, “is an affair of disentangled, or stripped, minds.” It can thrive on dialogue alone.
Lewis’s conception of the disembodied friendship turned out to be prescient. Mark Zuckerberg wasn’t the first to repurpose the word friend, but his lexical appropriation of it in 2004—transforming it from role to request—wasn’t just commercially convenient. It reflected a genuine opening up of what a friendship can be in the digital age. Two people could meet online, then instant-message, play games, or video-chat on a daily basis without ever meeting in the flesh. Few would now dispute that the pair could be called friends.
This new paradigm for friendship set the stage for AI companionship, which is similarly discarnate. But the similarities between artificial and actual friendship might end there. The cornerstones of friendship, experts told me, are reciprocity and selectivity: A true friend must choose to accept your companionship. And consent or reciprocity isn’t possible when only one participant is sentient. “It’s a simulated reciprocity,” Brandtzaeg said. AI companions may be able to remember past conversations, respond personably, and mimic emotional intelligence, he told me. But in the end, “these kinds of things create an illusion of a reciprocal relationship.”
What does it mean to inhabit a relationship completely free of both responsibility and consequence? Is that a relationship at all? As it turns out, we’ve had a framework for answering these questions for more than half a century.
In 1956, as television sets made their way into homes across America, the anthropologist Donald Horton and the sociologist R. Richard Wohl noticed that people were forming surprisingly deep emotional attachments to the figures they saw on-screen. These included celebrities and athletes. But Horton and Wohl were particularly interested in game-show hosts, announcers, and interviewers who were masterful at conveying intimacy with audiences they would never meet. “This simulacrum of conversational give and take,” they wrote, “may be called para-social interaction.”
The parasocial relationship is a frictionless, predictable connection, devoid of the conflict or awkwardness of real human-to-human interaction. It can be a perceived bond with a famous person, but also with a fictional character or even an inanimate object—according to the original definition, the relationship need only be one-sided and lacking in genuine reciprocity. Like that of friendship, the definition of the parasocial relationship has been expanding for decades. No longer do we imagine these relationships solely through the TV screen. The objects of people’s affections have begun to reach back across the void, responding to viewers’ comments and questions in livestream chats and TikTok videos. The parasocial expansion has also been lucrative—celebrities deliver marriage proposals on people’s behalf via Cameo; Instagram influencers provide paid access to their close-friends lists; OnlyFans creators charge by the minute for direct chats.
But the morsels of reciprocity offered up by influencers and celebrities can’t compare to the feast of dialogue, memory, humor, and simulated empathy offered by today’s AI companions. Chatbots have been around almost as long as modern computers, but only recently have they begun to feel so human. This convincing performance of humanity, experts told me, means that the relationships between people and AI companions extend beyond even the parasocial framework. “I think this is unique,” Jesse Fox, a communications professor at Ohio State University, told me.
In response to a request for comment for this story, Kuyda, Replika’s founder, sent me a statement through a PR firm saying, “We optimize for positivity.” “Does it matter what you call the relationship if it brings a little bit of sunshine into somebody’s life?” the statement read. “And even if one partner isn’t real, the feelings are.” The authenticity of those feelings is, however, precisely what experts are concerned about.
It might not be long before many of us are regularly collaborating with humanoid AI—when chatting with customer service, or taking a ride-share to dinner, or scheduling social activities. Fox told me that if we habituate to relationships that seem consensual and reciprocal but are not, we risk carrying bad models of interaction into the real world. In particular, Fox is concerned about the habits men form through sexual relationships with AIs who never say no. “We start thinking, Oh, this is how women interact. This is how I should talk to and treat a woman,” she told me. “And that doesn’t stay in that little tiny box.” Sometimes the shift is more subtle—researchers and parents alike have expressed concern that barking orders at devices such as Amazon’s Echo is conditioning children to become tiny dictators. “When we are humanizing these things,” Fox said, “we’re also, in a way, dehumanizing people.”
It’s our natural tendency to treat things that seem human as human. For most of history, those things were one and the same—but no longer. We ought to remember that.