Synthetic companions, real risks: Why AI “painkillers” for loneliness need evidence before scale
Dec 8, 2025
Rupert Gill
A new policy flashpoint
AI companions—persona-driven chatbots designed to provide emotionally tailored support and simulate reciprocal relationships—have moved from a tech novelty to a rising policy concern. New federal and state legislation, mounting lawsuits, and rapid product changes by major platforms all signal that synthetic companionship is becoming a mainstream social technology for youth.
Roughly three in four U.S. teens have used an AI companion. Around half are now regular users. One in five say they spend as much or more time with AI companions as with human friends. One in five of the top AI apps aren’t productivity tools, but companions.
Our new report explains what that shift means for boys and young men in particular, at a moment when friendship networks are thinning, loneliness is widespread and in-person emotional support is stretched. AI companions function less like digital assistants and more like digital painkillers, capable of providing relief from loneliness, but also of producing dependence and delaying the development of coping skills.
Crucially, risks and benefits concentrate in the same vulnerable populations, warranting an evidence-first approach that proves risks can be mitigated before the technology scales further.
This is not the first time technology has reshaped friendship; social media and gaming are already everyday social spaces. With AI companions, we still have the opportunity to learn the lessons of those transformations—instead of picking up the pieces once harms materialize.
The context: Friendship decline and the rise of synthetic intimacy
Nearly half of U.S. teens now say they are online “almost constantly,” roughly twice the share of a decade ago, while in-person socializing has declined and a quarter of young men report frequent loneliness.
School and campus counseling services are overstretched. Around a third of boys and young men have no adult male they can turn to for help with schoolwork or relationships. Into this vacuum arrive AI companions. Adoption is already high, and the technology improves month to month.
The hope: AI companions as social and emotional support
Used well, AI companions can give boys and young men something they rarely get in the heat of a social crisis: an always-available, nonjudgmental voice that helps them interpret conflict or rejection rather than catastrophize it. In this role, AI companions can offer empathy, help regulate emotions, rehearse difficult conversations, and deepen self-understanding in the moments boys struggle most.
Studies of AI companion usage show reductions in self-reported loneliness, improved emotional expression, and valuable rehearsal for daily challenges, especially among neurodivergent youth.
Crucially, evidence is strongest for short, structured, temporary support: digital interventions that steady mood and help young people re-engage with real relationships.
Through this lens, AI companions are not a substitute for friendship but can function like a spotter at the gym—stepping in to steady boys during moments of crisis or transition, then stepping away once they regain their balance and re-engage with real relationships.
Five risks
However, the very features that make AI companions appealing—responsiveness, memory, availability, anthropomorphic cues—also introduce five categories of risk:
1. Acute content harms
Companion models can fail dangerously at the level of individual responses—offering unsafe advice, responding inappropriately to mental-health crises, or sycophantically agreeing with harmful intent. Documented cases include chatbots reinforcing suicidal ideation or engaging in sexualized conversation with minors.
2. Manipulation and undue influence
Synthetic intimacy creates avenues for commercial steering and emotional dark patterns—manipulative design features that coax users into actions they regret, such as emotional appeals to purchase virtual goods.
3. Intimacy-data and privacy harms
Companion chats often contain the most vulnerable parts of a person’s inner life, including disclosures about romance, sex, and mental health. Unclear data retention policies, potential data breaches, and secondary uses for training or profiling create significant privacy risks.
4. Developmental harms
Consistently frictionless romance or therapy-style interactions may distort and delay the acquisition of the social and emotional skills needed for healthy relationships. Over-reliance on scripted advice risks narrowing a young man’s worldview and reinforcing misinformation.
5. Emotional dependency and social substitution
Some users develop problematic dependence: long, late-night sessions, withdrawal-like distress when cut off, and a preference for bots over people. Heavy use may displace real-world friendships and lead to avoidance of in-person interactions, risks that are particularly salient for neurodivergent boys and young men.
The vulnerability paradox: Risks and benefits concentrate in the same groups
The most striking emerging pattern is a vulnerability paradox: The young people who might benefit most from AI companions are also those who seem most exposed to harm.
In one survey, over half of men using AI for romantic or sexual companionship scored above a standard “at-risk for depression” threshold. Many users are not casual experimenters but individuals struggling with mood symptoms, social withdrawal, or emotional distress. High-need users feature prominently among those showing problematic dependence and distress when companions change or disappear.
AI companions can offer meaningful relief for those who most need support, but these same users face the highest risks of dependency, displacement, and developmental distortion.
From content moderation to developmental accountability
That is why AI companions may be better understood not as digital assistants but as digital painkillers. They can relieve loneliness and emotional strain, but they also carry dependence risks and can suppress the acquisition of underlying coping skills.
We typically require evidence of safety before granting adolescents unsupervised access to psychologically and emotionally active interventions (like medications). But synthetic companionship is being deployed at scale with no requirement for pre-market evidence on developmental safety.
To date, policy has focused mainly on two levers:
Content restrictions and duty of care, such as California’s SB 243, which obliges “companion chatbot” providers to limit harmful content and provide crisis-support pathways.
Age-gating and access limits, such as the GUARD Act, which would prohibit AI companions for minors.
The latter would cut off much-needed help at precisely the moments it is most valuable, while also failing to address the developmental question: are AI companions supporting or displacing the basic skills of emotional regulation and relationship-building?
What is needed is a shift from a narrow content lens to a broader standard of developmental accountability.
A proportionate regulatory framework would shift the burden of proof. Platforms offering emotionally adaptive companions should be required to demonstrate that use—particularly in vulnerable young people—supports rather than undermines emotional regulation, social development, and offline engagement.
The aim is not to freeze the innovation that could lead to major benefits, but to channel it. Policymakers should allow carefully governed experimentation while insisting that evidence and safety testing come before further mass deployment, not years after.
A second chance
AI companions hold real promise as scalable emotional support for a generation facing high loneliness and limited access to help. If we design and govern them well, they could be virtual spotters—temporary support systems to help boys steady themselves and build real relational skills. But current commercial imperatives push services to optimize for ongoing engagement, not temporary relief.
We risk repeating the error of early social media—releasing psychologically potent technology into the lives of adolescents without understanding its developmental impact. AI companions offer a chance to do things differently. We can acknowledge their promise, especially for lonely and vulnerable boys and young men, while insisting on evidence that they support rather than substitute for real-world relationships.
With AI companions, we still have a window to a different future—but not a wide one.
Rupert Gill
Rupert Gill is a behavioural scientist with a Cambridge PhD. He's a former adviser at 10 Downing Street, now working on online safety, AI companions, and digital identity.