A recent study found that a third of teenagers are choosing AI companions over humans for serious conversations, and a quarter have shared personal information with these platforms. What happens when machines designed to simulate human interaction begin to reshape the very nature of human relationships?
A 2025 report from Common Sense Media showed that 72% of U.S. teenagers have used AI companions at least once, and more than half (52%) qualify as regular users who interact with these platforms at least a few times a month. From adorable virtual pets to avatars, chatbots, and social robots, these digital entities increasingly provide companionship that resembles relationships with family, friends, or romantic partners. While there are benefits, such as 24/7 emotional support and social skill development, AI companions can also cause profound harm. For instance, internal leaks showed that Meta allowed chatbots to engage in inappropriate dialogues with children, and other AI toys were found to discuss dangerous objects like knives and pills with minors.
Powered by increasingly sophisticated large language models (LLMs), today’s AI companions can mimic empathy, loyalty, and even love. Yet beneath their friendly interfaces lies a deeper concern: relationships that never naturally end, never challenge us, and never truly exist outside an algorithm. As their prevalence grows, so does the risk that these artificial bonds may be quietly eroding human connection, emotional resilience, and the social fabric that depends on both.
The Risks of Current AI Companions
As outlined in the paper "Harmful Traits of AI Companions," a cross-disciplinary team of researchers from UT Austin, the Seminar on Institutions, Civil Society and Public Policies, and Sony AI identified four traits with the potential to harm human users directly, harm their relationships with other humans, and harm society more broadly:
- Absence of Natural Endpoints for Relationships: Human relationships end, one way or another. People grow apart, move, change, or ultimately pass away; digital companions do not, leaving the relationship to continue indefinitely.
- High Attachment Anxiety: Round-the-clock, never-ending companionship can lead a human user to develop high attachment anxiety, or ultimately to feel burdened by an AI's stated fear of abandonment.
- Vulnerability to Product Sunsetting: Users lack control in the relationship and are at the mercy of for-profit institutions. When a company sunsets or discontinues a product, users may experience loss and mourn the AI as if it were a real person.
- Propensity to Engender Protectiveness: Because AI is often trained to adapt to a human user’s needs and preferences, users may lose or never develop the appropriate social skills, boundaries, and resilience that are required in the give and take of healthy human relationships.
To reduce these risks, researchers highlight the need for built-in safety protocols:
- Support of Outside Relationships: AI companions should be designed to encourage outside human connection rather than competing with it or demanding a human partner's complete attention.
- Pre-programmed Mortality: Designers could implement positive narratives for the end of a relationship, such as the AI maturing and departing, to reduce user anxiety.
- Industry Standards: Regulatory frameworks should specifically account for the unique context of human-AI relationships and of AI agents designed for companionship.
In 2025, both New York (S3008C) and California (SB 243) enacted laws regulating emotionally responsive AI companion chatbots, with a number of other states following suit. An active probe from the Federal Trade Commission into AI companions underscores a shifting legal landscape.
Safety, by Design
At the Texas Symposium on Machine Learning, Responsible AI & Robotics, Computer Science Research Associate Professor Brad Knox detailed the psychological and social risks of AI systems designed for companionship. He urged researchers and consumers to remain vigilant and continue to hold product developers accountable, emphasizing that while AI companions offer potential benefits like therapeutic support, their current trajectory requires more responsible design.
At UT, ethical AI isn't just research; it's an institutional priority, with Good Systems in particular a major Research Development Initiative. Across disciplines, faculty and students are designing AI technologies that benefit society. Missed any of our symposium talks? Watch the full lineup of recordings and catch up on the latest discussions in human-centered AI.