New Report Examines AI Companions in Health and Mental Wellbeing

Image: Cover of the AI Companions report with logos of Cambridge, LCFI and Jesus College

A report published by the Leverhulme Centre for the Future of Intelligence (LCFI) explores the growing role of AI companions in healthcare and mental wellbeing. AI Companions for Health and Mental Wellbeing: Opportunities, Risks, and Policy Implications, authored by Dr Tomasz Hollanek and Dr Aisha Sobey, sheds light on the opportunities these systems offer while highlighting significant risks and regulatory gaps.

With AI chatbots like Replika, mental health bots such as Woebot, and AI-powered companionship devices like ElliQ increasingly being positioned as solutions for loneliness, grief, and patient support, the report emphasises the urgent need for ethical guidelines and regulatory oversight. The study draws on discussions from the Social AI Policy Futures Workshop, held in August 2024 at Jesus College, Cambridge, where over fifty experts from technology, healthcare, and policy fields assessed the impact of AI companions.


Key Findings:

  • AI in Healthcare: Social AI could improve patient communication, provide 24/7 monitoring, and reduce the strain on overburdened healthcare systems. However, risks include misinformation, biased decision-making, and the displacement of human healthcare roles.

  • AI and Mental Wellbeing: AI companions might offer immediate support for loneliness and grief, yet concerns arise over user dependency, the replacement of human connections, and the ethical implications of digital replicas of deceased loved ones.

  • Policy Recommendations: The report calls for greater transparency in AI design and stronger consent frameworks, as well as better accountability mechanisms, including systems for lodging and processing user complaints.


Co-author Dr Tomasz Hollanek emphasised the double-edged nature of these technologies:
"AI companions are often marketed as solutions to deep-seated societal issues like loneliness and limited healthcare access. While they hold promise, we must critically assess their limitations—otherwise, we risk exacerbating the very problems they claim to solve."


Co-author Dr Aisha Sobey stressed the need for careful regulation:
"Without clear guidelines, AI companions could blur ethical boundaries in patient care, data privacy, and emotional well-being. Our findings highlight the urgent need for safeguards to ensure these systems support, rather than manipulate, users—particularly vulnerable groups."

Read the full report here.
