AI ‘Deadbots’ Could Digitally ‘Haunt’ Loved Ones From Beyond the Grave


Cambridge researchers warn of the psychological dangers of ‘deadbots’, AI chatbots that imitate deceased individuals, and urge the establishment of ethical standards and consent protocols to prevent misuse and ensure respectful interaction.

According to researchers at the University of Cambridge, artificial intelligence that allows users to hold voice and text conversations with lost loved ones risks causing psychological harm and even digitally “haunting” those left behind, unless design safety standards are put in place.

‘Deadbots’ or ‘griefbots’ are AI chatbots that simulate the language patterns and personality traits of the dead using the digital footprints they leave behind. Some companies already offer these services, providing an entirely new type of “post-mortem presence.”
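To make the mechanism concrete, the following is a minimal, hypothetical sketch (in Python) of how such a service might condition a general-purpose language model on a person’s digital footprint. The DigitalFootprint class, the prompt wording, and the generate() call are illustrative assumptions, not the implementation of any real service discussed in the paper.

# Hypothetical sketch of a "deadbot" pipeline: build a persona prompt from a
# person's digital footprint, then hand it to a generative language model.
from dataclasses import dataclass

@dataclass
class DigitalFootprint:
    """Text a person left behind: chat logs, emails, social media posts."""
    name: str
    samples: list[str]

def build_persona_prompt(footprint: DigitalFootprint) -> str:
    """Assemble a system prompt that asks a model to imitate the person's
    vocabulary, tone, and phrasing, using their own messages as examples."""
    examples = "\n".join(f"- {s}" for s in footprint.samples[:20])
    return (
        f"You are simulating the writing style of {footprint.name}.\n"
        f"Mimic the tone and phrasing of these examples:\n{examples}\n"
        "If asked, always disclose that you are an AI simulation."
    )

# Usage, assuming a hypothetical generate(system_prompt, user_message) wrapper
# around any chat-capable language model:
# persona = build_persona_prompt(DigitalFootprint("Grandma", chat_logs))
# reply = generate(persona, "Hi Grandma, how was your week?")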

AI ethicists at Cambridge’s Leverhulme Centre for the Future of Intelligence outline three design scenarios for platforms that could emerge as part of the developing “digital afterlife industry,” to show the potential consequences of careless design in an area of AI they describe as “high risk.”

Misuse of AI chatbots

The research, published in the journal Philosophy and Technology, highlights the potential for companies to use deadbots to surreptitiously advertise products to users in the manner of a departed loved one, or to distress children by insisting that a dead parent is still “with you.”

When the living sign up to be virtually recreated after their death, companies could use the resulting chatbots to spam surviving family and friends with unsolicited notifications, reminders, and updates about the services they provide, akin to being “digitally stalked by the dead.”

Even those who take initial comfort from a ‘deadbot’ may be drained by daily interactions that become an “overwhelming emotional weight,” the researchers argue, yet they may also be powerless to have an AI simulation suspended if their now-deceased loved one signed a lengthy contract with a digital afterlife service.


A visualization of a fictional company called MaNana, one of the design scenarios used in the article to illustrate potential ethical issues in the emerging digital afterlife industry. Credit: Dr. Tomasz Hollanek

“Rapid advancements in generative AI mean that nearly anyone with Internet access and some basic know-how can revive a deceased loved one,” said Dr. Katarzyna Nowaczyk-Basińska, co-author of the study and researcher at Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI). “This area of AI is an ethical minefield. It is important to prioritize the dignity of the deceased and ensure that this is not encroached on by financial motives, for example from digital afterlife services. At the same time, a person may leave an AI simulation as a farewell gift to loved ones who are not prepared to process their grief in this manner. The rights of both data donors and those who interact with AI afterlife services should be equally safeguarded.”

Existing services and what-if scenarios

There are already platforms offering to recreate the dead with AI for a small fee, such as ‘Project December’, which began by leveraging GPT models before developing its own systems, and apps such as ‘HereAfter’. Similar services have also begun to emerge in China. One of the potential scenarios in the new paper is “MaNana”: a conversational AI service that allows people to create a deadbot simulating their deceased grandmother, without the consent of the “data donor” (the dead grandparent).

In the hypothetical scenario, an adult grandchild who is initially impressed and comforted by the technology begins receiving advertisements once a “premium trial” ends, for example, the chatbot suggesting orders from food delivery services in the voice and style of the deceased. The relative feels the memory of their grandmother has been disrespected and wishes to have the deadbot switched off, but in a meaningful way, something the service providers have not considered.


A visualization of a fictional company called Paren’t. Credit: Dr. Tomasz Hollanek

“People can develop strong emotional bonds with such simulations, which will make them particularly vulnerable to manipulation,” said co-author Dr. Tomasz Hollanek, also of Cambridge’s LCFI. “Methods and even rituals for retiring deadbots in a dignified way should be considered. This may mean a form of digital funeral, for example, or other types of ceremony depending on the social context. We recommend design protocols that prevent deadbots from being used in disrespectful ways, such as for advertising or having an active presence on social media.”

While Hollanek and Nowaczyk-Basińska say that designers of recreation services should actively seek consent from data donors before they pass away, they argue that a ban on deadbots based on non-consenting donors would be unworkable.

They suggest that design processes should include a series of prompts for those seeking to “resurrect” their loved ones, such as “have you ever talked with X about how they would like to be remembered?”, so that the dignity of the departed is foregrounded in deadbot development.

Age restrictions and transparency

Another scenario presented in the paper, an imaginary company called “Paren’t,” highlights the example of a terminally ill woman who leaves behind a deadbot to help her eight-year-old son with the grieving process.

While the deadbot initially helps as a therapeutic aid, the AI begins to generate confusing responses as it adapts to the needs of the child, such as depicting an impending in-person encounter.


A visualization of a fictional company called Stay. Credit: Dr. Tomasz Hollanek

The researchers recommend age restrictions for deadbots, and also call for “meaningful transparency” to ensure users are consistently aware that they are interacting with an AI. These could be similar to current warnings for content that may cause seizures, for example.

The final scenario explored by the study – a fictional company called “Stay” – shows an older person secretly committing to a deadbot of themselves and paying for a twenty-year subscription, in the hope that it will comfort their adult children and allow their grandchildren to know them.

After death, the service kicks in. One adult child does not engage, and receives a barrage of emails in the voice of their dead parent. Another does, but ends up emotionally exhausted and wracked with guilt over the fate of the deadbot. Yet suspending the deadbot would violate the terms of the contract their parent signed with the service company.

“It is vital that digital afterlife services consider the rights and consent not only of those they recreate, but also of those who will have to interact with the simulations,” Hollanek said.

“These services run the risk of causing huge distress to people if they are subjected to unwanted digital hauntings from alarmingly accurate AI recreations of those they have lost. The potential psychological effect, particularly at an already difficult time, could be devastating.”

The researchers call for design teams to prioritize opt-out protocols that allow potential users to end their relationships with deadbots in ways that provide emotional closure.

Nowaczyk-Basińska added: “We have to start thinking now about how to mitigate the social and psychological risks of digital immortality, because the technology is already here.”

Reference: “Griefbots, Deadbots, Postmortem Avatars: on Responsible Applications of Generative AI in the Digital Afterlife Industry” by Tomasz Hollanek and Katarzyna Nowaczyk-Basińska, May 9, 2024, Philosophy and Technology.
DOI: 10.1007/s13347-024-00744-w