The Disturbing Implications of Jim Acosta’s AI Interview
AI resurrections are becoming more common — but they aren’t harmless.
Veteran journalist Jim Acosta dove headfirst into the uncanny valley on Monday when he interviewed an AI depiction of a high school student who was murdered in the Parkland school shooting seven years ago.
The interview was made possible by the parents of slain high school student Joaquin Oliver, who created an AI persona of their late son. The persona then sat for the interview with Acosta on what would have been Oliver’s 25th birthday.
The interview, which aired on Acosta’s Substack, was awkward and uncomfortable to watch: Acosta essentially spent the entire time talking to a crudely animated photograph of a dead high school student whose lips jerked as it spoke.
According to Oliver’s father, Manuel, the AI model was trained on general information along with things Oliver wrote and posted online. Manuel described the persona as a “very legit Joaquin.”
Acosta started the interview by asking Oliver what happened to him, to which Oliver replied that he had been taken from this earth too early. Acosta then asked about gun control and mental health issues, to which the AI Oliver replied, “I believe in a mix of stronger gun control laws, mental health support, and community engagement.”
The rest of the interview was more casual, and at the end, Acosta asked Oliver’s parents why they had created the AI persona and what their plans were now that they had brought it into existence. His father assured Acosta that he wasn’t trying to bring his son back to life, but just a few moments later, he mentioned that his wife found it very comforting to hear her son, the AI version, say things like “I love you, mommy.”
Manuel also said that he and his wife wanted Oliver’s voice to be heard through the AI persona and that they plan to establish social media accounts where AI Oliver will start posting and gaining followers so he can share his words about issues like gun control with the public.
“Now, Joaquin is going to start having followers. He’s going to start uploading videos. This is just the beginning,” Manuel said.
Yikes?
I don’t know exactly what to say about this because I’m not in a position to tell a grieving parent how to deal with the death of a child who was murdered in a school classroom. However, there’s good reason to believe that trying to recreate a deceased loved one through some sort of AI depiction could ultimately end very badly. Huge ethical, moral, spiritual, and even legal questions remain up in the air and are far from being resolved, yet tech companies are plowing forward with these creations, with potentially world-changing impacts that many of us want nothing to do with.
Consider, for example, the primary rationale for creating these “griefbots”: to help people mourn and grieve the loss of a loved one. But one of the few studies of so-called griefbots, or deathbots, as a way to process the death of a loved one found that these AI personas can actually prolong the grief process and create unhealthy dependencies. Other scholars position griefbots as simply another form of “continuing bonds” with deceased loved ones, like listening to old voicemail messages or watching old videos. But those aren’t interactive, and that makes all the difference. Still other researchers warn of “digital hauntings,” in which surviving family members are essentially stalked online by the AI depiction of their deceased relative and are left with a choice: keep facing the AI version of their dead relative, or arrange an online funeral and kill off the persona of the person they love. I don’t know about you, but I don’t want to be someone who has to make that choice.
Among the most uncomfortable issues here is that as much as Oliver’s parents knew him and his beliefs and values, they only got to know him as an 18-year-old high school student. They didn’t know him as a 25-year-old, they don’t know how his viewpoints or worldview might have changed during those years, and they don’t know whether he would even want his voice used this way, because they never got the chance to ask him. There are also data privacy issues: Oliver never had the opportunity to consent to his writings and posts being used to train this AI model, and there are almost certainly biases built into it, because his parents likely chose writings they liked and wanted to hear reflected in his voice while passing over writings that may have been less favorable or embarrassing. In that case, we really aren’t getting an AI Oliver; we’re getting an AI version of an idealized Oliver.
The legal issues surrounding ownership of the legacies and voices of deceased persons are still making their way through the courts and are in no way resolved, yet tech companies have decided they don’t need to wait.
There are also serious concerns about undue influence from AI resurrections that advocate for certain policies or positions. We have spent the past nine years trying to combat fake accounts and false voices online, but what happens when the false persona represents a real person who is no longer with us? Do the dead get a voice in our current society even though they don’t have to live with the consequences of their advocacy? When griefbots do engage in advocacy, it appears to be quite effective. In two online experiments, researchers found that resurrected-victim videos boosted policy support 25% more than text-only testimonials and inflated perceived credibility even when viewers were told the persona was synthetic. Other research in this area finds that the impact of deceased AI personas is akin to that of any other deepfake: it erodes trust in public discourse, breaks down evidentiary standards, and increases affective polarization.
And for what benefit? There’s very little evidence that griefbots actually help with the process of grieving, and substantial evidence that they may make it harder.
Yet tech companies are pushing forward with this deeply disturbing technology and basically exploiting grieving individuals who would do anything to speak to their loved one again.
In fact, this isn’t even the first time AI has been used to resurrect someone killed by gun violence. Last year, parents of several other Parkland victims launched a robocalling campaign called The Shotline, which used AI recreations of the voices of six students and staff members killed in the mass shooting to place calls to members of Congress advocating for gun control.
I’m not sure exactly how I feel about that, but I don’t feel very good. Deploying the voices of dead children in such an unnatural and jarring manner hardly serves the careful deliberation we are called to practice in a democracy.
At the very least, if this disturbing technology is inevitable, why can’t we put the brakes on it long enough to get legislation and regulations in place to make sure it isn’t misused in ways that could be irreparably harmful?
Oh that’s right. Tech companies have a profit to make.
And tech companies are big campaign contributors.
This is wrong on so many levels that I don’t know where to start. AI may have the potential to do some great things, but this is not one of them. I can only imagine the psychological damage it will cause when used this way. I really don’t know what to expect from the future.