A MAN killed in a road rage incident has spoken in court from beyond the grave via an eerie AI video message.
Chris Pelkey was virtually resurrected to deliver a victim impact statement directly to his killer.
The move is a first in Arizona judicial history, the state where Mr Pelkey lived – and possibly a first for all of the US too.
In the video, Mr Pelkey addresses Gabriel Horcasitas, who shot him dead, aged 37, after a road dispute in 2021.
“It is a shame we encountered each other that day in those circumstances,” the AI version of Mr Pelkey said.
“In another life, we probably could’ve been friends.
“I believe in forgiveness and in God who forgives. I always have and I still do.”
Mr Pelkey’s sister Stacey Wales came up with the idea when gathering victim statements from friends and family.
“We received 49 letters that the judge was able to read before walking into sentencing that day,” she told FOX 10.
“But there was one missing piece. There was one voice that was not in those letters.”
Her husband and a friend then went through the “painstaking” work of finding a way to make the video a reality using AI.
His brother John said: “To see his face and to hear his voice say it.
“Just waves of healing washed over my soul. Because that was the man that I knew.”
Judge Todd Lang welcomed the AI, saying: “I loved that AI, thank you for that.
“As angry as you are, as justifiably angry as the family is, I heard the forgiveness.
“I feel that that was genuine.”
Mr Horcasitas was sentenced to 10-and-a-half years in prison for manslaughter.
It comes as a federal judicial panel gathered last week over proposals to regulate the introduction of AI-generated evidence at trial.
The U.S. Judicial Conference’s Advisory Committee on Evidence Rules in Washington, D.C. has voted to seek public comment on a draft rule.
What are the arguments against AI?

Artificial intelligence is a highly contested issue, and it seems everyone has a stance on it. Here are some common arguments against it:
Loss of jobs – Some industry experts argue that AI will create new niches in the job market, and that as some roles are eliminated, others will appear. However, many artists and writers insist the issue is an ethical one, as generative AI tools are being trained on their work and wouldn’t function otherwise.
Ethics – When AI is trained on a dataset, much of the content is taken from the Internet. This is almost always, if not exclusively, done without notifying the people whose work is being taken.
Privacy – Content from personal social media accounts may be fed to language models to train them. Concerns have cropped up as Meta unveils its AI assistants across platforms like Facebook and Instagram. There have been legal challenges to this: in 2016, legislation was created to protect personal data in the EU, and similar laws are in the works in the United States.
Misinformation – As AI tools pull information from the Internet, they may take things out of context or suffer hallucinations that produce nonsensical answers. Tools like Copilot on Bing and Google’s generative AI in search are always at risk of getting things wrong. Some critics argue this could have lethal effects – such as AI giving out the wrong health information.