- cross-posted to:
- news@lemmy.world
This shit, and later on in the article when it talks about an Arizona court using AI, makes me want to hate AI forever. Fuck this, man
A person has social anxiety, and that makes you hate AI? That sounds pretty ableist.
No. Try to think critically next time. AI has been mostly garbage which is why I dislike it. It should not be used in court. Did you even read the article? If so, re-read it
I mean, honestly, without the theatrical misdirection I'd find this one of the better examples of a reasonable use of AI within a courtroom. I.e., it sounds like he asked to represent himself. He presented a video whose arguments were, to my knowledge, all written by the person himself. Second, when the judge asked who it was, he said the avatar is AI, presenting his arguments.
So in short, the only thing he attempted to bypass are biases related to his appearance and speech.
IMO this concept could be the real future of trials if done right. Imagine if we used extreme facial-tracking AI that hid the defendant's actual appearance but let defendants use avatars that still map every facial expression and bit of body language they make during the trial, while concealing the defendant's actual race and appearance. We could literally be looking at the one solution to racial bias: the reality that, with the same evidence, race plays a huge part in conviction rates and harshness of sentences.
Guess they will start teaching vtubing in high school then.
I think the major problem here would be that all the avatars would be pretty white women if you wanted to really game the system.
Or black if they’re accused of a hate crime, or whatever.
That just seems… Weird.
Just have a couple of standardised avatars. It would be madness if everyone could choose whatever they wanted.
It’s a really interesting thought, and under ideal circumstances would work IMO. Obviously things are never ideal and there would be all sorts of roadblocks and gotchas as something like this was developed. Things we could think of now, and other things we probably couldn’t. Not to mention the whole problem of, “who develops it and how much trust can you give them?”
As I was reading the idea, it made me think of the suits from A Scanner Darkly that the undercover narcs wore. Basically heavily obfuscated the voice and displayed always-changing patchwork human features to anyone observing from the outside, including trying to hide body shape. Something like that could get similar results. Obviously a video filter would be much easier to develop than a sci-fi suit, but still.
Why even keep facial expressions? People who are good at acting can abuse it by mimicking what's expected of them, and for people with e.g. autism who have trouble with body language it can backfire badly. Let facts and evidence be the basis for a sentence.
True, though at that point an avatar itself is unnecessary. Maybe that should be the standard: just change procedure to never bring the defendant into the courtroom.
Admittedly, I do suppose the biggest problem with the hypothetical goal of hiding the defendant in the courtroom is that some of the evidence is obviously going to require knowing what the defendant looks like (eyewitness testimony, video surveillance clips, etc.).
I do agree with the general gist, though: if we could run courts without ever showing the appearance or even the names of the people involved, it would be the ideal system to eliminate biases.
Not that AI is the most effective representation or that it should replace public defenders, but this doesn’t seem far off from scolding a defendant for using Google to research his arguments.
Agreed, if AI can pass the bar AND the defendant’s right to a public attorney is unavailable due to resource and time constraints, then this is a whole lot better than the plea deals that some defenders are being coerced to sign without a public defender.
And let’s not kid ourselves. Most of the existing public defenders are probably using AI to support their case nowadays anyway.
Again, though, that's missing the point: at least in the article, I don't see anything to imply the arguments were AI-generated. It sounds like the person is claiming the AI was only used for the face and voice.
So on the whole, it just sounds like he wrote the script himself. The AI doesn't need to pass the bar in this example, because the AI is just a glorified costume. You don't have to pass the bar to represent yourself, and at least going by the information presented here, the AI did not create any of the arguments; it only read a script written by the person.
Or they could pay public defenders a fair wage and hire more. The reason they don’t is because they don’t want people to have a fair trial. You’re constitutionally ineffective the second you get hired as a PD. We have the resources but many on the far right want to dismantle the requirement for representation and overturn Gideon.
AI isn’t the solution to this—proper governance is.
> the only thing that’s attempted to be bypassed are biases related to his appearance and speech. IMO this concept could be the real future of trials if done right.
How do you know if it is done right or wrong?
It is fake, and it is a manipulative kind of fake.
You assume some honorable purpose, but that isn’t the only possible purpose.
Even “bypassing biases” would be a kind of manipulation, and you can never know what other manipulation is going on at the same time. It could exploit other biases. It could try other tricks that we are not evil enough to imagine, and it would be “better” at it than any real human.
The point is the idea that, in general, a system could be applied where, say, the same avatar is universally applied to everyone on trial. The fact is that “looking trustworthy” is an inherently unfair advantage with no real bearing on actual innocence or guilt, and we know these biases have, despite better evidence, resulted in innocent people getting convicted and guilty people walking.
Theoretically, a future system in which everyone must use an avatar to prevent these biases would almost certainly lead to more accurate trials. Of course, the one hurdle in my mind that would make it difficult is how to handle evidence that requires appearance to assess (i.e., most importantly, eyewitness descriptions and video footage). When it comes to DNA, fingerprints, forensics, and hell, the lawyers’ arguments themselves, there’s no question in my mind that perception with no factual use has serious consequences that harm any attempt to build an appropriately fair system.
> say universally the same avatar is applied to everyone while on trial.
The one and only “good” AI. Trustworthy for everybody?
I do not believe in that.
First you would need to decide on the one and only company to provide that AI. Then someone must prove that it is good and only good. Then it must be unhackable (and remain so while technology evolves).
All of this is hardly feasible.
Again, I think our problem is the concept of what we’re calling “AI”. I.e., I’m only talking about AI-generated art/avatars. Done in a consistent way, I don’t think it even quite qualifies as AI; it’s really just glorified puppetry. There’s no “trustworthiness” at stake, because it doesn’t deal in facts. Its job is literally just to take a consistent 3D model and make it move the way the defendant moves. It’s old tech that’s been used in movies etc. for years, and since it deals only in appearance, any “hacks” etc. would be plainly visible to any observers.
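To make the “glorified puppetry” point concrete, here’s a minimal Python sketch of the idea: tracked facial-landmark offsets from a neutral pose become blendshape weights on one shared avatar, so motion transfers but appearance never does. Every name, coordinate, and the scale factor here is invented for illustration; a real system would pull landmarks from a face tracker rather than hard-coded dicts.

```python
# Neutral-pose landmark positions (x, y) in normalized face coordinates.
# These are placeholder values, not any real tracker's output.
NEUTRAL = {
    "mouth_corner_l": (0.0, 0.0),
    "mouth_corner_r": (0.0, 0.0),
    "brow_l": (0.0, 0.0),
    "brow_r": (0.0, 0.0),
}

def blendshape_weights(tracked, neutral=NEUTRAL, scale=5.0):
    """Map tracked landmark offsets to 0..1 avatar blendshape weights.

    Only motion relative to the neutral pose is transferred; nothing
    about the speaker's actual appearance reaches the avatar rig.
    """
    weights = {}
    for name, (_, ny) in neutral.items():
        _, ty = tracked[name]
        # Vertical displacement only, scaled and clamped to [0, 1].
        weights[name] = max(0.0, min(1.0, abs(ty - ny) * scale))
    return weights

# A smile: both mouth corners raised 0.1 above the neutral pose.
smile = dict(NEUTRAL, mouth_corner_l=(0.0, 0.1), mouth_corner_r=(0.0, 0.1))
w = blendshape_weights(smile)
```

Because the avatar rig only ever sees these normalized weights, the same standardized model would animate identically no matter who is behind it, which is the whole point.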
Can’t wait to see what silliness ensues when “sovereign citizens” start using AI avatars to represent themselves too
The video will need to be at a 45° angle and misuse a lot of Latin.