The latest bizarre chapter in the awkward arrival of artificial intelligence in the legal world unfolded March 26 under the stained-glass dome of New York State Supreme Court Appellate Division’s First Judicial Department, where a panel of judges was set to hear from Jerome Dewald, a plaintiff in an employment dispute.

On the video screen appeared a smiling, youthful-looking man with a sculpted hairdo, button-down shirt and sweater.

“May it please the court,” the man began. “I come here today a humble pro se before a panel of five distinguished justices.”

“OK, hold on,” Justice Manzanet-Daniels, a member of the panel, interrupted. “Is that counsel for the case?”

“I generated that. That’s not a real person,” Dewald answered.

It was, in fact, an avatar generated by artificial intelligence. The judge was not pleased.

  • Lupo@lemmy.world · 1 day ago

    The article reads like the guy gave the AI avatar a script to read rather than having the AI avatar generate its own argument. I doubt the plaintiff would have referred to it as prerecorded, or readily admitted it was an AI avatar, if he intended the thing to argue on his behalf rather than simply speak on his behalf.

    I really don't see a problem with this.

    • gAlienLifeform@lemmy.world · 12 hours ago (edited)

      Yeah, if the situation is as the article implies, then there's absolutely no issue. But if I were running a court, I would want to put a pause on things and review the source code, or get sworn testimony from someone who built it, first, just to be on the absolute safe side. If something did go wrong, it would be kind of hard to un-hear it and not let it influence the ultimate outcome.