AI-generated legal errors: A case in New Hampshire

News Room
Published 8 October 2025 (last updated 10:23 p.m.)

It was a garden-variety lawsuit in New Hampshire.

A Windham couple hired a contractor to renovate their home and gave him a hefty down payment for the work. After they scaled back their plans, they claim, he ran off with the money. So they filed a lawsuit.

But Superior Court Judge Lisa English’s July 15 ruling in the case was far from typical, perhaps the first of its kind in New Hampshire.

Numerous quotes in legal arguments filed by the couple’s attorney were “wrong and misleading,” English wrote. The cases referenced were from another state, or “from a different year, with a different citation, and involved a completely different area of law.” Some cases, English wrote, do not exist.

During a hearing in August, attorney Nicole Bluefort tried to explain.

Another lawyer at her firm who worked on the case had used artificial intelligence instead of traditional research tools to draft the legal briefs and didn’t tell anyone, she said. After the contractor’s attorney raised questions, an “amended” filing from Bluefort, drafted with the help of the same employee, contained additional errors. The AI platform had mixed up the case law, and Bluefort had not realized it.

“I should have done my due diligence,” she said during the hearing. “It’s up to me to verify this because my name is on it.”

Bluefort said she removed the other attorney from the case, began developing an AI policy for her law firm, which has three offices in New Hampshire and Massachusetts, and paid opposing counsel, at his request, just over $5,000 for the extra time he spent checking and rechecking her work.

Rockingham County Superior Court Judge Mark Howard thanked Bluefort that day for her honesty and her proactive steps to remedy the situation. As a result, he wrote in an order, “the court considers the matter resolved and no further action is necessary.”

Bluefort declined to be interviewed for this story.

Similar situations are happening in courtrooms across the country.

In California, a state judge in September fined a lawyer $10,000 for submitting false information as a result of AI use, as reported by CalMatters. In May, a federal judge in the Golden State required law firms to pay more than $30,000 in fees to opposing attorneys and the court for their time.

In states including Arizona, Utah, New Jersey and Colorado, lawyers who made mistakes because they used AI have been removed from cases, ordered to pay attorney fees, required to reimburse their clients, fined outright and, in some cases, formally disciplined or suspended.

Online databases have tracked hundreds of cases in which lawyers or court officials made mistakes using AI.

However, the Windham case appears to be the first reported case in New Hampshire.

‘A basic duty’

As in so many other areas, artificial intelligence poses great risks and opportunities for lawyers, and the potential for mistakes is high. Courts, law firms and law schools across the country are grappling with how to encourage responsible use.

Ultimately, the ethical standards for truthfulness set by state bar associations and courts remain unchanged whether AI is used or not.

“That obligation has always existed,” said Bob Lucic, chairman of the New Hampshire Bar Association’s Special Committee on Artificial Intelligence. “You’re not supposed to submit cases to judges who don’t exist. That’s a bad thing.”

Many lawyers overestimate how smart and reliable AI tools are, he said.

An online database compiled by Jenny Wondracek, director of the Law Library at Capital University in Ohio, lists nearly 500 cases where AI was used and errors were found. The earliest cases date back to 2023, and – whether it’s because people are using AI more or getting better at checking for hallucinations or, most likely, both – the number of cases is increasing.

Courts are tackling AI misuse in different ways, Wondracek said, as they search for the most effective deterrent. Fines and suspensions are one tactic, but some courts are going in a different direction by requiring lawyers to undergo new training.

“We have one judge who waives monetary awards if the attorneys sit down with law students and explain what they did wrong,” Wondracek said. “So a little more creative.”

There are also more aggressive punishments.

The judge in the California case wrote an opinion condemning the lawyer for “breaching a basic duty… owed to his client and the court.” In addition to the hefty fine, the judge ordered the attorney to personally deliver a copy of her opinion directly to his client and to the California Bar.

On the other hand, Wondracek says, those who take steps to prevent future AI abuse are often rewarded by judges with lighter sanctions.

In a Wyoming case involving the personal injury firm Morgan & Morgan, a judge fined both the attorneys who used AI and the other attorneys who signed the filings containing the false cases. But their punishments were lighter because, like Bluefort, they were open about the mistake. The firm itself, Wondracek said, was not sanctioned because it quickly implemented new guidelines and training requirements.

When Lucic, also a partner at Sheehan Phinney Bass &amp; Green, graduated from law school, the most groundbreaking technology of the day was the Post-it note, he said.

Now, after decades of litigating technology cases, that seems strange to him.

Lawyers have always had an obligation to stay abreast of technological advances, he explained, and to know how to use them and how not to use them.

For now, AI’s chief strength for lawyers lies in saving time.

“I am not one of those people who really think that AI will completely take over the practice of law,” Lucic said. “But if you look at your desk and discover the things that really irritate you most about your work… AI is really good at doing those kinds of non-legal things for lawyers in general.”

At the same time, Lucic continued, lawyers need to know what AI is capable of in the courtroom so they can watch for it.

“We can’t just pretend this doesn’t exist, both in our own practice, but also because our clients use this,” Lucic said. “Things that we used to take for granted, like authenticating photos and ensuring that evidence is not, in fact, generated by artificial intelligence. These kinds of things will be a major concern for all of us in the future.”

While lawyers have always kept a close eye on what the opposing side says and files (catching errors in the other side’s argument is powerful), the availability of AI tools demands the same scrutiny within their own firms and teams.

In many cases where a court finds AI-hallucinated case law, it was a lawyer or paralegal who made the first mistake, as Bluefort told the court happened in her case.

Law firms are adding policies that prohibit the use of certain AI platforms for anything client-related, Lucic said, not only because of the risk of errors but also because of client-confidentiality concerns: ChatGPT may track, use and store any information its users enter.

Some of Lucic’s biggest concerns involve pro se situations, in which people represent themselves.

“Now all of a sudden you can go to ChatGPT and ask them to write you a letter, as an unrepresented party, and it will be submitted to a court,” he said.

Even if average users think they are checking for errors, they probably don’t have the legal expertise to spot a made-up case, a misattributed quote, or a completely unfounded claim, whether in their own work or someone else’s.

An eye for mistakes

The Monitor submitted the AI-drafted briefs from the Windham case to ChatGPT and asked it to confirm the accuracy and relevance of the cases. It flagged some of the errors opposing counsel had noted in that case.

“The judges are now going to have to examine that very, very carefully, and they don’t have the resources or the technology to do that,” Lucic said. “It’s kind of an arms race in the court…their biggest concern right now is how are they even going to police this if there’s no lawyer on one side or the other?”

Ultimately, for lawyers and non-lawyers alike, Lucic sees the way forward as “kicking the tires” on new tools: learning their potential for efficiency while not assuming they are smarter than they are.

When Bluefort explained the errors in the filing to the court, she said she was grateful for the push to start developing an AI policy for her firm.

“Ironically, I am grateful to have this hearing,” she said.

When Bluefort asked her employee why she had used AI, the employee had no clear explanation.

“She thought it was reliable,” Bluefort said.
