In the world of justice, the written word, in the form of statutes and supreme court rulings, has long been considered a solid foundation. But with the arrival of generative artificial intelligence in law firms, that foundation is beginning to shake. A recently published decision by the Berlin Court of Appeal (Kammergericht) highlights the risks when lawyers delegate research to language models and do not check the results.
The judges were forced to issue a clear warning to the legal profession: in a family law dispute, non-existent judgments were cited to support the chances of success of an appeal.
The case began at the Berlin-Kreuzberg district court (ref.: 130 F 12281/25). A mother had applied for an interim order to give her sole parental custody of her daughter. She justified this with an alleged endangerment of the child’s welfare by the father. At the same time, she applied for legal aid.
But the district court rejected the applications: the mother had not credibly demonstrated any particular urgency. It was not apparent why a decision in the main proceedings could not be awaited.
The Discovery of Phantom Judgments
With the help of her lawyer, the woman immediately lodged a complaint with the Court of Appeal against the refusal of legal aid. The grounds argued that the first instance had set the bar for the prospects of success too high. To support this legal opinion, the law firm reached for a supposedly strong authority: a decision of the Federal Court of Justice (BGH) of November 14, 2007 with the file number XII ZB 183/07, allegedly published at page 137 of the specialist journal FamRZ 2008.
What the lawyers apparently failed to notice was that this citation was fictitious. When the judges of the 17th Senate for Family Matters tried to consult the source, they came up empty. After "extensive" research, they found that no decision under this file number exists in any legal database or on the BGH website.
A decision by the Karlsruhe judges does in fact appear on the cited page of the journal in question. But it dates from a different year, bears a different file number, and deals with the assessment of a self-employed person's income after consumer bankruptcy. It had nothing to do with legal aid.
Audit obligations in the age of AI
The Court of Appeal noted in its reasoning (ref.: 17 WF 144/25) that this was apparently the product of a "fantasizing" AI. The Senate also noticed that the original application contained another phantom citation, a decision of the Brandenburg Higher Regional Court that likewise could not be found anywhere. In its decision of November 20, 2025, the Court of Appeal not only rejected the mother's complaint as unfounded but also used the occasion to issue a fundamental reprimand.
Lawyers, the court held, are obliged, both by their duty to the client and by their role as organs of the administration of justice, to examine written submissions thoroughly before filing them. This applies all the more when tools such as AI are used.
The case illustrates a growing problem in legal practice: the hallucinations of language models. These AIs are trained to generate linguistically plausible text, but they have no grasp of facts or reality. Asked for a suitable ruling, they will, if in doubt, "invent" a file number and a citation that look deceptively real. For the courts this means considerable extra work: judges now have to verify sources that were previously considered reliable.
The case that gained international notoriety was Mata v. Avianca in the USA, in which a New York lawyer submitted entirely fictitious precedents in a brief generated by ChatGPT and was fined. Reports of such "citation ghosts" are increasing in Germany as well. The current Berlin decision now sets a formal marker for the legal profession. Ultimately, the mother's complaint failed not because of the bogus citations, but for lack of substance.
(my)
