On Friday, October 10, 2025, Law360 reported that two federal judges are under scrutiny after withdrawing orders containing significant errors, raising concerns about the potential use of artificial intelligence in their drafting. Despite demands for answers from lawmakers, including Senate Judiciary Committee Chair Sen. Chuck Grassley, R-Iowa, it remains uncertain whether the judges will disclose the circumstances surrounding the errors or face disciplinary action.
U.S. District Judge Julien Xavier Neals of the District of New Jersey retracted an opinion in July that had denied a motion to dismiss a securities class action. The order was found to contain fabricated quotes and references to incorrect court decisions. Similarly, U.S. District Judge Henry T. Wingate of the Southern District of Mississippi withdrew an order that had temporarily halted the enforcement of a state law concerning diversity, equity, and inclusion (DEI) in public schools. Wingate’s order contained nonexistent allegations, misidentified litigants, and referenced terms absent from the legislative text.
Observers have suggested that AI may have been used by either judge or their clerks to draft the problematic orders, but neither judge has confirmed or denied these claims. According to University of Miami School of Law professor Christina M. Frohock, the judges are unlikely to be compelled to explain the errors and may not face disciplinary repercussions if they choose to remain silent.
Judge Neals, a relatively recent appointee nominated by President Joe Biden in 2021, has been described as “industrious, thorough, effective, and hard working” by Ralph J. Lamparello, co-managing partner of Chasan Lamparello Mallon & Cappuzzo PC. Lamparello, who has known Judge Neals for 35 years, suggested the incident might stem from a mix-up involving a draft opinion. Judge Neals declined to comment on the matter.
Judge Wingate, appointed by President Ronald Reagan in 1985, has served on the Southern District of Mississippi for four decades. After Mississippi’s attorney general requested an explanation, Judge Wingate corrected his temporary restraining order but declined to provide further details, stating that the corrected order was controlling and “no further explanation is warranted.”
This is not the first time Judge Wingate has faced scrutiny. In 2016, the Fifth Circuit removed him from a lawsuit due to delays in ruling on pending motions. The following year, the court’s chief judge restricted the assignment of new civil cases to Judge Wingate due to a backlog. Additionally, the Fifth Circuit overturned an injunction issued by Judge Wingate that halted an investigation into piracy on Google’s services.
Law professors have expressed concern over the potential use of AI by judges, but acknowledge the high volume of orders and opinions they issue. Zac Henderson, a visiting professor at the University of California College of the Law, San Francisco, emphasized that most judges and their clerks do not encounter such issues.
Frohock noted the disparity in accountability between attorneys and judges, stating that while attorneys are compelled to explain AI-related errors, judges are not subject to the same requirement. She also suggested that disciplinary consequences for the judges are unlikely, as judicial canons and the code of conduct lack strict enforcement mechanisms.
Fordham Law School professor Bruce A. Green stated that issuing opinions with nonexistent decisions or untrue facts is not “competent and diligent” judicial work. However, he believes that federal judges are unlikely to be sanctioned for a single instance of this nature, suggesting that “public embarrassment is enough to reinforce it.”
Source: Law360