By: Carla Rubio
We now live in a world where artificial intelligence (AI) seeps into every aspect of our lives. AI has become a tool, used for time-consuming tasks such as research or brainstorming, or a crutch, used in place of a simple Google search or to complete substantive work. This summer, two federal judges, in Mississippi and New Jersey, committed the shocking error of using AI to write their court orders. Used to this extent, AI puts our due process rights at risk.
Following an order issued by Mississippi U.S. District Judge Henry Wingate, the attorneys in a civil rights lawsuit filed a motion on July 20 to clarify or correct the order. They pointed out inconsistencies in the order, including “incorrect plaintiffs and defendants” and “allegations that do not appear in the operative complaint and/or are not supported by record evidence.” In the same vein, attorneys in a New Jersey securities lawsuit wrote to the court to point out “a series of errors in the Opinion” by U.S. District Judge Julien Neals, such as misstated cases and mistaken quotes.
These judicial errors mirror a pattern established by attorneys in recent years. Lawyers have made headlines for using AI in court and even for using AI to defend their use of AI. Those episodes now serve as cautionary tales, warning the legal community to double-check its work before submitting a document with fictitious cases or incorrect parties. These practitioners, among many others, have demonstrated that current procedural safeguards are not enough to avoid egregious mistakes—mistakes tantamount to bad faith and reckless conduct. Now that AI has seeped onto the bench, however, there is a greater risk than a misplaced quote: a possible due process violation.
Procedural due process requires the government to satisfy three basic elements before depriving a person of life, liberty, or property: notice, an opportunity to be heard, and an impartial decisionmaker. The Supreme Court stated in Marshall v. Jerrico, Inc. that “[t]he neutrality requirement helps to guarantee that life, liberty, or property will not be taken on the basis of an erroneous or distorted conception of the facts or the law.” Without sufficient safeguards, due process demands a greater opportunity to contest decisions raising these reliability concerns. Due process demands that judges make reasoned decisions grounded in established law, not in the errors or fabrications of AI. Senate Judiciary Committee Chairman Chuck Grassley said, “[w]e can’t allow laziness, apathy or overreliance on artificial assistance to upend the judiciary’s commitment to integrity and factual accuracy.”
Those on the bench have already begun taking corrective measures to verify the accuracy of their court orders and safeguard due process rights in the face of this ever-developing issue. In response to his errors, Judge Wingate implemented a second independent review of all opinions and now requires all cited cases to be printed and attached. Judge Neals enforced a written policy prohibiting law clerks from using AI and now requires a multi-level opinion review.
These approaches echo aspects of the corrective measures taken by Colombia’s court system, as both are concerned with transparency and accountability mechanisms. In Colombia, a judge used the AI model ChatGPT to rule that a child with disabilities was exempt from paying fees to a health insurance company in order to access medical treatment. Judge Juan Manuel Padilla used AI to help resolve the matter, asking the model questions such as, “Is an autistic minor exonerated from paying fees for their therapies?” It answered, “Yes, this is correct. According to the regulations in Colombia, minors diagnosed with autism are exempt from paying fees for their therapies.”
Employing AI in a case concerning a child’s fundamental right to health raised alarm bells over the use of AI tools for legal reasoning. Judge Padilla argued that ChatGPT performs secretarial services in an “organized, simple and structured manner” and could “improve response times” in the justice system. Although the AI model’s answer helped motivate his verdict, he claimed that the model relied on precedent to reach its conclusion.
Yet Colombia’s Constitutional Court found that cutting these corners is not worth endangering one’s right to due process. In a hearing prompted by Judge Padilla’s actions, Prof. Juan David Gutiérrez testified to the Court that “[t]he Constitutional Court warns when human beings simply copy and paste information without a review, it cannot only be incorrect or inaccurate, but false[,] a judge must make value judgments, that cannot be done by a machine.” As a result, the Court directed the Consejo Superior de la Judicatura, Colombia’s judiciary governing body, to adopt the UNESCO model and create Colombia’s Guidelines for the Responsible and Safe Use of Generative AI in the Judicial Branch. These guidelines offer practical guidance on implementing AI ethically across different judicial cases as well as training spaces to promote learning about AI in the judicial context, along with its risks and benefits. This effort exemplifies a focus on how AI affects an individual’s rights.
The U.S. judicial system needs an enforceable uniform standard, similar to the one adopted by Colombia, to help judges understand the risks and benefits of AI. Individual self-imposed reforms are not enough. Although Judge Wingate’s and Judge Neals’s new remedial measures represent steps in the right direction, due process requires mandatory procedural safeguards. These may include mandatory disclosures when AI is used in opinion drafting, comprehensive training programs, and independent reviews to ensure verified citations. Colombia’s approach ensures accountability while preserving the innovation and potential benefits of AI tools. This must be an effort from both the bench and the government—courts must develop internal protocols while Congress establishes uniform standards. Only through this active effort can we safeguard due process rights, build fairness into AI systems, and prevent the erosion of judicial integrity.

