THE RISKS OF USING ARTIFICIAL INTELLIGENCE TO SOLVE YOUR LEGAL PROBLEMS IN AUSTRALIA

The rapid rise of AI tools, particularly large language models and products such as ChatGPT, has changed how people access information. For individuals facing legal issues, the temptation to use these seemingly authoritative technologies is understandable, particularly given the cost of legal services. However, using AI to address legal matters carries profound risks that can have serious consequences for your case, your finances, and your future.

This article examines those dangers, including the well-documented phenomenon of AI “hallucinations,” the psychological tendency towards overconfidence when using AI, the real-world consequences for self-represented litigants, and the regulatory responses from Australian courts, including the NSW Supreme Court, the Federal Court of Australia, and courts in other jurisdictions. 

 

What are AI Hallucinations in Legal Research?

One of the most dangerous characteristics of generative AI is its propensity to produce “hallucinations” — outputs that appear plausible but are factually inaccurate or entirely fabricated. In a legal context, this can manifest as false citations, invented case law, and fictitious statutory references. The NSW Supreme Court’s Practice Note SC Gen 23¹ explicitly warns that AI tools can generate “apparently plausible, authoritative and coherent responses” that are actually “inaccurate or fictitious.”

A May 2024 analysis² by Stanford University’s RegLab, which examined AI research tools from LexisNexis (creator of Lexis+ AI) and Thomson Reuters (creator of Westlaw AI-Assisted Research and Ask Practical Law AI), found that while these tools reduced errors compared with general-purpose models such as GPT-4, even these bespoke legal AI tools still generated hallucinations at alarming rates: the Lexis+ AI and Ask Practical Law AI systems produced incorrect information more than 17% of the time, while Westlaw’s AI-Assisted Research hallucinated more than 34% of the time.

These error rates are far higher than would be acceptable in responsible legal practice.

For Australian users, there is an added concern: AI tools trained predominantly on overseas material may provide information that is simply inapplicable in Australian jurisdictions.

While AI has improved considerably since 2024 and continues to improve, the risk of hallucination remains very much present. 

 

The “Reverse Dunning-Kruger Effect”: Why AI Makes You Overconfident

Beyond AI’s technical failings, there is a psychological dimension that makes these tools particularly dangerous for non-experts. Research published in Computers in Human Behavior³ identified what the researchers describe as a “Reverse Dunning-Kruger Effect” when people interact with AI: rather than less competent users being the most overconfident, all users overestimated their performance regardless of skill level. Most strikingly, those with greater AI literacy showed even more overconfidence.

The study also found that most users engaged in “cognitive offloading” — blindly trusting AI output without verification, typically posing a single question and accepting whatever answer was provided.

In the legal context, this is particularly unsafe. A person using ChatGPT to research their legal rights may receive confident-sounding but completely wrong advice, and research suggests they will likely overestimate both the quality of that advice and their own understanding of the issues.

 

How Australian Courts Are Responding to AI in Legal Proceedings

The NSW Supreme Court

Practice Note SC Gen 23⁴, issued by the NSW Chief Justice on 21 November 2024 and effective from 3 February 2025, takes a strict approach. It prohibits the use of AI to generate content for affidavits, witness statements, character references, or expert reports without prior court approval. Any citations in written submissions must be manually verified without relying on AI. The Practice Note makes clear that AI use does not reduce a practitioner’s professional and ethical obligations to the Court. 

 

The Federal Court of Australia: Practice Note GPN-AI

Following consultation that commenced in April 2025, the Federal Court issued its Generative Artificial Intelligence Practice Note⁵ (GPN-AI). It takes a more permissive approach, acknowledging that AI can increase efficiency and access to justice, but makes clear that existing legal and professional obligations are unchanged.

The Practice Note applies to everyone who files documents with the Court, including self-represented litigants. Where AI has been used in preparing a document, the responsible person must confirm that:

  • facts stated in pleadings are based on what the party reasonably considers can be proved;
  • legal authorities cited exist and support the propositions stated; and
  • evidence cited exists, is or will be before the Court, and is reasonably likely to be admissible.

Non-compliance carries serious consequences, including adverse costs orders. Where AI use requires disclosure, parties must be able to specify what AI was used, how, and for what purpose.

 

Other Australian Jurisdictions

Victoria’s Supreme Court and County Court have adopted guidelines emphasising transparency, data privacy, and professional obligations rather than outright bans. Queensland and Western Australia have issued practice directions, and other jurisdictions are actively developing their own guidance. 

 

Lawyers Being Sanctioned for AI Misuse in Australian and International Courts

Australian practitioners have already faced consequences.

In Handa & Mallick [2024] FedCFamC2F 957⁶, a family law lawyer produced a list of authorities that contained citations to cases that did not exist. In Valu v Minister for Immigration and Multicultural Affairs (No 2) [2025] FedCFamC2G 95⁷, a lawyer admitted filing submissions containing non-existent cases generated by ChatGPT, believing they “read well” without independently verifying them. The lawyer was referred to the NSW Legal Services Commissioner.

As reported by the Guardian⁸, a Victorian lawyer became the first in Australia to face professional sanctions for using AI in a court case, after AI-generated false citations that he had failed to verify appeared in his filings. His authorisation to practise as a principal lawyer was removed for two years, meaning he could no longer operate his own law practice or handle trust money and could practise only as an employee solicitor.

Internationally, the problem is accelerating. As of 6 May 2026, researcher Damien Charlotin’s AI Hallucination Cases database⁹ had identified 1,397 cases worldwide in which fabricated AI material appeared in court filings, with 955 of those cases, the vast majority, reported in the United States. In 2025, Charlotin observed that the database had gone from recording around two new cases per week to recording two to three cases per day.

In Australia, the database identifies 73 decisions affected by AI-generated hallucinations, overwhelmingly involving self-represented litigants. This imbalance is unsurprising, given the role of cognitive offloading and the limited capacity of self-represented litigants to assess the reliability of AI-generated legal content. However, one instance is attributed to an ACAT Tribunal member, which demonstrates that even experienced legal practitioners can make errors when relying on AI-generated content.

 

The Real-World Consequences for Self-Represented Litigants

AI hallucinations in court filings can mislead judges, appear in court judgments, and sway disputes between parties, undermining the integrity of the legal system.

For self-represented litigants, the risks are even greater than for lawyers. Without professional training, ethical obligations, or insurance, members of the public have no safety net when AI leads them astray. Relying on fabricated case law or a misunderstanding of your rights could mean losing a case you might have won and possibly cannot appeal, missing a limitation period, inadvertently waiving rights, or making admissions that damage your position.

By way of example, we were recently briefed to act for a company that had been served with a creditor’s statutory demand issued under s 459E of the Corporations Act 2001 (Cth) (Act). The client had relied on ChatGPT to prepare a response disputing several parts of the demand, citing a number of cases and legal principles that appeared facially credible. However, this focus on substantive argument obscured a critical procedural requirement: the client failed to apply to set aside the statutory demand within the strict 21-day period prescribed by the Act. As a result, despite having arguable grounds to challenge the demand, the company lost the right to do so. Once winding-up proceedings were commenced, the company was left with no realistic option but to settle with the plaintiff or risk being wound up.

 

Frequently Asked Questions About AI and Legal Advice in Australia

Can I use ChatGPT to write a legal document in Australia?

There is no blanket law preventing you from using ChatGPT to draft a document, but doing so carries serious risks. AI tools frequently fabricate cases, misstate the law, and produce advice inapplicable to your Australian jurisdiction. If you file a document containing AI-generated errors, you may face adverse costs orders or dismissal. For any document with legal consequences, obtain advice from a qualified Australian legal practitioner.

What is an AI hallucination and why is it dangerous?

An AI hallucination occurs when an AI tool invents a case, statute, quote, or legal principle that does not exist, typically presented with complete confidence. These fabrications can be almost impossible to detect without checking against authorised legal databases, and multiple Australian lawyers have already been sanctioned for filing them.

What rules do Australian courts have about using AI?

Australian courts have issued increasingly detailed guidance. The NSW Supreme Court prohibits AI use in affidavits, witness statements, and expert reports without leave. The Federal Court’s Practice Note GPN-AI requires disclosure of AI use and independent verification of all citations. Victorian, Queensland, and Western Australian courts have issued or are developing their own practice directions.

 

Conclusion: There Is No Substitute for Qualified Legal Advice

The accessibility of AI tools like ChatGPT makes them an attractive starting point for people facing legal problems. But the evidence is clear: AI systems routinely hallucinate, fabricating cases and legal principles that do not exist. Research shows that AI use triggers overconfidence in users, compounding the risk. Australian courts have responded with significant restrictions and disclosure obligations. And legal professionals worldwide are being sanctioned at accelerating rates for filing AI-generated content without verification.

The law is complex, nuanced, and jurisdictionally specific. Getting it wrong can mean losing your case, your money, your liberty, or your rights. When your legal interests are at stake, there is no substitute for qualified legal advice from a practitioner who understands your jurisdiction, can verify the accuracy of their research, and bears professional responsibility for the advice they provide.

AI-generated legal advice can carry significant risks. If you are dealing with a dispute, statutory demand or court proceeding, contact Madison Marcus’ Commercial Litigation team for strategic legal advice tailored to your circumstances.

 

References

¹ Supreme Court of New South Wales, Practice Note SC Gen 23 – Use of Generative Artificial Intelligence (Gen AI): https://supremecourt.nsw.gov.au/documents/Practice-and-Procedure/Practice-Notes/general/current/PN_SC_Gen_23.pdf

² Stanford HAI, ‘AI on Trial: Legal Models Hallucinate in 1 out of 6 (or More) Benchmarking Queries’: https://hai.stanford.edu/news/ai-trial-legal-models-hallucinate-1-out-6-or-more-benchmarking-queries

³ ‘Generative AI and the Reverse Dunning-Kruger Effect’, Computers in Human Behavior: https://www.sciencedirect.com/science/article/pii/S0747563225002262?via%3Dihub

⁴ Supreme Court of New South Wales, Practice Note SC Gen 23 – Use of Generative Artificial Intelligence (Gen AI): https://supremecourt.nsw.gov.au/documents/Practice-and-Procedure/Practice-Notes/general/current/PN_SC_Gen_23.pdf

⁵ Federal Court of Australia, Generative Artificial Intelligence Practice Note (GPN-AI): https://www.fedcourt.gov.au/law-and-practice/practice-documents/practice-notes/gpn-ai

⁶ Handa & Mallick [2024] FedCFamC2F 957: https://jade.io/article/1088721

⁷ Valu v Minister for Immigration and Multicultural Affairs (No 2) [2025] FedCFamC2G 95: https://jade.io/article/1115083

⁸ Melissa Davey, ‘Lawyer caught using AI-generated false citations in court case penalised in Australian first’, The Guardian (online, 3 September 2025): https://www.theguardian.com/law/2025/sep/03/lawyer-caught-using-ai-generated-false-citations-in-court-case-penalised-in-australian-first

⁹ Damien Charlotin, AI Hallucination Cases Database: https://www.damiencharlotin.com/hallucinations/

 

Cristian Fuenzalida: Partner, Commercial Litigation & Insolvency

Cristian Fuenzalida is a Partner in the Commercial Litigation division at Madison Marcus, with more than a decade of experience across commercial litigation, insolvency and restructuring, property and co-ownership disputes, and building and construction disputes. He is known for delivering practical, commercially focused legal advice and effective dispute resolution strategies tailored to his clients’ objectives.