How Can Artificial Intelligence Help Family Lawyers?

How Can Artificial Intelligence Help Family Lawyers? I asked ChatGPT this succinct question. Like many writers, I typed my prompt into the search bar and waited patiently as the answers floated dreamily onto my screen. Overall, the categories were predictable, with workflow automation, document drafting and review, and legal research leading as mechanisms to make the practice of family law more efficient and accessible. Artificial intelligence, we are told, is synonymous with access to justice.

I consider myself to be technology reluctant. It is not that I don’t like technology, or that I resist efficiency; it is that I have established my ways of getting things done and I am hesitant about change. But what about those little changes that AI introduces into your life so subtly that you don’t even realize it is AI? Like the “suggested” replies at the bottom of your email reply box? Or the swipe-right function with suggested text?

We use AI every day without realizing it. Whether it is the chatbot that greets us on a website we are visiting, or our favourite restaurants being pushed to the top of our mobile food app when the app has predicted, based on previous behaviour, that it is our hungriest time. When I am reading my local news, advertisements for my favourite shoe store push shoes “on sale” that seem a little too good to be true. I know deep down inside to resist the AI sales tactics, but I can’t help myself. I spend the rest of the month squirreling away packages in my closet, away from my spouse’s questioning eyes.

There are a few legal AI programs that my office has learned to love, and I boast about them without any endorsement deals from their creators. First, the CLIO document creator is predictive and efficient. Second, and this is a recent development, are the AI summaries for cases available on CanLII. From other practitioners, I have learned that AI can be used to project potential settlements of financial issues or to provide summaries of relevant case authorities.

Not only do I see AI as a helpful tool for practitioners, but also as a mechanism for self-represented litigants to gain access to plain-language legal information. For those lawyers who access ChatGPT to conduct research, being able to generate research summaries quickly can save clients money.

While many of us dream about our cases at night, AI’s greatest weakness is its capacity to hallucinate case law. This article could just as easily have been entitled “How Does Artificial Intelligence Terrify Family Lawyers?” A hallucination is exactly what it sounds like: a fake, non-existent case that the AI tool dreams up to fill a gap in the tool’s search efforts.

When artificial intelligence is left unchecked and hallucinated cases end up in materials submitted to opposing counsel and courts, it has the potential to be a source of career embarrassment, solicitor costs orders, and even professional liability. Ensuring that hallucinated cases don’t end up in facta submitted to court or to opposing counsel must be stringently guarded against.

To avoid falling victim to hallucinations, it is necessary to double-check all of your work against the case law that your AI research tool is citing.

The recent case of Ko v Li, 2025 ONSC 2766, is instructive. In this case, counsel for the Applicant delivered a factum that was prepared by artificial intelligence. Hyperlinks to specific cases within the factum were broken or went to incorrect cases. There were other signs that the factum had the help of AI: it had no page numbers or paragraph numbers.

As counsel, we sign our names to the legal memoranda, advice, and facta we disseminate. In Ko v Li, the judge highlighted that the factum was signed off with “All of which is respectfully submitted…,” indicating a lawyer’s affirmation closing the factum. Our signatures are our reputations. Justice Myers of the Superior Court couldn’t have been clearer when he decisively wrote:

[15] All lawyers have duties to the court, to their clients, and to the administration of justice.

[16] It is the lawyer’s duty to faithfully represent the law to the court.

[17] It is the lawyer’s duty not to fabricate case precedents and not to mis-cite cases for propositions that they do not support.

[18] It is the lawyer’s duty to use technology, conduct legal research, and prepare court documents competently.

[19] It is the lawyer’s duty to supervise staff and review material prepared for her signature.

[20] It is the lawyer’s duty to ensure human review of materials prepared by non-human technology such as generative artificial intelligence.

[21] It should go without saying that it is the lawyer’s duty to read cases before submitting them to a court as precedential authorities. At its barest minimum, it is the lawyer’s duty not to submit case authorities that do not exist or that stand for the opposite of the lawyer’s submission.

[22] It is the litigation lawyer’s most fundamental duty not to mislead the court.

This restatement of what is otherwise encapsulated in the Rules of Professional Conduct is a reminder of how our basic obligations to the profession and the public carry over to technology as advanced as artificial intelligence.

We all make mistakes. If you have mistakenly included a hallucination in your factum, get out ahead of it as soon as possible. Embrace your error, and do not be afraid to take responsibility for it. We don’t need to pretend we aren’t using artificial intelligence tools; we just need to be better at acknowledging their frailties and taking responsibility for our shortcomings. Take for example the case of Zhang v Chen, 2024 BCSC, where the lawyer took responsibility for the hallucinated cases contained in the factum by sending a letter of apology to the Court. In a profession where showing weakness is frowned upon, this was a classy act.

The message from Zhang v Chen is stark: “Citing fake cases in court filings and other materials handed up to the court is an abuse of process and is tantamount to making a false statement to the court. Unchecked, it can lead to a miscarriage of justice.” In Ko v Li, Justice Myers cited the apology issued in Zhang v Chen as a distinguishing point from the case in front of him. The consequences for counsel in Ko v Li were quite different: an invitation to attend and explain why contempt proceedings should not be commenced. Again, in the face of a sophisticated technological error, the human act of an apology carried the utmost weight.

This is not a warning to cease using AI tools entirely. But be cautious. Be aware of your court or tribunal’s most current practice direction on the use of artificial intelligence tools. Review the cases that AI suggests and ensure the links are authentic. Your role as a professional is to have a complete command of the information that you are disseminating. AI has a long way to go before it replaces the critical oversight that knowledgeable counsel brings to a file.

A small caution at the bottom of my ChatGPT query that started this article has also stuck with me:

Family law often involves high emotions and vulnerable parties (like children), so AI must be used carefully. Human judgment, empathy, and ethical sensitivity cannot be replaced. AI should be a support tool, not a decision-maker in family law.

This is a fact that cannot be denied. There is no automation that can help a family process the grief of divorce, or that can fully grasp the nuances of raising kids. Even ChatGPT showed remarkable insight into this fact by identifying the human factors that are so prevalent in family law.

So while AI is infiltrating the legal profession, fast and hard, and we all find ourselves jumping on this learning curve, I can comfortably say we have a long way to go before it completely replaces us.

By: Neha Chugh, Criminal, family and child protection lawyer in Cornwall