Artificial Intelligence (AI) has quickly moved beyond the realm of science fiction and tech-world speculation and is now woven into our everyday lives, including our work as family mediators. In fact, AI isn’t entirely new at all. It’s been showing up for years in the form of Word’s spellcheck, predictive text on our phones, and GPS systems that adjust our route mid-drive. We may not have called these tools ‘AI’ at the time, but they’ve been helping us work more efficiently, communicate more clearly, and make smarter decisions all along.
Today, AI tools are becoming more visible in our field, from intake forms and parenting plan templates to the ways clients initially engage with us. Some clients arrive with AI-generated messages or sample agreements in hand, using these tools to clarify what they’re hoping to achieve through mediation. Before we hand over the flip chart, or the Zoom link, to a chatbot, it’s worth asking: what role should AI actually play in family mediation?
Used wisely, AI offers some genuinely helpful tools.
It can enhance efficiency. Automated systems can now support tasks like intake, scheduling, and even the initial drafting of agreements using tools like the Parenting Plan Guide and Template provided by AFCC-O. Mediators may use AI to generate templates, summarize case notes, or lighten their administrative load.
AI can help broaden access to services. Clients, whether in urban centres or remote, under-served communities, can now access plain-language legal information anytime, thanks to resources like Steps to Justice.
And, as mentioned earlier, AI can help families feel more prepared when they walk through our (virtual or physical) doors. For families who are stressed, overwhelmed and navigating separation for the first time, AI can offer a welcome starting point. When clients arrive having already gathered key documents or drafted a basic parenting plan, sessions may run more efficiently and affordably. In this sense, AI can act like a virtual assistant, quietly running in the background so we can focus on the human side of mediation.
Even so, AI is a tool, not a finished product. Like a recipe book, it provides structure and guidance towards an end result, but it doesn’t account for every individual’s needs or preferences. As mediators, we know that no two families are alike. We don’t serve the same “dish” to every client. AI requires the same thoughtful approach. It might generate a draft parenting plan, but we must tailor that plan based on the emotional, cultural, and practical realities of the people in the room. Blindly following an AI-generated solution, without adapting it to context, misses the very heart of mediation: responsiveness and relationship.
Issue 30, July 2025

AI has significant limitations and cautions.
One of the primary concerns is accuracy. AI-generated content can look polished and sound confident yet be entirely incorrect. We’re already seeing examples where clients present beautifully formatted parenting plans or legal clauses created by ChatGPT, complete with outdated laws, missing context, or assumptions that don’t apply to their situation. It’s the digital equivalent of baking a cake with salt instead of sugar because the online recipe looked professional, and the ingredients looked the same at a glance. These so-called “hallucinations” (moments when AI generates content that sounds convincing but is completely inaccurate) can lead clients and mediators down the wrong path if they put too much faith in the generated outcome or expect AI to do all the work for them.
Privacy presents another serious challenge. AI platforms process user input to generate results, and that input may include highly sensitive personal details about abuse, finances, parenting conflict, or trauma. If users aren’t aware of how their data is stored, shared, or used, they may inadvertently put themselves at risk. As mediators, our duty to protect confidentiality extends to any technology we use in our practice. We must understand how AI tools work before relying on them or allowing them to shape our process. A simple but important tip: never input client names or any other identifying information into an AI tool. When in doubt, keep details general and protect confidentiality at every stage. Did you know that in some circumstances AI can read information that was “redacted” by covering the text with a shape in a PDF document, because the underlying text is still there?
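For readers who handle client text digitally, the "keep details general" tip above can be partly automated. The sketch below is purely illustrative (the `scrub_identifiers` function and the sample note are invented for this example, and this is not a complete de-identification solution); it shows one simple way to swap known names, email addresses, and phone numbers for neutral placeholders before any text goes near an AI tool. Output should still be reviewed by a human before sharing.

```python
import re

def scrub_identifiers(text, names):
    """Replace known client names and common identifiers with neutral
    placeholders. An illustrative sketch only -- always review the
    result manually before pasting anything into an AI tool."""
    # Replace each known name (case-insensitively) with a numbered label.
    for i, name in enumerate(names, start=1):
        text = re.sub(re.escape(name), f"[Party {i}]", text, flags=re.IGNORECASE)
    # Mask email addresses.
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[email]", text)
    # Mask phone-like digit strings (digits optionally separated by spaces/hyphens).
    text = re.sub(r"\b(?:\d[\s-]?){7,11}\d\b", "[phone]", text)
    return text

note = "Sarah Jones (sjones@example.com, 416-555-0199) wants weekday parenting time."
print(scrub_identifiers(note, ["Sarah Jones"]))
# → [Party 1] ([email], [phone]) wants weekday parenting time.
```

Even a small habit like this reinforces the broader point: the mediator, not the tool, remains responsible for what leaves the office.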
Bias is a quieter and equally pressing issue. Because AI is trained on vast amounts of historical data, much of which reflects longstanding social, cultural, and systemic inequities, it can easily replicate and reinforce those patterns. Parenting norms, cultural expectations, or gender roles can be embedded in an algorithm without the creators or users even realizing it. For example, AI-generated parenting plans may default to traditional assumptions about caregiving roles or decision-making authority, subtly skewing outcomes before the mediation even begins.
In mediation, we are already genuinely attuned to the importance of screening for family and intimate partner violence, and for power imbalances, especially those shaped by gender, race, socio-economic status, or family dynamics. As we begin to explore how AI might assist our work, we need to remember: AI is not a neutral voice. It doesn’t bring the same care, context, or human understanding we do. AI reflects the world it was built in, and unless challenged, it can unintentionally validate the very biases we work so hard to recognize and dismantle in our practice. Mediators have a responsibility to critically assess AI-generated content, just as we would any other source of influence in the room.
Another limitation is that AI lacks emotional intelligence. While it can provide information and generate options, it cannot detect undertone, hear the hesitation in a client’s voice, or notice when someone’s “I’m fine” actually means “I’m not okay.” It doesn’t know how to pause for silence, adjust to a shifting dynamic, or recognize the subtle signs of distress or resistance. Mediation is an art of presence and attunement; no algorithm can replicate that. When AI is part of the process, the professional entering the data becomes the sole communicator between people and the problem. That role is delicate. So much can be lost in translation, tone, nuance, context. The deeper story often slips through the cracks, and when the mediator becomes just a messenger, we risk losing the very human insight that makes this work transformative.
At the heart of family mediation is our ability to show up for our clients and be fully present. We bring not only our knowledge base, but our empathy, curiosity, and intuition into the room. We ask thoughtful, insightful questions. We listen deeply. We remain open and curious, learning in each conversation what the person in front of us truly needs. The heart-led intentions of family mediators matter. We are not just guiding a process; we are holding space for people at one of the most vulnerable times in their lives.
This is especially true when it comes to screening for family violence or coercive control. AI cannot replace the skill, sensitivity, and trust-building required for meaningful screening. Survivors often disclose risk only after rapport has been carefully built over time. How questions are asked, and who is asking them, matters profoundly. Screening is not just a checkbox; it’s a human interaction rooted in safety and respect. While AI may assist with tools or prompts, it cannot replicate the ethical presence and human-centred care this work demands.
At the same time, avoiding AI altogether isn’t the solution either. The question isn’t whether we use it, but how: ethically, intentionally, and with our professional values intact. To that end, a few guiding principles can help shape our approach:
- People first: AI can support our work, but it should never replace the human connection at the core of mediation.
- Transparency and informed consent: Clients should always know when AI is being used, what it does, and what happens to their data.
- Cultural humility and equity: Documents and processes must be tested with families of all shapes and sizes, and their design should reflect the real-world complexities of lived experience.
- Digital literacy for mediators: We need to understand the tools we’re using and be able to explain their benefits and risks to clients.
Professional associations like the Ontario Association for Family Mediation (OAFM) have an important leadership role to play in this space. Through standards of practice, education, and advocacy, we can help shape how AI is integrated, ensuring it aligns with principles of trauma-informed care, inclusivity, and relational integrity. We can also be proactive in pushing back on tools that overstep or oversimplify, while advocating for those that support accessibility, safety, and efficiency.
Ultimately, AI is here to stay. It can be a helpful companion in our work, saving time, increasing access, and supporting families through the early stages of separation or divorce. Yet we must remember that AI is not the chef; it provides an average recipe drawn from 10,000 cookbooks. It gives us structure, ideas, and shortcuts, but we are still the ones in the kitchen. We are the ones who taste, adjust, and adapt. We know when to add a pinch more empathy, when to turn down the heat, and when to stop everything because someone just needs to be heard.
So let people lead and let AI assist.
By: Mary-Anne Popescu, Executive Director, OAFM