In a bombshell of a case, a Quebec court recently overturned an arbitration decision based on the arbitrator’s use of generative artificial intelligence in writing the reasons for decision. The decision in Association des ressources intermédiaires d’hébergement du Québec (ARIHQ) c. Santé Québec – Centre intégré universitaire de santé et de services sociaux du Centre-Sud-de-l’Île-de-Montréal, 2026 QCCS 1360 was premised on the principle that when parties select an arbitrator to hear a dispute, they are entitled to expect the arbitrator to make the decision. This follows from the maxim “he who decides must hear”, a consequence of the rule audi alteram partem, “insofar as a litigant is truly ‘heard’ only if he is heard by the one who will decide his case”. However, according to the Quebec court, when large language models (LLMs) are used not as a mere tool but to effectively decide a case and to write the reasons, this principle is breached. The smoking gun in the case was the numerous references to non-existent or hallucinated authorities.
The following are important excerpts from the decision (translated using Google Translate):
2.2.4 The special case of artificial intelligence
[ 88 ] As Justice Morin pointed out in a recent case: “any technological measure that can facilitate citizens’ access to the justice system should be welcomed and regulated rather than prohibited and stigmatized”[81].
[ 89 ] An increasing number of legal professionals are using Large Language Models (LLMs) to help them summarize documents, sift through gigabytes of data to identify relevant documents, identify trends, transcribe audio files, correct or refine the drafting of existing text, cross-check precedents or conduct legal research.
[ 90 ] Moreover, the use of AI in a judicial context involves certain challenges and risks [82] :
90.1. Creation of False References or Hallucinations: The use of artificial intelligence to draft legal opinions or notes and authorities can sometimes lead to the creation of false legal references referred to as “hallucinations.” These hallucinations can be extremely deceptive and difficult to detect, as they often exhibit all the characteristics of genuine quotations and can only be identified as false after careful examination. This type of hallucination forces other parties and the court to devote valuable time not only to attempting to locate something that does not exist, but also to reviewing observations that rely on such nonexistent sources.
90.2. Lack of Discretion and Human Values: Some decisions made by the judicial system involve a degree of discretion. Computer programs operate according to programmed logic to arrive at a predetermined result. Such rigidity is incompatible with discretionary decisions [83] . Judicial decisions must sometimes take into account community values, the subjective characteristics of the parties, and any other contextual circumstances that may be relevant. AI algorithms are generally unable to consider human values, the application of which may vary depending on the circumstances. The changing nature of human values and the unprecedented situations that may arise make it difficult to know when and how these values should be integrated.
90.3. AI can introduce bias: AI operates on a statistical basis. It absorbs text, images, and conversations and reproduces what appears most likely based on contextual elements. Thus, AI systems can be unintentionally programmed to exhibit bias due to programmers’ prejudices or learn to be biased based on the data they are trained on [84] . These risks are exacerbated by the fact that the companies creating the algorithms hold the associated trade secrets and are unwilling to disclose the source code or even discuss how the algorithm works (this opacity is sometimes referred to as the “black box” phenomenon).
90.4. Lack of Confidentiality: Information transmitted to an AI tool is integrated into the system’s database and may subsequently be disclosed by the system when it responds to requests from other users. Therefore, feeding sensitive confidential information to an AI tool violates a lawyer’s obligation to maintain the confidentiality of their clients’ information [85] . When initiated by a decision-maker, this process may violate the obligation to maintain the secrecy of deliberations.
90.5. Public Trust in the Justice System: Our justice system only works if people trust it. The use of artificial intelligence in procedures that may then be used in judgments can lead to cynicism toward the legal profession and the justice system. Justice requires that decisions be perceived as fair, humane, and reasoned. If the public believes that a disembodied or opaque machine has made a decision, rather than a human being, trust in the justice system could be shaken.
[ 91 ] Aware of these risks, the courts have not hesitated to sanction the use of artificial intelligence by parties or their lawyers to draft procedures or notes and authorities, describing this practice as a serious breach that undermines the integrity of the judicial process [86], even justifying an order for court costs or disciplinary sanctions.
[ 92 ] In one of these cases, Justice Myers noted that such a practice could lead judges to base decisions on law that does not exist:
[14] Irrespective of issues concerning artificial intelligence, counsel who misrepresent the law, submit fake case precedents, or who utterly misrepresent the holdings of cases cited as precedents, violate their duties to the court.
[…]
[16] A court decision that is based on fake laws would be an outrageous miscarriage of justice to the parties and would reflect very poorly on the court and the civil justice system.
[17] In determining and pronouncing the law applicable to its decisions, the court receives submissions from counsel concerning the applicable law. As discussed below, the court relies on counsel to state the law accurately and fairly. Misrepresentation of the law by a lawyer poses real risks of causing a miscarriage of justice that undermines the dignity of the court and the fairness of the civil justice system.[87]
[ 93 ] The management of our Court, in a notice to the legal community and the public, also urged litigants and parties to exercise the utmost caution by warning of the risks associated with the inappropriate use of artificial intelligence:
Reliability: For any reference to case law, legislation, or commentary in submissions to the court, it is essential that the parties rely exclusively on sources from court websites, commonly cited commercial publishers, or well-established public services.
Human intervention: To ensure the highest standards of accuracy and authenticity, AI-generated observations must undergo rigorous human review. This verification can be achieved by cross-referencing with reliable legal databases to confirm that the references and their content withstand close scrutiny. Such an approach is consistent with long-standing legal practice [88] .
[ 94 ] Thus, while there is nothing inherently wrong with using a reliable artificial intelligence tool, the rules in force impose on lawyers and parties a duty of oversight to ensure the accuracy of their procedures. A lawyer who fails to carry out these checks evades their responsibilities.
[ 95 ] The pitfalls for decision-makers are even greater.
[ 96 ] As one author points out:
[A] judicial opinion is often thought to convey full authority or legitimacy only because (or if) its author has offered an adequate justification. Similarly, the judges who produce judicial opinions are often thought to be fulfilling their roles—to be instantiating the character of their occupation—only if they generate adequate justifications. The issue, in other words, is not whether the decision is justified at all or by anyone. The issue, at least in part, is whether the deciding court itself has adduced an acceptable justification. Judicial decisions without any accompanying justification can therefore be unsettling and are usually deemed tentative or otherwise peripheral—even though such decisions could be rationalized by one of the parties or by outside commentators.[89]
[ 97 ] Moreover, the Canadian Judicial Council clearly prohibits the delegation of judicial decision-making to artificial intelligence:
Judges are solely responsible for the judicial decisions they make. It must be understood unequivocally that no judge is authorized to delegate their decision-making power, whether to a judicial assistant, an administrative assistant, or a computer program, regardless of their capabilities [90] ….
[ 102 ] The Respondent also suggests that, in the absence of expert evidence, the Tribunal should refrain from concluding that artificial intelligence was used or that the cited authorities do not exist. However, the burden of proof in civil matters is based on the balance of probabilities. Furthermore, the fact that the decisions do not exist has been established, since each of the neutral references cited by the Arbitrator leads to other decisions that have no connection to the subject matter in support of which they are cited.
[ 103 ] In paragraph [71] of the Award, we read:
[71] The legal scholar Frédéric Bachan 4 establishes that “the courts must be careful not to confuse prescription and forfeiture. One is a matter of public policy, the other is based on contractual freedom”;
[ 104 ] However, the article cited in support of the claim (footnote 4: Frédéric Bachand, “Prescription and forfeiture: shifting boundaries and practical issues”, Recent Developments in Contract Law , Barreau du Québec, 2016) cannot be found.
[ 105 ] In paragraphs [84] and [88] to [90], the Arbitrator writes:
[84] The Court of Appeal is of the opinion that a contractual clause imposing a rigid time limit for asserting a right in a conventional or administrative regime is permissible if the parties are in a position of contractual equality (e.g. collective agreements);
[…]
[88] The Court of Appeal 5 confirms that forfeiture clauses are valid and enforceable when they are clear and reasonable. The latter recognizes the validity of a limitation period provided for in a collective agreement, specifying that it is not a prescription within the meaning of the Civil Code of Québec , but a rule of contractual order, applicable if it is clear and accepted;
[89] In another decision 6 , the Superior Court emphasizes that contractual forfeiture is lawful, distinct from prescription, and must be respected when it results from a clear and negotiated contract;
[90] Our Court of Appeal 7 also confirms the validity of a forfeiture clause in a conventional regime and specifies that such a clause is not comparable to a legal prescription;
5 City of Montreal v. Montreal Blue Collar Workers Union (CUPE, Local 301) , 2005 QCCA 591
6 Groleau and Groupe Pages Jaunes Cie , 2011 QCCS 5386
7 Tremblay v. Commission scolaire de la Jonquière , 2002 CanLII 24357 (QCCA)
[ 106 ] However, the decisions cited in footnotes 5 to 7 do not exist.
[ 107 ] The decision in City of Montreal v. Syndicat des cols bleus regroupés de Montréal, 2005 QCCA 591 (note 5) does not exist. The neutral citation refers to another decision [94].
[ 108 ] The decision in Groleau and Groupe Pages Jaunes Cie , 2011 QCCS 5386 (note 6) does not exist. The neutral citation leads to another decision [95].
[ 109 ] The decision in Tremblay v. Commission scolaire de la Jonquière , 2002 CanLII 24357 (QCCA), does not exist. The neutral citation also refers to another decision [96].
[ 110 ] In paragraphs [102] to [106], the arbitrator states:
[102] Clause 5-4.05, although admissible as a contractual termination clause, must be interpreted in the light of the applicable legal regime.
[103] The Tribunal considers that clause 5-4.05, although restrictive, is within a valid conceptual framework and pursues a legitimate objective. It does not violate the mandatory provisions of the Civil Code of Québec regarding prescription, since it constitutes a clear contractual admissibility requirement applicable in the specific context of collective health relations. As an agreed forfeiture clause, it is enforceable against the parties who consented to it.
[104] Moreover, this clause does not modify the civil limitation period, but defines the internal parameters of an administrative or conventional remedy.
[105] Such a clause is common in collective agreements (e.g., time limits for grievances, bonus claims), and the courts recognize it as valid provided it is clear, negotiated, equally applicable to the parties, and provides a reasonable time limit. An arbitration tribunal has even recognized a contractual time limit of 30 days. 8
[106] Arbitrators and courts accept these clauses when the reasonableness and functionality of the time limit is demonstrated, which is the case here.
8 Arbitrage CHU Ste-Justine (D.T.E. 2018-30).
[ 111 ] SOQUIJ confirms that the arbitration award CHU Ste-Justine (D.T.E. 2018-30) does not exist [97].
[ 112 ] Decision 2018EXPT-30 – Sonin v. Concordia University , rendered by the Administrative Labour Tribunal on December 5, 2017, and referenced as 2017 QCTAT 5536 [98] , dismissed an employee’s claim for retaliation by the employer on the grounds that the complaint had been filed outside the 30-day period stipulated in the Labour Code . Therefore, when the Arbitrator cites the decision to support the argument that “an arbitration body recognized a contractual period of 30 days” (paragraph 105 of the Award), this is incorrect.
[ 113 ] The references mentioned above are central to the Arbitrator’s reasoning. They constitute the only doctrinal or jurisprudential references used as legal support for the Award. The other jurisprudential references are included in sections of the Award that summarize the parties’ positions.
[ 114 ] The preponderance of evidence therefore leads to the conclusion that the Arbitrator delegated his authority and abdicated his role of reviewing the result.
[ 115 ] This conclusion is necessary because all the doctrinal and jurisprudential references on which the Arbitrator relies are non-existent and “hallucinated”.
[ 116 ] For this reason, the Award must be set aside.
[ 117 ] This conclusion does not imply that every judgment that cites erroneous references or uses artificial intelligence as a drafting tool should suffer the same fate. It is conceivable, for example, that there will be cases where the use of artificial intelligence will have been minimal or will have concerned a less important issue.
[ 118 ] In such cases, weighing the nature of the breach in relation to the arbitration proceedings initiated, determining the impairment of the integrity of the proceedings, and assessing the impact of the breach on the award [99] could lead to a different result.
[ 119 ] On the other hand, where the standard of review differs, for example, in an appeal (error of law or manifest and decisive error of fact) or judicial review (reasonableness), the absence of error or a decision deemed reasonable could lead to the decision being upheld notwithstanding a procedural defect. It is not necessary to decide on this here.
[ 120 ] In the present case, the deficiency is significant. It is likely to affect the parties’ confidence in the outcome and in the arbitration system in general. Since non-existent decisions are central to the arbitrator’s reasoning, a party may reasonably believe that a more thorough review of the decisions would have prompted the arbitrator to reconsider their position.
[ 121 ] Therefore, it is justified to conclude that the failure probably had a significant impact on the result.
FOR THESE REASONS, THE COURT:
[ 122 ] ANNULS the arbitration award issued on August 8, 2025 by the defendant, Mr. Michel A. Jeanniot;
[ 123 ] ORDERS the parties to choose a new arbitrator within 60 days of this judgment;
[ 124 ] ALL OF THIS, with court costs.
H/T Alvin Antony