Table of Contents
- Overview of AIDA Companion Document
- Comments on AIDA Companion Document
- Does AIDA lack Parliamentary oversight?
- Is AIDA’s scope too narrow?
- Should AI systems be regulated by ISED?
- Does AIDA take the wrong approach to regulating AI?
- Will AIDA impose responsibilities on AI actors that are impossible to meet?
- Is it premature to regulate AI now via AIDA?
- Will AIDA impede innovation by imposing new restrictions on uses of anonymized data, with its duplicative regulatory regimes, and disproportionate penalties?
ISED released a document purporting to explain the purposes and scope of AIDA and how the regulations will be rolled out. The long-expected document, The Artificial Intelligence and Data Act (AIDA): Companion Document, was released in anticipation of debates about AIDA in Parliament. The Companion Document is meant to provide assurances about AIDA and to quell the considerable unease among MPs and members of the public that AIDA is nothing but a shell of a bill, with no details of what AI systems it will apply to or how it will regulate them. In my view, this information and marketing document does not answer the fundamental criticisms levied against AIDA.
This blog post will provide an overview of the Companion Document. It will then re-raise questions about AIDA that were the focus of my prior lengthy blog post on AIDA, AIDA’s regulation of AI in Canada: questions, criticisms and recommendations.
Overview of AIDA Companion Document
The Companion Document provides useful information about ISED’s intentions for AIDA. But it leaves many fundamental questions and concerns unaddressed, including why MPs and the public should enact a law based only on what a vague explanatory document says about AIDA, especially when the document has no legal force.
My colleagues Charles Morgan, Francois Langois and Gabriel Gobell at McCarthy Tetrault summarized the contents of the Companion Document in their post One Step Closer to AI Regulations in Canada: The AIDA Companion Document. In my view the salient features of the Companion Document are its attempts to position AIDA as:
- Being in line with (“a corresponding framework” and “interoperable”) regulatory developments in the EU, the UK, and the United States.
- A law that will fill the “regulatory gaps” in our existing laws such as those that regulate human rights and hazardous products.
- Being “agile” and working “seamlessly” with the existing regulatory frameworks.
- Ensuring that high impact systems will meet the “same expectations” with respect to safety and human rights to which Canadians are accustomed.
- Leveraging recognized responsible AI frameworks such as OECD and NIST standards.
- Taking nuanced approaches to avoid over-regulation, such as by excluding open source models and models developed by researchers, and by adopting different standards of responsibility for different AI actors such as designers, developers, those making AI systems available, and those managing AI systems.
- A law that will only come into force following consultations. AIDA will not come into force until at least 2025. Regulations will be rolled out only following 6 months of consultation, then 12 months to develop draft regulations, followed by 3 months of Part 1 consultations and a further 3 months until the regulations are finalized.
The AIDA Companion Document tries to address criticisms that AIDA fails to define the types of AI systems that will be regulated as “high impact” systems. It does so by outlining the factors that will “be among” those used to classify high impact systems and by providing examples of what may be targeted.
The factors that may be considered are:
- evidence of risks of harm to health and safety, or a risk of adverse impact on human rights, based on both the intended purpose and potential unintended consequences;
- the severity of potential harms;
- the scale of use;
- the nature of harms or adverse impacts that have already taken place;
- the extent to which for practical or legal reasons it is not reasonably possible to opt-out from that system;
- imbalances of economic or social circumstances, or age of impacted persons; and
- the degree to which the risks are adequately regulated under another law.
The examples given of what may be regulated as high impact systems are:
- Screening systems impacting access to services or employment.
- Biometric systems used for identification and inference.
- Systems that can influence human behavior at scale such as online content recommendation systems.
- AI systems that have been used to create “deepfake” images, audio, and video that can cause harm to individuals.
Comments on AIDA Companion Document
In my prior lengthy blog post on AIDA, AIDA’s regulation of AI in Canada: questions, criticisms and recommendations (“my prior blog on AIDA”) I asked the following questions:
- Does AIDA lack Parliamentary oversight?
- Is AIDA’s scope too narrow?
- Should AI systems be regulated by ISED?
- Does AIDA take the wrong approach to regulating AI?
- Does AIDA impose responsibilities on AI actors that are impossible to meet?
- Is it premature to regulate AI now via AIDA?
- Will AIDA impede innovation by imposing new restrictions on uses of anonymized data, with its duplicative regulatory regimes and harsh and disproportionate penalties?
In my view, many of these fundamental questions have not been answered by the AIDA Companion Document.
Does AIDA lack Parliamentary oversight?
The AIDA Companion Document does not address the fundamental concern that AIDA is still completely lacking in Parliamentary oversight. As noted above, the Companion Document attempts to provide some guidance as to what factors may be used to determine what will be regulated as high impact systems. However, these are only vague high level criteria and the document has no legal significance whatsoever.
Nothing in the Companion Document changes these conclusions summarized in my prior blog on AIDA:
“AIDA leaves unfettered discretion on the Minister to establish what systems will be subject to regulation, what harms or degrees of risk will be regulated, which AI actors will have responsibilities and what those will be, how to balance sensitive fundamental rights such as the right to freedom of expression with other concerns, and the penalties including administrative monetary remedies (AMPs) for non‑compliance. The Minister will also have unfettered discretion to impose significant penalties and make prohibition orders for non‑compliance. Is this massive delegation of authority consistent with Parliamentary sovereignty?”
“AIDA lacks Parliamentary control over the regulation of AI systems. AIDA is like an algorithmic black box. It lacks transparency as to what is covered. It lacks explainability as there is no way of knowing how AI systems will be regulated. It lacks details which calls into question its robustness. There is no mechanism for assessing its effectiveness against its impacts on innovation, which calls into question its safety as a regulatory vehicle. It fractures the regulation of consumer products and discrimination, potentially dissipating regulatory authority and accountability. It lacks human oversight by Parliament. Should Parliament delegate away regulatory authority over AI systems under a regulatory model that would not satisfy ethical principles for the AI systems that will be regulated?”
The Companion Document suggests that high impact systems will meet the “same expectations” with respect to safety to which Canadians are accustomed. Yet, there is no proposal for defining high impact systems in line with definitions in our existing federal consumer protection and hazardous products laws.
Many of you will be aware of the recent Open Letter by Elon Musk and others recommending a 6 month pause in the development of more advanced AI systems. It recommended that “AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal” and that “AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems”.
The three expressed concerns were framed by the following questions:
Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?
It is hard to know whether and how AIDA will address these existential questions. Moreover, even if AIDA were to tackle them via regulations, these questions involve quintessential policy choices. Should questions like these be left to consultations and regulation by ISED, or should they be for legislatures across the country?
One has to ask: if the government proposed to rewrite and overhaul the Income Tax Act, would it be acceptable for it to enact a replacement law with a shell structure that leaves all key elements to be determined by regulations promulgated by the Minister of Revenue, including who is subject to tax, what counts as taxable income, and what deductions, exemptions, and tax credits are allowed, and which leaves the CRA with the power to enforce and fine taxpayers with no right of appeal to the courts? Would this scheme be more acceptable if the government published an “Income Tax Companion Document” that explained at a high level what it sought to accomplish? Of course not. But that is what the government is asking MPs and the public to accept with AIDA and the Companion Document.
Is AIDA’s scope too narrow?
I noted in my prior blog on AIDA that while AIDA’s scope is potentially extremely broad, it is much more limited than the EU AI Act, which does not limit its scope to private sector harm to individuals. It would include, for example, critical infrastructure, access to and enjoyment of essential public services and benefits, law enforcement, migration, asylum and border control management, and administration of justice and democratic processes.
In short, nothing in the Companion Document suggests that AIDA will address what may be the most impactful aspects of AI, namely its uses in public sector and ensuring respect for democracy, human rights, and the rule of law.
Should AI systems be regulated by ISED?
My prior blog on AIDA noted that “AIDA concentrates enormous powers in the executive. This includes everything from what, how and who AIDA regulates, the quantum of AMPs and when they will apply, and all enforcement powers. Thus, the law, policy, administration and enforcement will all fall within a single Ministry, ISED.” Further it observed that there “are also unanswered questions as to whether ISED has the expertise and capacity to regulate AI systems.”
The Companion Document confirms the extensive role that ISED will play. With the possible exception of selecting who will be prosecuted for criminal offenses, all other regulation and enforcement will lie with ISED, assisted by the AI and Data Commissioner.
Given the explosive growth of AI, how can Parliament be assured that ISED will be able to competently develop the expertise to regulate and enforce AIDA? Can Parliament be assured that creating an ISED AI regulatory fiefdom would be better than empowering and strengthening existing regulatory authorities?
Further, AIDA will concentrate enormous power in the Minister, who will be able to prescribe regulations, investigate AI actors, impose AMPs on actors who do not follow the Minister’s directives, or shut businesses down. There is also no mechanism to appeal orders made by ISED, which raises substantial questions about procedural justice and commitments to the rule of law. Nothing in the Companion Document answers the concerns expressed in my prior blog on AIDA that this is a poor governance structure.
Does AIDA take the wrong approach to regulating AI?
The Companion Document suggests that AIDA will be a corresponding framework with the regulatory developments in the EU, the UK, and the United States. Each of these three frameworks relies, to some degree, on similar ethical principles that animate its approach to regulating AI systems. But it is simply not accurate to suggest that AIDA is in line with what are three dramatically different approaches to regulating AI systems. As I summarized in my prior blog on AIDA, the EU takes a prescriptive approach with defined categories of high risk systems and relies extensively on existing regulatory regimes that regulate product safety. The U.S. federal approach is voluntary. The U.K. approach, as recently confirmed in its white paper A pro-innovation approach to AI regulation, focuses on implementing a pragmatic and flexible coordinated framework that leverages existing regulatory frameworks. AIDA is not structurally similar to any of these regimes.
The Companion Document refers to filling “regulatory gaps” and to using an “agile” approach to regulation that will work “seamlessly” with existing legal frameworks. However, AIDA’s structure is a centralized one, and there is nothing in the Companion Document that provides any details as to how these key objectives will be achieved. Moreover, it identifies for potential regulation areas that are already subject to federal or provincial regulation, such as “autonomous driving systems and systems making triage decisions in the health sector” and biometric systems (which could be regulated under the CPPA). Without further explanation, these examples appear to illustrate overlapping regulation with extensive intrusion into provincial jurisdiction.
In my prior blog on AIDA, I suggested that another approach to addressing AI bias “would be to extend the jurisdiction of the federal Human Rights Commission and to amend the CHRA to give the Minister of Justice (the Minister responsible for the CHRA) and/or the Commission new powers to obtain information from organizations or establish new regulations to give the Commission greater processes for investigating possible violations of the CHRA”.
The Companion Document does not provide any justification for fragmenting jurisdiction over human rights regulation rather than empowering the Canadian Human Rights Commission, under the CHRA, with the powers it needs to address bias in AI systems.
Will AIDA impose responsibilities on AI actors that are impossible to meet?
In my prior blog on AIDA, I noted the extensive and potentially overlapping burdens that could be imposed on the entire ecosystem of AI actors. The response in the Companion Document, in short, is “trust us” to get it right.
The description of the proposed regulatory regime is replete with “assurances”: that “AIDA would require appropriate measures”, “appropriate actions”, “appropriate accountability”, and “appropriate internal governance”, and that responsibilities for monitoring would be “proportionate to the level of influence that an actor has on the risk associated with the system”, and so forth. These comforting statements provide scant real insight into what the regulations would look like.
Should MPs and the public be assured by such high level and non-legally binding statements?
Is it premature to regulate AI now via AIDA?
In my prior blog on AIDA, I asked the question as to whether now is the right time to enact a potentially sweeping law to regulate AI systems. I pointed out that “there has been no meaningful dialog or debate in Canada about whether Canada should regulate AI systems now, especially given that no country as yet has enacted national AI specific laws to regulate the health and safety of AI systems including by Canada’s major trading partner, the United States.” Further, I asked whether the approach of being a “first mover” in AI regulation takes into account the significant repercussions of slowing down the development of AI in Canada.
The Companion Document does not address these questions.
Will AIDA impede innovation by imposing new restrictions on uses of anonymized data, with its duplicative regulatory regimes, and disproportionate penalties?
The Companion Document does not explain why there is a need for overlapping regulation of and divergent standards governing privacy under the CPPA and AIDA. Nor does it address the double jeopardy risks of enforcement under overlapping regimes such as the CPPA and the CCPSA or the disproportionate fines that can be imposed under AIDA compared to other existing health and safety regimes such as the CCPSA, and the Hazardous Products Act.
Based on the foregoing, my first recommendation with respect to AIDA remains unchanged:
Recommend: Parliamentarians should give serious consideration to questions about AIDA. These questions include whether AIDA is sufficiently detailed for Parliamentarians to give it proper consideration in its present form; the appropriateness of the substantial delegation of policy and enforcement choices to the executive and to ISED; whether AIDA is the appropriate framework for addressing harms and bias in AI systems and whether a cross-sectoral regulatory approach similar to what the United Kingdom and Israel are doing is a preferable structure; whether AIDA could impose impractical responsibilities on the ecosystem of persons that design and develop AI systems, put AI systems into production, or make data available for use with AI systems; whether, on balance, AIDA will promote trust and confidence in AI without substantially inhibiting innovation in a critical technology that will power the fourth industrial revolution; whether AIDA fails to protect the public by exempting public sector uses of AI systems from regulation; whether this is the time to enact an AI-specific law; and whether AIDA’s disproportionate and overlapping penalty regime is appropriate.
To be clear, I support responsible uses of AI. I also support thoughtful regulation of AI, such as what is being proposed in the UK: flexible and pragmatic regulation that delegates regulation to existing authorities with built-up expertise and domain knowledge, and which is demonstrably pro-innovation. Regulation of AI should empower existing regulatory authorities such as the Canadian Human Rights Commission to better fulfill their mandates, rather than fragmenting their jurisdiction and creating overlapping regimes, and it should address key public and private sector threats posed by AI. AI regulation should also follow extensive consultation (and not be restricted to ex post facto consultations that cannot change the law’s basic structure) and place key policy decisions with elected officials who can and should be accountable in our democratic system of government.