Table of Contents
- AIDA lacks Parliamentary oversight and fails to define what AI systems will be regulated
- AIDA’s general alignment problems
- No Guiding Principles on How to balance Innovation with Risks
- Lack of Public Consultation
- AIDA’s Regulation of AI Actors
- AIDA’s definition of AI
- Regulation of Generative AI
- What should be done with AIDA
AIDA has been subject to widespread and sustained criticism. I summarized much of it in my blog posts AIDA’s regulation of AI in Canada: questions, criticisms and recommendations and AIDA Companion Document: overview and questions (my “prior blog posts”). But there have been many others who have similarly criticized AIDA, including Prof. Scassa, Prof. Geist, Bianca Wylie (member of the advisory boards for the Electronic Privacy Information Centre (EPIC) and The Computational Democracy Project, and senior fellow at the Centre for International Governance Innovation), many organizations and individuals in an open letter to the Minister, and witnesses before the INDU Committee studying Bill C-27.
In response to criticisms about Bill C-27, the Digital Charter Implementation Act, 2022, and especially criticisms related to AIDA, the Government disclosed amendments it proposes to make while the Bill is before the INDU Committee. My prior blog post focused on the amendments to the CPPA. This blog post focuses on the proposed changes to AIDA.
In response to the many serious criticisms, the Government published the Companion Document. It has also now outlined, in the Minister’s letter to the INDU Committee, the amendments it proposes to make to AIDA. While the proposals outlined are helpful, a careful look at them shows that, overall, they fail to adequately address many of the fundamental criticisms levied against AIDA.
The Government still does not appear ready to clarify what threshold of risk to health and safety, or of bias, would be sufficient to make an AI system a “high impact” system, either in the initial list or for AI systems that can be added later. The initial list of systems described in the Minister’s letter is vague and open-ended, especially when compared to the systems to be regulated under the EU AIA. Further, the centralized/horizontal approach to regulation remains out of alignment with the hub-and-spoke decentralized approaches being adopted by the UK and the US. These and other problems are summarized below and should be read together with my prior blog posts, where some of them are discussed at more length.
AIDA lacks Parliamentary oversight and fails to define what AI systems will be regulated
A fundamental criticism of AIDA is that it leaves all of the key aspects of the Bill to future regulation – something that usurps the important role of Parliament. The Minister’s letter does little to alleviate this structural defect in AIDA.
A key criticism relates to what AI systems will be regulated as “high impact” systems. To respond to this criticism, the Minister announced what AI systems would be initially classified as high impact systems.
The Government heard consistent feedback that the bill should include key classes of “high impact AI systems” that the bill would apply to at the outset, for example, those that deal with health and safety. Therefore, the Government would propose amendments that clarify the meaning of high-impact systems as those of which at least one intended use may reasonably be concluded to fall within a list of classes to be set out in a schedule to the Act. To assist the Committee’s work, the proposed initial list of classes would be the following:
The use of an artificial intelligence system in matters relating to determinations in respect of employment, including recruitment, referral, hiring, remuneration, promotion, training, apprenticeship, transfer or termination.
The use of an artificial intelligence system in matters relating to (a) the determination of whether to provide services to an individual; (b) the determination of the type or cost of services to be provided to an individual; or (c) the prioritization of the services to be provided to individuals.
The use of an artificial intelligence system to process biometric information in matters relating to (a) the identification of an individual, other than if the biometric information is processed with the individual’s consent to authenticate their identity; or (b) an individual’s behaviour or state of mind.
The use of an artificial intelligence system in matters relating to (a) the moderation of content that is found on an online communications platform, including a search engine and a social media service; or (b) the prioritization of the presentation of such content.
The use of an artificial intelligence system in matters relating to health care or to emergency services, excluding a use referred to in any of paragraphs (a) to (e) of the definition of “device” in section 2 of the Food and Drugs Act that is in relation to humans.
The use of an artificial intelligence system by a court or administrative body in making a determination in respect of an individual who is a party to proceedings before the court or administrative body.
The use of an artificial intelligence system to assist a peace officer, as defined in section 2 of the Criminal Code, in the exercise and performance of their law enforcement powers, duties and function.
No definition of High Impact Systems
Specific comments on this list of proposed classes are provided below. What is notable, however, is that the Government refers to this list as only an “initial” list that would apply at the outset and that could “be modified by the Governor in Council as the technology evolves and systems of interest and their impacts change”. It is also notable that the Government provides no criteria to delineate what new systems could be subject to regulation in the future.
In short, the Government has not addressed the fundamental criticism that all decisions about what could be designated as a high impact system will be made by regulation with no Parliamentary oversight.
Moreover, the Government still does not appear ready to clarify what threshold of risk to health and safety, or of bias, would be sufficient to make an AI system a “high impact” system. The definition of harm in AIDA does not have any materiality threshold. Any scintilla of potential harm from any autonomous or semi‑autonomous system would theoretically be enough to trigger regulation of the AI system under the current definition.
AIDA’s structure appears intended to emulate the Canada Consumer Product Safety Act (“CCPSA”), federal legislation intended to protect the public from dangers to human health or safety posed by consumer products in Canada. However, unlike AIDA, the CCPSA’s scope is delimited by a definition of the phrase “danger to health or safety” to mean:
“any unreasonable hazard — existing or potential — that is posed by a consumer product during or as a result of its normal or foreseeable use and that may reasonably be expected to cause the death of an individual exposed to it or have an adverse effect on that individual’s health — including an injury — whether or not the death or adverse effect occurs immediately after the exposure to the hazard, and includes any exposure to a consumer product that may reasonably be expected to have a chronic adverse effect on human health”.
There is no reason why AIDA could not also contain a risk materiality threshold. By further comparison, under the proposed compromise amendments to the EU AIA, the term ‘risk’ is defined to mean the combination of the probability of an occurrence of harm and the severity of that harm; and ‘significant risk’ means a risk that is significant as a result of the combination of its severity, intensity, probability of occurrence, and duration of its effects, and its ability to affect an individual, a plurality of persons or to affect a particular group of persons.
The Government acknowledged in the Companion Document that factors were needed to help define what classes of AI systems could be regulated as “high impact” systems. However, the Minister’s letter contains no indication of any intent to set out criteria for assessing which new AI systems will pose sufficiently significant levels of risk to be regulated.
The initial list of classes of AI systems to be regulated is vague
The Minister’s letter outlined the initial list of classes of AI systems to be regulated by ISED. The class descriptions provided in the Minister’s letter are somewhat helpful in indicating generally what may be regulated. Some of these classes roughly track the classes of AI systems to be regulated in the EU. Some also track areas highlighted in the US Executive AI Order. However, the classes are vague, and one has to guess what the Government actually intends to regulate. Without seeing the actual proposed text, one cannot determine what may fall within the listed classes.
Further, some of the classes appear to overlap significantly with existing federal, provincial and local frameworks and to intrude on matters of provincial jurisdiction. This raises questions not only about the scope of the regulations, but also about how the regulations will operate alongside the other frameworks.
One of the prior criticisms of AIDA was that it was too narrow. I summarized this in my prior blog, stating:
While AIDA would cover discrimination by private sector organizations, it left uncovered other well-recognized forms of discrimination caused by the use of AI systems which have a direct impact on equality of access to fundamental rights, including access to justice, the right to a fair trial, and access to public services and welfare. It would not encompass the use of problematic AI systems in the public sector. The Federal Directive on Automated Decision-Making may provide some protection to individuals, but its scope and avenues for redress are limited.
In short, AIDA also does not address what may be the most impactful aspects of AI, namely its uses in the public sector and in ensuring respect for democracy, human rights and the rule of law.
The proposed classes described in the Minister’s letter now potentially address some of these gaps by proposing classes that could apply in the public sector. However, as AIDA as currently drafted applies only to the private sector, it is unclear in some cases whether the proposed classes would apply to the public sector, the private sector, or both. Here are some examples.
- The use of an artificial intelligence system in matters relating to determinations in respect of employment. Use of AI related to employment is in scope under the EU AIA and is also the subject of the US Executive AI Order. This listed class does not indicate whether it is limited to the private sector or whether it would include public sector employees. It is unclear why AIDA is needed for federally regulated businesses, as any required standards could be included in the Canada Labour Code or in regulations promulgated under it. It is also unclear how the regulations would apply provincially, as each province and territory within Canada already has its own employment standards legislation.
- The use of an artificial intelligence system in matters relating to the determination of whether to provide services to an individual. This may be intended to cover the public sector and private sector based on the EU AIA class that covers “Access to and enjoyment of essential private services and public services and benefits”. However, the wording in the letter is not limited to “essential” services, potentially making it much more sweeping than the EU class. The full class wording potentially covers much more and could apply to both the private and public sectors, stating:
“The use of an artificial intelligence system in matters relating to (a) the determination of whether to provide services to an individual; (b) the determination of the type or cost of services to be provided to an individual; or (c) the prioritization of the services to be provided to individuals.”
Services to individuals are also the subject of the US Executive AI Order, which applies to US Federal Government programs and benefits administration, including, for example, housing. The wording in the Minister’s letter is broad enough to cover services offered by federal, provincial and municipal governments, services offered by the private sector, and services that are already regulated provincially or locally.
- As noted above, the Minister’s letter refers to the use of an artificial intelligence system in matters relating to “the prioritization of the services to be provided to individuals”. This may be intended to cover the public sector based on the EU AIA class that covers “AI systems intended to be used to dispatch, or to establish priority in the dispatching of emergency first response services, including by firefighters and medical aid.” Here again, as this is not expressly limited to the public sector, it is impossible to know the scope intended by ISED. Public Safety Canada provides some services related to emergency management. However, emergency management is also provided provincially. Generally, emergency medical services are regulated at the provincial or territorial level. We simply do not know if this is intended to be a targeted class or an open-ended one that will overlap with other federal and provincial services or with the regulation of services provided by the private sector, which are already subject to many different federal and provincial laws.
- The use of an artificial intelligence system by a court or administrative body in making a determination in respect of an individual. This is an important class to be concerned about, although it is unclear why the Government needs AIDA to implement justice reforms. While not indicated, it is possible the Minister’s letter contemplates that AIDA will tackle preventing and addressing discrimination in the use of automated systems, including algorithmic discrimination in the criminal justice system. This is addressed in the US Executive AI Order, which has as a goal “fair and impartial justice for all, with respect to the use of AI in the criminal justice system” and includes any use in sentencing, parole, supervised release and probation, bail, pretrial release and pretrial detention, and risk assessments, including pretrial, earned time, and early release or transfer to home-confinement determinations. However, one cannot divine from the Minister’s letter whether these are the intended focuses of this class.
- The use of an artificial intelligence system to assist a peace officer. This class could be very broad and include many of the AI systems regulated under the EU AIA, including AI systems used by law enforcement officers to assess the risk of a natural person for offending or reoffending, or the risk for potential victims of criminal offences; as polygraphs and similar tools, or to detect the emotional state of a natural person; to detect deep fakes; to evaluate the reliability of evidence in the course of investigation or prosecution of criminal offences; or to predict the occurrence or reoccurrence of an actual or potential criminal offence. The US Executive AI Order also includes in this category the use of AI for police surveillance, crime forecasting and predictive policing (including the ingestion of historical crime data into AI systems to predict high-density “hot spots”), prison-management tools, and forensic analysis.
This is also an important class to be concerned about, although here again, it is unclear what the Government intends or why the Government needs AIDA to implement these justice reforms. Further, police services are already regulated at the federal, provincial and local levels so it is unclear to what extent regulation in this area would be limited to matters within federal jurisdiction and how it would overlap with other existing regulatory structures.
- The use of an artificial intelligence system in matters relating to health care or to emergency services. This is another important potential class. The EU AIA could also apply to high-risk AI systems in health care. The US Executive AI Order also aims “to help ensure the safe, responsible deployment and use of AI in the healthcare, public-health, and human-services sectors” and covers a wide range of possible services.[i]
The Minister’s letter leaves this category open-ended, giving no indication of what it could cover. It is unclear how AIDA would apply to the delivery of health care across the country, as the provinces and territories administer and deliver most of Canada’s health care services. For example, hospitals in Canada are primarily regulated at the provincial level, with each province or territory having its own legislation or regulations covering a wide range of areas, including the licensing of facilities, the qualifications of medical staff, patient rights, and the types of services that can be offered. The federal government also plays a role, particularly through the Canada Health Act, which sets out the principles for the Canadian health care system.
- The use of an artificial intelligence system in matters relating to the moderation of content that is found on an online communications platform, including a search engine and a social media service. The Minister’s letter does not define what it means by “content moderation”. Presumably, this is meant to cover the process of reviewing and monitoring user-generated content (UGC) and generative AI content to ensure it is not offensive, harmful, misleading, illegal, or otherwise inappropriate, including for children. Content moderation online is incredibly important. One can see what happens on social media when content moderation is lacking by looking at what happened to the trustworthiness of content on X (formerly Twitter), including recently with posts related to the hostilities between Hamas and Israel.
There is no doubt about the growing role and importance of AI in content moderation because, even though it is not a perfect solution, AI-powered systems can automatically analyze and classify potentially harmful content and increase the speed and effectiveness of the overall moderation procedure. This includes identifying hate speech, terrorist content, and inappropriate or scammy AI-generated content. Social media companies like Instagram and search engines like Google have been extensively using AI for content moderation.
Content moderation is incredibly important, as is also evident from the steps taken, especially in the EU and the UK, to tackle harmful content online. The EU passed a regulation in 2021 to address the dissemination of terrorist content online. The EU then adopted the Digital Services Act (DSA) in 2022, which requires, among other things, very large online platforms (VLOPs) and very large search engines (VLOSEs) to assess and mitigate risks, including obligations to prevent their systems from being misused and to have their risk management systems undergo independent audits. Measures adopted under the DSA call for algorithmic accountability and transparency audits. Under this framework, designated platforms have to identify, analyse and mitigate a wide array of systemic risks on their platforms, ranging from how illegal content and disinformation can be amplified through their services, to the impact on freedom of expression or media freedom. Similarly, specific risks around gender-based violence online and the protection of minors online and their mental health must be assessed and mitigated. The risk mitigation plans of designated platforms and search engines will be subject to independent audit and oversight by the European Commission.
The UK Online Safety Act (OSA), enacted in October 2023, also aims to make social media companies more responsible for their users’ safety on their platforms. The OSA will make social media companies legally responsible for keeping children and young people safe online. Among other things, all in-scope services will need to put in place measures to prevent their services from being used for illegal activity and to remove illegal content when it does appear. Platforms will have transparency obligations to show they have processes in place to meet the requirements set out by the OSA. Ofcom will check how effective those processes are at protecting internet users from harm.
Coming back to AIDA, the Minister’s letter is completely vague as to how AIDA would regulate content moderation. This vague language could conceivably be an effort to provide the legal authority to require large social media companies and large search engines that use AI for content moderation to undertake obligations similar to those enacted under the DSA or the OSA to combat harmful online content, including algorithmic transparency and auditability of the algorithms. It is possible that the reference to content moderation is the Government’s obscure attempt to regulate online harms via AIDA.
The regulation of content moderation is a very sensitive area, with many differing views on whether and how it should be regulated. There were considerable debates in the EU and the UK, over long periods of time, about the scope of what harms should be included in those laws, which platforms and entities should be regulated, what oversight the regulatory authorities should have, and how those laws should balance freedom of speech with the protection of the public. These content filtering issues are particularly thorny, as Rachel Griffin pointed out in an article published by the think tank CIGI.
While protecting the public from online harms is an extremely important goal, and one I support, it seems singularly inappropriate to leave decisions about online content moderation solely to the Minister with no opportunity for fulsome public debate on the issues. The Government has announced plans to release a new Online Harms bill. If the Government wants to regulate harmful online content it should, as it had planned, introduce a specific law to be vetted and passed by Parliament rather than leaving this important area to a single Minister of the government.
AIDA’s general alignment problems
The Minister’s letter states that one of the goals of the proposed amendments is to align “with the EU AI Act as well as other advanced economies”. Alignment with international standards and approaches is a good thing. It is especially important for a middle power like Canada, whose citizens stand to lose out big time if our laws are not truly interoperable with those of our major trading partners. A regulatory regime that is much more onerous than those of our trading partners, including our largest trading partner, the U.S., creates barriers to entry that could deprive Canadians of access to the most valuable AI technologies. AI laws also require investments in regulatory compliance, which divert resources from other investments, especially if our laws are out of alignment with those in the US. In the AI space, this could be catastrophic.
There is a lot of hype about the EU AIA. Still, it is unclear why the Minister emphasizes alignment with it. It is true that the EU was first out of the gate with a proposal to regulate AI systems. If the EU ever passes the law, it will likely have a “Brussels Effect”, and any Canadian business that wants to distribute its products or services in the EU will need to comply with it. But there is no indication that the EU approach will be widely adopted by many of our largest trading partners, and the difficulties the EU is having reaching a consensus make the “EU now look a little out of place”. In fact, the EU AIA approach to AI regulation is at odds with, and arguably being overtaken by, the approaches of the UK and the US.
A critical issue associated with regulating AI is whether it should be done through a centralized regulatory body, like a nuclear regulatory agency or a “department of electricity”, which assumes a “one-size-fits-all” approach to regulating AI, or through a sectoral “hub-and-spoke” decentralized model that allows different sectors and agencies to regulate AI in their own ways, takes advantage of existing regulatory frameworks, places authority in agencies with domain expertise, and avoids inefficient and costly regulatory overlaps and duplication. In this respect, there is much merit to the recent op-ed published in the Hill Times by Kent Walker of Google, who argues for a regulatory model that takes a sectoral hub-and-spoke approach rather than a horizontal one with central regulation.
The US Executive AI Order exemplifies a coordinated hub-and-spoke approach to AI regulation. It establishes, within the Executive Office of the President, the White House Artificial Intelligence Council (White House AI Council). The function of the White House AI Council is to coordinate the activities of agencies across the US Federal Government to ensure the effective formulation, development, communication, industry engagement related to, and timely implementation of AI-related policies, including the policies set forth in the order. The order requires numerous departments and agencies to implement the policies, including the Attorney General, the secretaries of Agriculture, Commerce, Labor, Housing and Urban Development, Transportation, Energy, and Homeland Security, and numerous other governmental agencies. These agencies are directed not only to implement the policies within their own departments and agencies, but in many cases also to provide standards and assistance to State and local agencies to help promote the AI regulatory policies.
The hub-and-spoke approach to regulating transformative technologies is exemplified by the approach taken to regulate microchips and semiconductors, which have powered a ubiquity of products and services, everything from computers to smartphones and the Internet. Many of these products and services are regulated or governed by an array of standards, certifications, risk management frameworks, and audit standards.
When the potential transformational effects of these technologies were envisioned, governments around the world did not create “super ministries” to regulate every product that could pose a danger to health or safety or that could be used in a discriminatory way. Rather, technologies embodying chips and semiconductors are regulated having regard to how they are used. For example,
- chips used in nuclear power plants are regulated by the Canadian Nuclear Safety Commission (CNSC);
- chips used in vehicles, including self-driving cars, are regulated by Transport Canada or provincial ministries of transportation;
- organizations that use chips (computers) that are used to collect, use and disclose personal information are regulated by federal and provincial privacy regulators (and various statutes and the common law);
- organizations that use chips (computers) to discriminate against individuals are regulated provincially and federally by human rights and labour laws;
- products that may endanger health and safety are regulated by various laws including the CCPSA, consumer protection laws, and the common and civil law regimes; and
- the export of advanced products containing chips is handled via export control laws.
When there are gaps in the laws, as there inevitably are, these can be addressed by specific laws. Canada’s proposal to enact an Online Harms bill is a good example. Significant gaps may be identified in existing laws, such as with dual-use frontier models that can be used to create biohazards or disinformation at mass scale. These could be addressed with a specific law administered by an appropriate Government department with the relevant expertise, or through a coordinated approach of different departments or regulatory agencies.
AIDA’s fundamental structure is to regulate “high impact” systems centrally via regulations promulgated and enforced solely by ISED. While the regulations could focus only on specific classes of AI systems and leave regulatory authority to other agencies, and while the regulations could vary for each class, the essential structure is not devised to enable coordinated approaches. Essentially, what is proposed is premised on the CCPSA model for regulating the safety of products, which is administered by ISED, and on the “Health Canada” model that applies to regulated medical devices.
A major criticism of AIDA is that its centralized structure will overlap with existing regulatory regimes. In the Companion Document, the Government stated that it “is cognizant that developments in AI have created regulatory gaps that must be filled in order for Canadians to trust the technology”. In his appearance, the Minister stated:
“First of all, let me tell you what the future artificial intelligence and data act will not do: it will not duplicate what existing legislation already effectively covers.”
However, there is nothing in the proposed amendments that suggests any such limitation on AIDA. In fact, as explained above, there could be very significant duplication of, or overlap with, existing regulatory processes. The examples of the proposed classes of AI systems to be regulated appear to tread extensively into areas that are already heavily regulated federally or provincially, such as employment standards, human rights, health care, and emergency services.
There is no explanation as to why the Government believes that the regulation of AI should be under the sole authority of ISED or the Minister. For example, as I argued in prior blog posts, the bias and discrimination aspects of AIDA could and should be regulated by the Canadian Human Rights Commission. If additional powers are needed, these could be provided via new regulations under the Canadian Human Rights Act. Further, the Minister of Justice would be better placed as the responsible Minister, including for regulating bias and discrimination by AI systems in the courts or by peace officers. Similarly, the Minister of Health would be better placed to regulate health and safety issues arising from AI systems, and the Minister of Labour to regulate issues related to discrimination and bias in employment matters. Even assuming it makes policy sense for these areas to be regulated under AIDA, why does all of the authority lie with ISED?
The lack of any consensus on the best tools to regulate AI, including by a specific centralized law, is illustrated by the G7 Hiroshima Process, which is working on global AI governance and generative AI. While G7 members share common values of mitigating AI risks and speak of interoperable frameworks, there is no agreement on any harmonized approach to promoting interoperable, trustworthy AI. The tools being discussed include a wide range of regulatory and non-regulatory frameworks, technical standards and assurance techniques, risk assessment and management frameworks, auditing, and potential certification schemes. According to G7 statements, the future of AI governance will likely rely not solely on top-down, Government-led rule-setting, but also on private stakeholders, including AI developers, users, and civil society organizations.
No Guiding Principles on How to balance Innovation with Risks
A central issue in the debates about regulating AI is how to balance the existential need to use AI to promote innovation with the need to mitigate the significant risks associated with certain AI systems. The consensus among Canada’s trading partners is reflected, for example, in the Bletchley Declaration, which says that AI regulation “should consider the importance of a pro-innovation and proportionate governance and regulatory approach that maximises the benefits and takes into account the risks associated with AI”,[ii] and in the US OMB Guidance (which followed the Executive AI Order), which says that AI regulation “should focus resources and attention on concrete harms, without imposing undue barriers to AI innovation”.[iii]
However, there is nothing whatsoever in AIDA or the Minister’s letter that sheds any light on how AIDA will balance these two objectives as everything will be left to regulations with no guardrails or even guiding principles to inform the regulatory objectives.
While regulation of AI may increase trust and confidence in AI technologies, regulation can also impede innovation, particularly in the short term, by increasing the cost of entry into markets and distorting competition. Unnecessary and overly burdensome regulations can create barriers to entry and limit the ability of firms to innovate and capture the social benefits of AI. It may be that more upstream governance will translate to less downstream innovation. This is not an insignificant issue and the Government has not released any studies on the potential impacts of new regulations on the adoption of AI in Canada.
It is also possible that Canada’s approach to AI regulation will be different from those of our major trading partner, the US. If this results in newer AI models or products not being released in Canada, or being delayed, this could have severe impacts on our nascent but very strong AI ecosystem.
As I noted in my prior blogs, AIDA will also potentially create new conflicting obligations related to data anonymization. It will therefore create double jeopardy risks for AMPs under the CPPA and AIDA.
Lack of Public Consultation
The Government had virtually no consultation on the Bill before it was introduced into Parliament. Although the Minister says his department has extensively consulted on the Bill after its introduction, most of these consultations have been private and have focused on commenting on the Bill as it stands, rather than a broader-based consultation on the key questions of what AI systems should be subject to regulation and the best ways to regulate them, having regard to the multiplicity of issues associated with regulating a technology that, like electricity and microchips, will become ubiquitous.
The Government’s position appears to be, in essence, that it needs a framework that will enable it to regulate AI systems on an agile basis and that consultation on the regulations after AIDA is passed is all that is really needed. This approach, however, is fundamentally at odds with democratic governance norms. This point was recently emphasized by Andrew Clement in an article published by CIGI, AIDA’s “Consultation Theatre” Highlights Flaws in a So-called Agile Approach to AI Governance:
An agile approach is worth considering, but one well-aligned with the norms of democratic governance would look quite different — in particular, open, inclusive well-informed public education and engagement should be accelerated rather than impeded or postponed. While varied in their analyses and prescriptions, many relevant materials around AI governance are already available for the government to draw on to help people understand the state of AI development, the wide range of issues at stake and possible alternative regulatory approaches.
The Government’s approach to regulating AI can be contrasted with the approaches in the EU and UK, where there have been open and transparent debates about AI regulation. It can also be contrasted with the approach being taken in the US under the recent White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the “US Executive AI Order”), which emphasizes consultation with and study by all stakeholders. This is reflected in its Policy and Principles section, which states the following:
Sec. 2. Policy and Principles. It is the policy of my Administration to advance and govern the development and use of AI in accordance with eight guiding principles and priorities. When undertaking the actions set forth in this order, executive departments and agencies (agencies) shall, as appropriate and consistent with applicable law, adhere to these principles, while, as feasible, taking into account the views of other agencies, industry, members of academia, civil society, labor unions, international allies and partners, and other relevant organizations:
Bianca Wylie aptly summarizes the Government approach to AIDA:
In its race to catch up to international peers and their regulatory approaches for AI, the federal Government is threatening to replicate one of the most troubling historical tendencies of the technology industry: move fast and break things. As AI researcher and scholar Ana Brandusescu explains, the federal Government is currently porting an ill-conceived approach from the tech industry to the idea of regulation: better to go fast and be imperfect. In this context, it’s “move fast and break how we make laws in a democracy”…
To properly deal with the legal liabilities and risks that automation poses, in addition to human rights concerns, cultural impacts on labour and society, is overwhelming. It’s also necessary. It’s groundwork the federal Government hasn’t done. Long-held historical legislative norms are being tossed aside in an effort to shape Canadian society in the name of an over-hyped technology. One would struggle to find any recent bill or law in modern history so explicitly political, and so fundamentally in need of a rethink.
AIDA’s Regulation of AI Actors
A major criticism of AIDA was its overreaching potential regulation of every AI actor. Helpfully, the Government proposes to drop the requirement that a person responsible for an artificial intelligence system must assess whether it is a high-impact system in accordance with regulations. Instead, the Government intends to propose amendments in which sections 8 and 9 of AIDA would be replaced with new sections laying out potentially more nuanced responsibilities for different AI actors. These changes will require close scrutiny.
AIDA’s definition of AI
There were many criticisms of the definition of “AI system” in AIDA. To respond to these criticisms, the Minister’s letter proposed a change to the definition.
“The Government would be supportive of an amendment that aligns with the OECD definition of AI: a technological system that, using a model, makes inferences in order to generate output, including predictions, recommendations or decisions.”
The OECD definition is often referred to. However, the OECD recently announced that it is revising the definition as follows:
“An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that [can] influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment”.
NIST’s leading AI Risk Management Framework refers to an AI system
“as an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy (Adapted from: OECD Recommendation on AI:2019; ISO/IEC 22989:2022).”
The US Executive AI Order adopts the statutory definition:
The term “artificial intelligence” or “AI” has the meaning set forth in 15 U.S.C. 9401(3): a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner
The EU AIA uses a close variation of the NIST definition by defining ‘artificial intelligence system’ (AI system) as
“a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments”.
This definition is likely going to be replaced by the new OECD definition.
The Government may want to reconsider the definition proposed in the Minister’s letter to more closely align with the EU AIA and the US Executive AI Order.
Regulation of Generative AI
The Minister’s letter indicates that AIDA would be amended to create distinct obligations for generative general-purpose AI systems, like ChatGPT. According to the Minister’s letter:
The Government would look to propose amendments to create distinct requirements for AI systems like ChatGPT that are designed to be used for many different tasks in many different contexts. While they could be regulated as high impact systems, we have heard from stakeholders that these systems are distinct enough that they deserve recognition in the law.
Therefore, the Government would propose to set out clearer responsibilities. Developers of general-purpose systems would, before placing on the market or putting into service and in accordance with any regulations:
Perform an impact assessment;
Establish measures to assess and mitigate risks of biased output;
Conduct testing of the effectiveness of mitigation
While GenAI systems are raising many questions about whether and how they should be regulated, there is no international consensus yet on the question. GenAI systems would be regulated under the EU AIA. The governance of GenAI systems was also covered in the US Executive AI Order, which defined the term to mean
“the class of AI models that emulate the structure and characteristics of input data in order to generate derived synthetic content. This can include images, videos, audio, text, and other digital content”
The Minister’s letter did not identify the specific risks associated with GenAI systems it wanted to regulate. The US Executive AI Order addressed:
- external testing for AI, including AI red-teaming for generative AI, to be developed in coordination with the Cybersecurity and Infrastructure Security Agency;
- testing and safeguards against discriminatory, misleading, inflammatory, unsafe, or deceptive outputs, as well as against producing child sexual abuse material and against producing non-consensual intimate imagery of real individuals (including intimate digital depictions of the body or body parts of an identifiable individual), for generative AI;
- reasonable steps to watermark or otherwise label output from generative AI;
- testing software used for the above purposes;
- auditing and maintaining synthetic content; and
- developing guidance on the use of generative AI for work by the Federal workforce.
What should be done with AIDA
As I argued in my prior blogs, AIDA is not ready for prime time before a Parliamentary Committee. The INDU Committee has ordered the Minister to provide the AIDA amendments. However, getting the amendments during the INDU Committee process is too little and too late. First, many of the witnesses that already appeared before the Committee wanted to comment on AIDA but rightfully complained that they could not do so because they could not evaluate it without seeing the specific amendments. Secondly, and even more importantly, there are numerous policy and technical issues that need fulsome study and debate. This is not possible within the short time frame for review during the Committee hearings, or in the regulation-making process, where the structure of the law cannot be challenged.
I previously summarized my many criticisms of AIDA as follows; they remain as unresolved after the Minister’s proposed amendments in the letter as they were before.
AIDA lacks Parliamentary control over the regulation of AI systems. AIDA is like an algorithmic black box. It lacks transparency as to what is covered. It lacks explainability as there is no way of knowing how AI systems will be regulated. It lacks details, which calls into question its robustness. There is no mechanism for assessing its effectiveness against its impacts on innovation, which calls into question its safety as a regulatory vehicle. It fractures the regulation of consumer products and discrimination, potentially dissipating regulatory authority and accountability. It lacks human oversight by Parliament. Should Parliament delegate away regulatory authority over AI systems under a regulatory model that would not satisfy ethical principles for the AI systems that will be regulated?
In my respectful opinion, neither the Companion Document nor the Minister’s letter, both released since my first post was published, has adequately dealt with these questions to warrant Parliament enacting AIDA, even with a few more technical amendments that the Government or other INDU Committee members may propose.
Ideally, AIDA should be split from the rest of Bill C-27 and be removed from the Bill in the clause by clause, or using another appropriate procedure. ISED then should launch a full consultation and develop its approach to AI regulation consistent with evolving standards and Canada’s local needs.
Splitting off AIDA would not necessarily delay enacting a new Bill following a fulsome consultation. The Companion Document suggested that regulations would not be in place until sometime in 2025. This gives the Government enough time to start its consultations, enact a new bill and potentially introduce regulations without inordinate delays.
However, if the INDU Committee is not prepared to split AIDA from the rest of Bill C-27 and instead continues its study of the Bill based on the Government’s proposed amendments, my recommendations continue to be those referred to in my prior blog, including these:
- Amend AIDA to provide for the most important regulations to be laid before and approved by resolution of both Houses of Parliament, and for other regulations to be annulled pursuant to a resolution of either House of Parliament. This helps maintain Parliamentary sovereignty over the regulatory process. See my blog post, UK AI Regulation Bill.
- Amend AIDA to include the key principles that will govern the regulation of AI systems. See my blog post, UK AI Regulation Bill. This helps to define the goals of regulation.
- Amend AIDA to permit the designation of multiple Ministers e.g., Justice (courts, peace officers, Human Rights, content moderation), Health (health and emergency services), Labour (employment), and other Ministers, as appropriate. This helps to make AIDA a more decentralized and efficient framework.
- Amend AIDA to put all or significant regulatory authority over bias and discrimination under the Canadian Human Rights Act and the Commission.
- Amend AIDA so that the Artificial Intelligence and Data Commissioner reports to Parliament and give the Commissioner a coordination and oversight role across all government departments.
- Amend AIDA to include key definitions including especially the definition of “high impact” systems to include criteria that identifies what is a significant risk.
- Amend AIDA to refine and clarify the list of areas to be initially subject to regulation.
- Amend AIDA to clarify what new systems could be subject to future regulation, including consideration of what is “high risk” and the need to avoid overlapping regulation.
- Remove the provisions that would regulate the standards for anonymization: these are adequately dealt with under the CPPA, would create conflicting regulatory requirements, and could impede access to and use of data, which is essential to AI adoption. Removing them would not materially undermine the already robust provisions of the CPPA.
- Provide that AI systems can be removed from being a high-impact system by regulation, as under the EU AIA.
- Consider whether AIDA should apply to harms to organizations and to critical infrastructure.
- An enforcement order made by the Minister should be subject to a right of appeal on questions of law or mixed questions of fact and law.
- Align the offense penalties to accord with the fines under the CCPSA and remove the double jeopardy under AIDA, the CPPA and the CCPSA. Any AMPs should be assessed by an independent tribunal. All criminal offenses should require that the offending act be done knowingly.
* Updated Nov. 25, 2023
[i] The US Executive AI order lists these areas in the health care sector:
- “responsible deployment and use of AI and AI-enabled technologies in the health and human services sector (including research and discovery, drug and device safety, healthcare delivery and financing, and public health)”
- “development, maintenance, and use of predictive and generative AI-enabled technologies in healthcare delivery and financing — including quality measurement, performance improvement, program integrity, benefits administration, and patient experience”;
- “long-term safety and real-world performance monitoring of AI-enabled technologies in the health and human services sector, including clinically relevant or significant modifications and performance across population groups, with a means to communicate product updates to regulators, developers, and users”;
- “incorporation of equity principles in AI-enabled technologies used in the health and human services sector…to identify and mitigate discrimination and bias in current systems”.
[ii] The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023
Many risks arising from AI are inherently international in nature, and so are best addressed through international cooperation. We resolve to work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI that is safe, and supports the good of all through existing international fora and other relevant initiatives, to promote cooperation to address the broad range of risks posed by AI. In doing so, we recognise that countries should consider the importance of a pro-innovation and proportionate governance and regulatory approach that maximises the benefits and takes into account the risks associated with AI. This could include making, where appropriate, classifications and categorisations of risk based on national circumstances and applicable legal frameworks. We also note the relevance of cooperation, where appropriate, on approaches such as common principles and codes of conduct. With regard to the specific risks most likely found in relation to frontier AI, we resolve to intensify and sustain our cooperation, and broaden it with further countries, to identify, understand and as appropriate act, through existing international fora and other relevant initiatives, including future international AI Safety Summits.
[iii] The draft guidance takes a risk-based approach to managing AI harms to avoid unnecessary barriers to government innovation while ensuring that in higher-risk contexts, agencies follow a set of practices to strengthen protections for the public. AI is increasingly common in modern life, and not all uses of AI are equally risky. Many are benign, such as auto-correcting text messages and noise-cancelling headphones. By prioritizing safeguards for AI systems that pose risks to the rights and safety of the public—safeguards like AI impact assessments, real-world testing, independent evaluations, and public notification and consultation—the guidance would focus resources and attention on concrete harms, without imposing undue barriers to AI innovation.