Here is my submission to the OPC consultation.
______________________________________________
Thank you for the opportunity to provide input into the OPC’s consultation on artificial intelligence (AI) as it relates specifically to the Personal Information Protection and Electronic Documents Act (PIPEDA).
By way of introduction, I am a senior technology lawyer with McCarthy Tétrault. As part of my practice, I regularly advise clients on privacy issues. I also teach privacy at Osgoode Hall Law School as part of an intellectual property law course, and I have written extensively about privacy issues, including a major chapter in my eight-volume book on Computer, Internet, and E-commerce Law. As such, I respectfully submit that I am well positioned to provide both theoretical and practical input into the consultation. This is a personal submission and is not made on behalf of any client.
The OPC consultation has raised many issues of central importance to regulatory frameworks for AI. I intend to focus my submission on framework principles that I recommend the OPC apply in analyzing the issues it has asked about in the consultation.
Unintended consequences of sweeping regulation of AI
The OPC should be cautious in regulating AI. It should adopt an approach that, for the present time, focuses on clearly identified high-risk activities that unmistakably cause more harm than good.
AI is transforming virtually every industry, organization, and government, along with the lives of individuals, as one might expect of this fourth industrial revolution. Its use is pervasive and growing. It is being used by the largest organizations including, by way of example only:
- social media giants Google, Facebook, LinkedIn, Instagram, Twitter, Alibaba, and Baidu
- online and offline retailers such as Amazon and Walmart
- luxury brands like Apple, Burberry, and Louis Vuitton
- producers of consumer products such as Kimberly-Clark, Unilever, and Coca-Cola
- food service providers such as Domino's Pizza, Starbucks, and McDonald's
- healthcare providers such as Google, IBM, Elsevier, Tencent
- entertainment companies such as Disney, Spotify, and Netflix
- news organizations such as Press Association
- financial services industries such as American Express, Mastercard, and many Canadian financial services companies
- agricultural machinery manufacturers, such as John Deere
- developers of autonomous and safer vehicles such as Tesla, Mercedes, BMW, and Volvo.[1]
AI is being used for law enforcement (such as facial recognition tools used by police forces and governments, as in China), military applications such as autonomous weapon systems, and civilian uses by governments such as administering programs and providing services to the public.[2]
Yet, despite its pervasiveness, AI is still, in relative terms, nascent. While deep learning, a type of machine learning, is the form of AI that is extremely popular now, it is not the only type of AI technology. If the past is any guide to the future, one can expect new forms of AI to be developed and deployed.[3]
The law has always lagged behind technological developments. In many cases this was necessary because governments were not able to predict how new technologies – from steam engines and railways to automobiles, telephones, microchips, computers and software, and the interconnected networks that became the Internet – would be used, or how best to address the myriad impacts that they, or regulatory intervention, would cause.
Governments have sometimes moved early to regulate specific high-risk technologies, nuclear power plants, pharmaceuticals, and medical devices being a few examples. In some cases, where governments have introduced regulation in fast-moving technological areas too quickly or without adequately understanding the potential impacts, the laws have had unintended consequences; the Canadian anti-spam law (CASL)[4] being a recent example.
Governments have, however, generally eschewed indiscriminately regulating technologies with massively different uses. An example is “computer software”, which is used ubiquitously. Recognizing this, governments have, with few exceptions, not enacted laws that apply to all software on the assumption that it is all one thing or that all software has equal potential to cause harm or risks of harm.[5]
AI has many of the same attributes as “software”, which strongly suggests that attempting to define and regulate it as if it were one thing, all of which could cause significant harm, would have many unintended consequences.
The OPC has recognized that AI technologies have a plethora of beneficial uses.
It is clear that AI provides for many beneficial uses. For example, AI has great potential in improving public and private services, and has helped spur new advances in the medical and energy sectors among others. However, the impacts to privacy, data protection and, by extension, human rights will be immense if clear rules are not enshrined in legislation that protect these rights against the possible negative outcomes of AI and machine learning processes.
The consultation document does not, however, purport to assess the impact of its proposed regulations on these beneficial uses. Data collected by innovative organizations has enormous value and is, and can be, used to vastly improve the economic, social, and personal health of our country and its citizens, among other things. Therefore, finding ways to help enable AI-driven innovation, and not impeding the use of data unnecessarily, must be countervailing goals that have to be fully considered.
Canada is also a trading nation. It is both an importer and exporter of technologies, services, and data (although it is by far a net exporter of data and a net importer of technologies). Any new regulatory frameworks must take into account how unique rules could impact trade flows and the decisions of affected parties – from international organizations to Canadian start-ups – about where to conduct R&D, test, and launch products and services. The impacts on employment and on Canadians' access to innovative services and technologies require careful assessment.
If some aspects of AI are going to be subject to new and special restrictions, those restrictions should target high-risk activities and be proportionate, narrowly focused, clear, and easily understandable. The EU White Paper on AI recommended this approach, stating:
A risk-based approach is important to help ensure that the regulatory intervention is proportionate. However, it requires clear criteria to differentiate between the different AI applications, in particular in relation to the question whether or not they are ‘high-risk’. The determination of what is a high-risk AI application should be clear and easily understandable and applicable for all parties concerned.[6]
As the consultation document noted, ISI (formerly ISED) is currently examining what changes to privacy and other laws are required. It is respectfully submitted that ISI is better positioned than the OPC to make the assessments that take into account all of the necessary balancing considerations involved in regulating AI.
Enabling AI innovation
The consultation document accurately noted that certain PIPEDA principles, enacted 20 years ago, are causing uncertainty. This was adverted to in discussing the “specifying purposes” PIPEDA requirement.
As for another example, some have observed that organizations relying on AI for advanced data analytics or consequential decisions may not necessarily know ahead of time how the information processed by AI systems will be used or what insights they will discover. This has led some to call into question the practicality of the purpose specification principle, that requires on the one hand “specifying purposes” to individuals at the time of collecting their information and, on the other, “limiting use and disclosure” of personal information to the purpose for which it was first collected…
The Information Technology Association of Canada has conveyed to the ETHI Committee that “having access to broad and vast amounts of data is the key to advancing our artificial intelligence capabilities in Canada.” This objective is in tension with the important legal principles of purpose specification and data minimization, which apply to the development and implementation of AI systems under the current PIPEDA.
It may be difficult to specify purposes that only become apparent after a machine has identified linkages. For example, the Information Accountability Foundation argues that since “the insights data hold are not revealed until the data are analyzed, consent to processing cannot be obtained based on an accurately described purpose.” Without being able to identify purposes at the outset, limiting collection to only that which is needed for the purposes identified by the organization, as required by PIPEDA, is made equally challenging…
Purpose specification and data minimization remain complex issues and the potential challenges in adhering to these legal principles in an AI context merit discussing whether there is reason to explore alternative grounds for processing.
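The quoted concern, that “the insights data hold are not revealed until the data are analyzed,” can be illustrated concretely. The following is a minimal sketch of my own, not drawn from the consultation document; the data, column meanings, cluster count, and use of the scikit-learn library are all assumptions made purely for illustration. It shows how an unsupervised model can surface groupings that were not specified, and could not have been specified, at the time of collection:

```python
# Illustrative sketch only: why analytic purposes may emerge after collection.
# The data, column meanings, and cluster count are hypothetical assumptions.
import numpy as np
from sklearn.cluster import KMeans

# Suppose records were collected for one stated purpose (e.g., order
# fulfilment), with hypothetical columns [age, monthly_spend, visits].
rng = np.random.default_rng(seed=0)
records = rng.normal(loc=[40.0, 200.0, 4.0], scale=[12.0, 80.0, 2.0], size=(500, 3))

# An unsupervised model later reveals groupings ("insights") that could not
# have been described to individuals when the data were first collected.
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(records)

for label in range(3):
    segment = records[model.labels_ == label]
    print(f"segment {label}: n={len(segment)}, mean profile={segment.mean(axis=0).round(1)}")
```

Whatever use an organization later makes of such segments (say, targeted offers) is, by construction, a purpose that could not have been specified to individuals at the time of collection, which is precisely the tension the consultation document identifies.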
The consultation document also aptly noted the related challenges in obtaining consents for certain types of AI processing.
The concept of consent is a central pillar in several data protection laws, including the current PIPEDA. However, there is evidence that the current consent model may not be viable in all situations, including for certain uses of AI. This is in part due to the inability to obtain meaningful consent when organizations are unable to inform individuals of the purposes for which their information is being collected, used or disclosed in sufficient detail so as to ensure they understand what they are being invited to consent to. As noted in our Guidelines on Obtaining Meaningful Consent, clear purpose specification is one of the key elements organizations must emphasize in order to obtain meaningful consent.
In other laws, such as the GDPR, consent is only one legal ground for processing among many. Alternative grounds for processing under the GDPR include when processing is necessary for the performance of a task carried out in the public interest, and when the processing is necessary for the purposes of the “legitimate interests” pursued by the controller or by a third party, except where such interests are overridden by the interests or fundamental rights and freedoms of the data subject (in particular where the data subject is a child)…
The use of non-identifiable data, such as through the application of de-identification methods, could also be a factor in determining whether certain other grounds for processing such as legitimate or public interest should be authorized under the Act.
I agree that PIPEDA’s generally applicable principles have caused uncertainty. However, clarifying PIPEDA to enable beneficial uses of personal information should not necessarily be viewed as enfeebling privacy.
PIPEDA always envisioned balancing the privacy rights of individuals with the need to facilitate the use of personal information for appropriate commercial purposes.[7] This has been recognized on numerous occasions by the OPC.[8]
User interests are strongly influenced by considerations of reasonable expectations of privacy. This principle has, since the inception of PIPEDA, been an important element in assessing the type of consent required, and in limiting uses that would not be considered appropriate.[9] The reasonable expectation of privacy principle is also consistently used by the Supreme Court of Canada in balancing Charter rights of Canadians.[10]
These considerations militate in favour of relaxing the purpose specification and consent principles, especially to enable the use of de-identified data for research and AI training purposes. It is hard to see a strong case for individuals having reasonable expectations of privacy in de-identified information; once de-identified, it is not personal information. Individuals would, however, have reasonable expectations that such de-identified data not be re-identified and used for purposes to which they did not consent. In that case, PIPEDA's generally applicable principles would apply to the re-identified personal information. On the other hand, there are strong commercial and governmental interests in using de-identified data, and diminished justifications for using re-identified information without obtaining appropriate consents.
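For readers less familiar with what de-identification involves in practice, the following is a minimal sketch of one common step: pseudonymizing direct identifiers via a salted one-way hash. The field names and data are hypothetical, and real de-identification programs must also address quasi-identifiers and residual re-identification risk, which this sketch does not attempt:

```python
# Illustrative sketch only: pseudonymizing direct identifiers with a salted
# one-way hash. Field names are hypothetical; robust de-identification must
# also treat quasi-identifiers (e.g., city, age) and re-identification risk.
import hashlib
import secrets

SALT = secrets.token_bytes(16)  # kept secret and separate from the data set

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable, salted one-way hash."""
    return hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()

record = {"name": "Jane Doe", "email": "jane@example.com",
          "city": "Toronto", "purchase": "running shoes"}

deidentified = {
    "subject_id": pseudonymize(record["email"]),  # usable for training/linkage
    "city": record["city"],        # quasi-identifier: may need generalization
    "purchase": record["purchase"],
}
print(deidentified)
```

The design point matches the argument above: so long as the salt is safeguarded, the released record cannot readily be tied back to an individual, but an organization holding the salt could re-link it, which is why reasonable expectations against re-identification, backed by PIPEDA's generally applicable principles, remain important.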
Clarifying PIPEDA to foster research and to train algorithms using de-identified personal information fosters the balance that is inherent in our privacy law. Exceptions to rights that specifically foster research are well known. For example, fair dealing for research purposes is a well-known exception to the exclusive rights of copyright owners in Canada. The Supreme Court has premised its expansive interpretation of the “research” fair dealing purpose on promoting the use of works and balancing the interests of copyright owners and users.[11] Since all such uses must, in principle, be “fair”, in theory (though not always in practice) the economic and other legitimate interests of copyright holders should not be undermined or unduly prejudiced.[12] Similarly, clarifications or exceptions to privacy rights that foster fair research and training of algorithms, and that do not undermine individuals' reasonable expectations of privacy, should be permitted.
AI and human rights and automated decision making
Human rights
The OPC has asked whether amendments to the law should “Adopt a rights-based approach in the law, whereby data protection principles are implemented as a means to protect a broader right to privacy—recognized as a fundamental human right and as foundational to the exercise of other human rights.”
In discussing this question, the OPC appears to be asking two things. First, whether privacy should be regarded as a human right. Second, whether privacy law should protect other human rights. For example, the consultation document states:
The purpose of the law ought to be to protect privacy in the broadest sense, understood as a human right in and of itself, and as foundational to the exercise of other human rights…
Likewise, the 40th International Conference of Data Protection and Privacy Commissioners (ICDPPC) resolution on AI (2018) affirms that “any creation, development and use of artificial intelligence systems shall fully respect human rights, particularly the rights to the protection of personal data and to privacy, as well as human dignity, non-discrimination and fundamental values.”…
The need to firmly embed and clarify rights in PIPEDA is ever more pressing in a digital context where computers may make decisions for and about us with little to no human involvement…
Discussion question:
- What challenges, if any, would be created for organizations if the law were amended to more clearly require that any development of AI systems must first be checked against privacy, human rights and the basic tenets of constitutional democracy?
There is no question that the uses of AI raise many questions concerning how human rights should be respected. In this regard, there has been an explosion of research into regulatory frameworks for AI. While there is a certain degree of coalescence around normative principles for the uses of AI, there is much less agreement or international consensus on what specific regulatory approaches or laws should be enacted.[13] This counsels careful consideration before new “made in Canada only” regulatory frameworks are introduced.
The proposal to protect human rights and democratic values through privacy raises questions about the appropriate frameworks for bolstering these values, as needed. There are already existing laws that protect human rights throughout Canada including, in Ontario, the Human Rights Code. There are also provincial and municipal elections laws throughout the country and a federal law, the Canada Elections Act. Each Act has its own set of rules and enforcement mechanisms. Perhaps their scope needs to be revisited to assess whether they adequately protect human rights and democratic values in the AI age. However, even assuming that the federal government has the unilateral power under the Constitution Act to enact sweeping legislation to protect human rights and democratic values across all political landscapes (particularly after the Securities Reference case), questions remain as to whether a parallel set of rights and remedies should be established under our privacy laws that would apply only to decisions made using AI. This would be inefficient and would also result in a privacy regulator making decisions that, institutionally, other tribunals have more expertise in making.
The proposal to protect human rights and democratic values under privacy law also raises the question of whether privacy laws should be recalibrated to protect values other than privacy. There have been many reasons given for protecting privacy by courts in Canada and elsewhere.[14] However, despite the breadth of the rationales for protecting privacy and the potential overlapping principles and values, human rights and democratic values raise functionally distinct issues from privacy values. In fact, many human rights claims involving the use of personal information do not raise any issues associated with the unauthorized collection, use, or disclosure of personal information, or any infringements of expectations of privacy or violations of other norms that underpin privacy laws.
A similar question was recently considered in the Final Report of the Law Commission of Ontario, Defamation Law in the Internet Age. There, the LCO recommended that “Defamation law and privacy law serve different functions and should remain conceptually separate and distinct.” The explanation was given as follows:
Defamation is one of several legal claims that might apply in the case of harm caused by internet speech. Other possibilities include injurious falsehood, misappropriation of personality, cyberbullying, online harassment, revenge porn, hate speech, violation of data protection statutes and the new European “right to be forgotten”. There is also a quickly developing assortment of breach of privacy claims at common law, such as intrusion upon seclusion and public disclosure of private facts… More importantly, the LCO has concluded that, notwithstanding overlapping principles and values in certain respects, defamation law and privacy law continue to be functionally distinct and should remain so. Although defamation and breach of privacy both involve reputational interests, they remain conceptually separate causes of action. A claim of defamation targets false statements causing reputational harm. A claim for breach of privacy targets harmful statements made in circumstances of an expectation of privacy. The distinction between truth and falsity, along with an increased focus on opinion, remains crucial to the tort of defamation. In contrast, breach of privacy is focused on the violation of privacy interests. Therefore, although the overlap between some defamation/privacy claims is acknowledged, it is important to maintain a doctrinal distinction between them based on these differing functions.
Automated decision making, explanations, and transparency
The OPC also proposes “a right in the law to object to automated decision-making and not to be subject to decisions based solely on automated processing, subject to certain exceptions”. The rationale, in part, is as follows:
If we are to meaningfully protect privacy as a human right in a digital context involving AI systems, one such right that needs to be considered is the ability to object to decisions made by computers and to request human intervention. A number of jurisdictions around the world include in their laws a right to be free from automated decision-making, or an analogous right to contest automated processing of personal data, as well as a right not to be subject to decisions based solely on automation…
The OPC also proposes to provide “individuals with a right to explanation and increased transparency when they interact with, or are subject to, automated processing”. The explanation given is as follows:
Transparency is a foundational element of PIPEDA’s openness principle and a precondition to trust. However, as currently framed, the principle lacks the specificity required to properly address the transparency challenges posed by AI systems, as it does not explicitly provide explanation rights for individuals when interacting with or being subjected to automated processing operations.
The potential perils associated with automated decision making have been well documented in the literature associated with AI. So have issues associated with explainability and transparency.[15]
The issues associated with automated decision making and explainability, however, transcend the uses of personal information for making decisions (even though automated decisions using AI often result from the automated processing of personal information).
The problems with automated decisions also do not correspond with the problems or values that privacy is designed to protect. The complaints about automated decisions, and the lack of explanations associated with them, frequently arise from the risks of using biased or discriminatory data sets or algorithms, lack of trust or fairness, or individuals not understanding why they were denied a benefit or were subject to an adverse decision. Further, the openness principle is concerned with an organization's policies and practices with respect to the management of personal information.[16] The principle does not require disclosure of an organization's practices, policies, or algorithms for making decisions, or require organizations to explain their decisions. Thus, trying to tackle these sorts of issues, like trying to protect human rights and democratic values through privacy, makes for an uneasy fit.
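To make the distinction concrete, the kind of “explanation” at issue is not disclosure of information-management practices but an account of why a particular automated decision was reached. The following is a minimal sketch, using a hypothetical model, features, and data (none drawn from any real system), of what one rudimentary per-decision explanation might look like:

```python
# Illustrative sketch only: a per-decision "explanation" for a simple
# automated decision model. The model, features, training data, and labels
# are hypothetical; real decision systems are far more complex.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income_k", "years_at_address", "prior_defaults"]
X = np.array([[55, 3, 0], [20, 1, 2], [70, 10, 0], [30, 2, 1],
              [40, 5, 0], [25, 1, 3], [65, 8, 0], [35, 4, 2]], dtype=float)
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = approved, 0 = denied

model = LogisticRegression().fit(X, y)

applicant = np.array([[28.0, 2.0, 1.0]])
decision = model.predict(applicant)[0]

# One simple form of explanation: each feature's signed contribution
# (coefficient x value) to the decision score for this applicant.
for name, contribution in zip(features, model.coef_[0] * applicant[0]):
    print(f"{name}: {contribution:+.2f}")
print("decision:", "approved" if decision == 1 else "denied")
```

Nothing in PIPEDA's openness principle, as currently framed, would require an organization to produce even this rudimentary account of a decision, which underscores the point that decision explainability sits uneasily within existing privacy law concepts.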
To be clear, I am not arguing against amendments to the law to protect human rights and democratic values, to avoid biased and opaque decisions that have significant impacts on individuals or organizations, or to address other new potential threats arising from AI. In my view, however, to the extent this is warranted, it may be better addressed in sui generis legislation or in amendments to existing laws that focus specifically on protecting these values. In any event, trying to protect these values through privacy will only result in a partial amelioration of the problems and could even, in the long run, conflict with more generally applicable laws.
I would like to thank the OPC for giving me an opportunity to provide input into this consultation on these important issues.
_____________________________
[1] See Bernard Marr, Artificial Intelligence in Practice: How 50 Successful Companies Used AI and Machine Learning to Solve Problems (John Wiley & Sons, 2019).
[2] See Government of Canada publications available at https://www.canada.ca/en/government/system/digital-government/modern-emerging-technologies/responsible-use-ai.html, including the Directive on Automated Decision-Making (Feb. 2019), available at https://www.tbs-sct.gc.ca/pol/doc-eng.aspx?id=32592.
[3] For a description of the different types of AI technologies and their evolution, see Melanie Mitchell, Artificial Intelligence: A Guide for Thinking Humans (Farrar, Straus and Giroux, 2019).
[4] See the INDU Report and Government Response re CASL, online at https://www.ourcommons.ca/DocumentViewer/en/42-1/INDU/report-10/response-8512-421-327.
[5] Examples of exceptions include the protection of computer programs under copyright laws and, in some jurisdictions such as Canada and the EU, specific exceptions to permit reverse engineering of computer programs. Another example, in relation to regulating certain types of Internet platforms, is the recent EU Copyright Directive.
[6] European Commission, White Paper on Artificial Intelligence – A European approach to excellence and trust (2020), at p. 17.
[7] Section 3 states: “The purpose of this Part is to establish, in an era in which technology increasingly facilitates the circulation and exchange of information, rules to govern the collection, use and disclosure of personal information in a manner that recognizes the right of privacy of individuals with respect to their personal information and the need of organizations to collect, use or disclose personal information for purposes that a reasonable person would consider appropriate in the circumstances.”
[8] See, for example, PIPEDA Report of Findings #2012-002: “Furthermore, in keeping with the purpose of the Act, there is a need to balance the privacy rights of individuals with the need to facilitate the use of personal information for appropriate commercial purposes.”
[9] See Principle 4.3.5 of Schedule 1: “In obtaining consent, the reasonable expectations of the individual are also relevant.” and Section 5(3) of PIPEDA, which states: “An organization may collect, use or disclose personal information only for purposes that a reasonable person would consider are appropriate in the circumstances.” See also Royal Bank of Canada v. Trang, 2016 SCC 50.
[10] See for example, R. v. Marakah, 2017 SCC 59.
[11] CCH Canadian Ltd. v. Law Society of Upper Canada, 2004 SCC 13; Society of Composers, Authors and Music Publishers of Canada v. Bell Canada, 2012 SCC 36.
[12] Under the well-known Berne three-step test, an exception to copyright must be narrow in scope and reach, must not interfere with the normal exploitation of the work, and must not unduly prejudice the copyright owner.
[13] See, for example: European Union, Ethics Guidelines for Trustworthy AI; Singapore, Model Artificial Intelligence Governance Framework (second edition); IEEE, Ethically Aligned Design; ICO, Big data, artificial intelligence, machine learning and data protection; Australian Government, AI Ethics Principles. See also ITECHLAW, Responsible AI: A Global Policy Framework.
[14] For a summary of the rationales, see David Mangan, “The Relationship Between Defamation, Breach of Privacy and Other Legal Claims Involving Offensive Internet Content”, LCO Issue Paper (July 2017), at pp. 22-31, online: http://www.lco-cdo.org//srv/htdocs/wp-content/uploads/2017/07/DIA-Commissioned-Paper-Mangan.pdf.
[15] See, for example, the Council of Europe study, Algorithms and Human Rights.
[16] See Principle 4.8.1 of Schedule 1.