Regulating AI sounds good in theory but presents many challenges in practice. The key is to protect the public from material risks of harm without over-regulating technological progress and innovation. One of my major criticisms of AIDA, Canada’s proposed artificial intelligence law, was that it would over-regulate. My morning inbox reinforced this concern with two articles. The first was an article by Florian Mueller, “OpenAI’s Sora latest major AI technology to be withheld from EU, UK markets — at least for now and presumably due to overregulation”. The second was a speech by the Master of the Rolls, “AI and the GDPR”. Both highlight the dangers of over-regulation.
The Mueller article provides examples of AI innovations not being launched in the EU, likely because of the EU AI Act (AIA), the GDPR, or the EU Digital Markets Act (DMA). The examples given are OpenAI’s new text-to-video model Sora, Apple Intelligence, and offerings from Meta and Google.
The UK Master of the Rolls’ speech focused on the EU AIA and the GDPR as creating impediments, including perhaps unintended ones, to the adoption and development of AI. His thesis is summarized as follows:
It is interesting, I think, to notice how, in the field of AI, the distinction between private law and regulation has already become blurred. That is something that has happened also in relation to digital assets and digital trading, where many countries have regulated and, in the course of so doing, have introduced changes to their private law by a sidewind. This is something that I regard with a little scepticism, but, in relation to digital assets and digital trading, will have to be left for another lecture.
As technology advances, it is important, I think, not to impede its beneficial adoption by premature regulation, before the dangers posed by those technologies are clearly understood. I am not saying that that has happened, but it is something to which the EU and its member states and other countries should be alive. This point was made specifically in relation to the EU’s AI Act in Mario Draghi’s report last month on the future of European competitiveness. [4] He said expressly that:
“the EU’s regulatory stance towards tech companies hampers innovation: the EU now has around 100 tech-focused laws and over 270 regulators active in digital networks across all Member States. Many EU laws take a precautionary approach … For example, the AI Act imposes additional regulatory requirements on general purpose AI models that exceed a pre-defined threshold of computational power – a threshold which some state-of-the-art models already exceed” (emphasis added). [5]
It is also important to ensure that impediments are not placed in the way of technologies that facilitate international commerce. Inadvertent changes to municipal laws that are widely used in international trade can create problems for frictionless technology-assisted trade, particularly where such changes do not align with each other and with internationally applicable regulatory regimes.
The first example he gives concerns the GDPR’s rules on automated decision-making. (The GDPR and CPPA proposals are summarized here.) He notes that the GDPR has been given a prohibitive interpretation “that effectively prevents Governments, global corporations, and SMEs utilising automated decision-making wherever it would affect people’s individual rights, unless the process is specifically authorised by statute or consented to.” He goes on to point out that there “are likely to be many instances where these entities are already using AI in this way or in a way that comes very close to the article 22 situation”. The second example relates to the challenges of using publicly available data for training AI systems, including problems with open source software code.
The Master of the Rolls concluded his speech by saying:
So, let me return to the thesis that I mentioned in my introduction. The two AI “problems” that I have highlighted this evening, are problems created in part at least by regulation getting ahead of private law. Domestic legislation can create exceptions to the article 22 problem, and allow automated decision-making about individuals in defined circumstances. I am not aware that it has yet done so in any of the jurisdictions likely to be represented here tonight.
We all need to be careful not to impede the development and adoption of new technologies, whilst also being astute to ensure that people’s basic human rights are not infringed by the new processes. The European Law Institute’s annual conference is being held here in Dublin for the rest of this week. The question I have identified will be the focus of our discussions. We are fortunate to have many distinguished and expert speakers with us to discuss these problems…
You may think that the two problems I have spoken about are nerdy and over-technical. The problem is that future generations will, I think, speaking for myself anyway, want to make use of AI, LLMs and automated decision-making to improve their everyday lives. The lawyers and legal systems perhaps owe it to consumers, present and future, that we protect them from real cyber-abuses without preventing or hampering innovation.
It appears that Bill C-27 will not make it through Parliament before the election. This will give ISED and the government time to rethink their approach to regulating AI and privacy. It will also give them time to consider how best to avoid disadvantaging Canadians through over-regulation or the unintended consequences of new privacy provisions like the GDPR’s automated decision-making rules.