The AI Act: Three Things To Know About AI Regulation Worldwide
As AI proliferates, countries and their legal systems are trying to catch up. AI regulation is emerging at the industry level, at the city and county level, and at the country and region level. The European Union AI Act could well serve as a template for AI regulation around the world. In this post, we describe three key things you should know about AI regulation: the context (what is already around us), the AI Act (the key elements of the upcoming EU legislation), and what all this is likely to mean for businesses and individuals.
Context – What is already here
First, some (recent) history. The AI Act is not the first piece of AI regulation. The European Union's General Data Protection Regulation (GDPR), which took effect in 2018, has clauses that impact AI – notably text indicating a "right to explanation" – an area that affects AI algorithms and has been the subject of much debate since its introduction. Elsewhere, local regulations have been attempted, ranging from bans on the use of certain types of AI (such as facial recognition) to committees that examine the fairness of algorithms used in resource allocation. Countries have also enacted nationwide AI regulations and frameworks, such as Canada's recent regulations on privacy and the development of AI systems and the AI Governance Framework introduced by Singapore.
The European Union AI Act
The AI Act is a proposed European law on AI. It assigns AI usages to three risk categories: (a) systems that create an unacceptable risk, which will be banned, (b) systems that are high risk, which will be regulated, and (c) other applications, which are left unregulated. The exact criteria and specifics of the law are still being debated, and a number of institutions have identified exceptions and loopholes. That said, this upcoming law has the potential to shape AI regulations not just in the European Union but the world over, in the same way that the GDPR drove multinationals around the world to rethink their approaches to privacy and accountability in data management.
What you need to know
There are many regulations in development, and to make things even more complicated, they differ in their geographical or industry scope and in their targets: some target privacy, others risk, others transparency, and so on. This complexity is to be expected given the sheer range of potential AI uses and their impact. As a business, here are some of the critical things you need to know:
- Many of these regulations contain components that intersect AI and privacy. As such, following the regulations will likely require a well-defined data practice where user information is very carefully managed.
- Regulations should be evaluated to determine whether they impose a need for explainable AI, i.e., whether decisions made by AI algorithms must be explainable to humans.
- Regulations may involve a verification or test phase, where the behavior of the AI has to be well documented or perhaps subjected to external testing. Such testing can cover, for example, whether the AI exhibits bias.
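To make the last point concrete, here is a minimal sketch of what one such bias test might look like: it computes a simple demographic parity gap (the difference in approval rates between two groups). The data, the threshold value, and the function names are all illustrative assumptions for this post, not requirements drawn from any specific regulation.

```python
# Hypothetical bias audit sketch: compare approval rates across two groups.
# All values here are illustrative, not regulatory guidance.

def selection_rate(outcomes):
    """Fraction of positive (e.g., 'approved') decisions."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(group_a, group_b):
    """Absolute difference in selection rates between two groups."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# Illustrative model decisions (1 = approved, 0 = denied) per group.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 6/8 approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 3/8 approved

gap = demographic_parity_gap(group_a, group_b)
print(f"Demographic parity gap: {gap:.3f}")

THRESHOLD = 0.2  # illustrative cutoff; real thresholds are context-dependent
if gap > THRESHOLD:
    print("Potential disparity: document and investigate before deployment.")
```

In practice, a compliance workflow would run checks like this on real model outputs, log the results, and keep the documentation available for auditors; demographic parity is only one of several fairness metrics a regulation might reference.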
In all, ensuring compliance with current and emerging AI regulations will require businesses to maintain a disciplined data and AI operational practice (MLOps). A cohesive practice makes it easier to treat these regulations as connected requirements that can be addressed together.