AI Regulation white paper – The National Robotarium response
AI regulation: a pro-innovation approach – August 2023
The revised cross-sectoral AI principles
We agree with the overall underpinning principles of the AI regulatory framework; however, the framework must also address various gaps and omissions. Notably, it is important that environmental impacts are recognised from the outset. Over the course of their lifecycle, many AI technologies require a large amount of energy and become significant emitters of carbon dioxide. The current regulatory regime does not appear to address the environmental impact of AI, which is believed to be responsible for some 4% of global emissions. The AI regulatory framework should therefore mandate prioritisation of the most efficient solutions or otherwise require publication of their environmental impact.
We also believe that, in order to provide an adequate regulatory framework, the definition of AI within the context of this paper must be clear. AI has a role to play in devices of all shapes and sizes – from apps on our mobile phones to complex imaging devices on subsea robots. It is therefore critical that the UK Government clearly defines AI and how regulation applies to AI and AI-enabled devices, in turn enabling developers and researchers to understand how regulation may affect their products in advance of development. It is particularly important to contextualise the use of AI and of data in society and across industrial sectors, with regulation focused on cases in which solutions may have a critical impact on safety, security, and personal health and wellbeing.
Transparency is rightly important within the current proposal, but the UK Government needs to acknowledge that a range of standards is already in place, including the IEEE Standards in Computer Networks, which the National Robotarium has been involved in developing. It will therefore be important to provide clarity on how this framework and its regulators will relate to existing cross-sectoral frameworks. Additionally, it must be made clear when developers and researchers must declare that their solutions are AI-enabled, as the stages from early development through testing, certification and widespread rollout may take many years. Given the potential for widespread and varied use of AI-enabled technologies, it is also critical to inform the general public in easily explainable terms, avoiding ambiguity and expert-only language.
The model of regulation in China perhaps offers a system that should be examined in more detail. It requires AI developers to demonstrate how explainable their model is, to explain the model's impact and, finally, to clarify how the model will be managed and, if required, how it can be safely shut down.
We also believe that some avenue for redress and appeal is important to any regulatory process. In particular, there should be greater clarity on the different types of stakeholders involved and their needs, how and when stakeholders are informed about an AI technology and its potential harms, and how redress routes can be accessed easily and within adequate timescales. For example, it should be clear whether the regulator or the developer is responsible and what proportionate and timely redress looks like.
Finally, the Government should consider the cost of regulation. For example, if industry is expected to bear the cost, there is a need to ensure costs are proportionate and do not place unnecessary charges on final consumers. Additionally, many AI solutions are used in free apps that the general public uses regularly; placing additional regulatory costs on industry may reduce their availability and stifle innovation.
A statutory duty to have regard
We agree with the overall approach that is proposed to the regulation of AI. However, there is a danger that in being light-touch, the UK could be perceived as being out of step with other regulators. Giving regulators greater power over time may be required to ensure the UK is in lock-step with other global regulators.
New central functions to support the framework
It is important that regulators and the central function have a good knowledge and understanding of the AI industry and the application of AI. Every sector is different, so the central function will need a broad understanding of how AI technology can be adapted for particular industrial, commercial or public uses across sectors. Critically, it is important that the regulatory regime is compatible with existing regulations and does not impose unnecessary recertification requirements on current technologies. Additionally, there must be greater clarity regarding how the new regulatory framework will interact with technologies incorporating AI-enabled robotics solutions, particularly how and when new AI regulation would fit into existing robotics regulation and certification schemes.
Regulator capabilities
There is a need to increase the number of people being taught AI skills at school and university level, and to ensure that these students learn not only core coding and AI skills but also the regulatory and legal frameworks within which they will operate. While the market will eventually regulate itself in terms of education requirements, there is currently only a small number of AI experts in the UK with the required overarching understanding, particularly in academia. Drawing on their knowledge therefore needs to be done in a way that is not too onerous, alongside reviewing the need for a nationally approved AI curriculum across all UK schools and universities, which are helping to build the next generation of AI experts.
Tools for trustworthy AI
Creating public trust is important. A simple requirement to notify the public of the use of AI on websites and apps would ensure the public is more aware of its use (and benefits). Consent could be managed in much the same way as cookie consent is currently handled. Further questions may, however, arise when we consider particular technologies – such as vision technologies, which historically have had the potential to be misused – highlighting the need to provide clarity about how and why such technologies can be trusted.
We also believe that the Office for AI should look at whether there is scope to convert best practice into tools. Commercial companies are not incentivised to create such tools, while funding bodies largely support new research rather than investing in the continued maintenance and improvement of existing tools. Government support to maintain and improve regulatory tools, either by incentivising private technology companies or through academic maintenance funds, would therefore be helpful in ensuring robust and responsible tools.
Foundation models and the regulatory framework
The regulation of LLMs and open-source sites is particularly complex, and there is a balance to be struck between supporting innovation and appropriate levels of regulatory oversight. Nevertheless, it is important that LLM outputs are faithful and auditable, with clear referencing to material that is accurate and open to scrutiny as part of the regulatory process. The same applies to open-source sites: where they are in the public domain as websites, apps and so on, it is important that their information is accurate and auditable.
Additionally, there must be greater clarity on the different potential uses of open-source and closed-code technologies in the final regulatory framework. The original author of open-source code may not be the person who causes harm, so the framework should clearly define what constitutes LLM misuse and who is ultimately responsible.
AI sandboxes and testbeds
The proposed sandbox methodology of a single sector with multiple regulators needs to be more clearly defined, i.e. whether the sector in question is the one developing the AI technologies or the other sectors that use them.
There may be greater advantages in a single regulator for the sector, rather than multiple regulators. Multiple regulators may result in a lengthier approval process, potential conflicting views between regulators, and a lack of understanding about the technology. A single regulator, by contrast, may have a better understanding of technology and its potential impact.
Additionally, the UK Government should consider the need for further regulatory guidance on healthcare applications and AI applications available to children and young people, given the particular sensitivity around the risks and social impact of such technologies. Special consideration should be given to how the sandbox model might work in these circumstances, taking account of the fact that apps providing healthcare support are now classed as medical devices and are open to regulation by the MHRA.
The question of the frequency of sandbox testing should also be examined. Prototypes and proposals may be brought to the sandbox for consideration, but online learning means that AI devices may significantly change over time. This will create new complications for the regulation of AI. Additionally, if one product is applicable in multiple sectors, there needs to be greater clarity on whether it would need to be re-tested in every sector – making the process lengthy and expensive – or whether it should be allowed to carry forward approvals from other sectors.