AI and innovation: regulatory discussions at the House of Lords

Regulation is inevitable

The widespread use of Artificial Intelligence (AI) brings both tremendous opportunities and risks. As AI continues to revolutionise many sectors, the need for effective regulation becomes increasingly apparent. Recent legislative activity demonstrates this, including the EU AI Act and the UK's Automated Vehicles Act.

Recently, MDRx's COO and Head of Emerging Technology joined the Tony Blair Institute's Benedict Macon-Cooney and The Entrepreneurs Network at the House of Lords to discuss the power of AI regulation and the importance of mitigating risks and protecting the public interest while fostering innovation.

One of the key points highlighted was the success of flexible regulation, as exemplified by the Financial Conduct Authority (FCA), which describes itself as a "technology-agnostic, principles-based and outcomes-focused regulator" – see this recent post by Tom Grogan to find out more. This principles-based model is pro-innovation: it allows regulation to flex and evolve in response to technological advancements, ensuring rules remain relevant and effective.

Where regulation is predictable and easy to implement, it can encourage faster growth and innovation by opening markets rather than closing them down: it can increase competition and protect consumers and the environment. The controversial – but sensible – focus of the UK government on Frontier AI (demonstrated at the 2023 AI Safety Summit) means there is no heavy regulatory burden on growing Narrow AI businesses, but the threat of regulation to come is creating a state of uncertainty.

The challenge is finding the balance: heavy regulation, weak regulation and no regulation can all have a negative impact on innovation.

Our discussion highlighted several concerns about the capability of regulators to effectively regulate AI. Without that capability, the UK limits both its ability to attract innovative companies wanting to grow and develop emerging technology here and its ability to ensure quality and safety.

Despite the recent positive track record of some regulators, namely the FCA, three issues threaten to stall progress: (1) the level of regulatory understanding of and engagement with emerging technologies, (2) debates over regulatory scope and (3) the practicalities of implementing new regulation.

  1. Regulatory understanding and engagement

Across the regulatory and public sector there is limited AI expertise. Not only do these organisations struggle to compete with the high salaries on offer in the private sector, but their skills frameworks are not designed to attract, develop and retain experts. This is slowing progress and requires a radical rethink of the types of backgrounds and skills frameworks the public sector and regulatory organisations recruit for (spoiler alert: they do not need to be world-leading experts!) so that regulators can consistently and valuably engage with AI.

To state the obvious, effective regulation requires ongoing dialogue between regulators and stakeholders to address these concerns and foster a collaborative environment. Yet many attendees expressed concern about the current state of regulatory engagement, feeling they were not being given sufficient opportunities to meaningfully shape the regulatory landscape.

In defence of the regulators, small companies often lack the resources to engage with them. For a small organisation without a policy department with the time and effort to navigate regulatory bodies, engagement can feel like a barrier rather than an opportunity.

  2. Debates over regulatory scope

The focus on foundation models at the AI Safety Summit, and in the regulatory conversations that followed, has frustrated businesses that are keen to understand how narrow artificial intelligence will be regulated so they can invest wisely in development.

There is an argument for broadening the scope of regulatory focus; after all, many more businesses innovating in the UK are developing narrow AI. However, given the lack of regulatory capacity to cover these issues, it is sensible to start somewhere, or there is a risk of no progress at all.

Perhaps this provides an understandable excuse for regulators' limited engagement with many businesses.

  3. The delays of implementation

Let's keep in mind that flexible, proactive AI regulation does not create the right outcomes without implementation. Implementation is often where the resource limitations of both the regulators and the regulated start to bite. Investing in implementation infrastructure, and testing implementation approaches as part of the development cycle, means regulation can take effect without delay.

Our recommendations

To harness the power of AI while mitigating its risks, we propose the following recommendations:

Invest in a team that can deliver valuable regulatory engagement
  • Recruit people who understand the technology; they do not need to be world-leaders.
  • Update skills frameworks to develop and reward the knowledge and skills needed to craft regulation that can adapt to technological advancement.
Stay focused
  • Stick with a limited scope of work.
  • Set out a regulatory roadmap for the future, including at what point narrow AI will start to be considered, or how it will be incorporated into ongoing conversations.
Prepare for implementation
  • Develop and test implementation approaches alongside regulatory development.
  • Give businesses time to prepare and adapt to change, to avoid stifling innovation.