A compass to guide us toward intelligent AI supervision
Denis Beau, First Deputy Governor of the Banque de France
Published on 27 October 2025
Conference on Artificial Intelligence and Financial Stability
Lisbon, 27 October 2025
Opening remarks by Denis Beau
First Deputy Governor of the Banque de France and Chairman designate of the ACPR
Ladies and Gentlemen,
Let me begin by thanking the Banco de Portugal and its Governor, Álvaro Santos Pereira, for their invitation to this event, which I am delighted to attend.
Artificial Intelligence (AI) is increasingly transforming the financial sector. A recent survey conducted by the ACPR shows, for example, that nearly all banks and insurance companies in France now operate AI systems. The stated objectives are to enhance operational efficiency, improve customer service, and help better manage risks.
However, the growing adoption of AI in the financial sector also carries risks. First, for financial stability – consider the dependency of financial institutions on major AI model providers, which are also key cloud service providers. Second, for the solvency of individual institutions, since poorly managed use of complex systems can lead to substantial losses. And finally, quite obviously, for consumers.
These risks help explain Europe's regulatory framework for AI, which aims to ensure that the technology is deployed in a controlled manner. This framework includes, of course, the European AI Act, but also – and this must be kept in mind – sectoral regulation, which applies to AI as it does to any other technology used by financial players.
In this context, we financial supervisors now face the complex question of the “right” way to oversee AI: how should the AI Act and sectoral rules be applied to this rapidly evolving technology? Which systems should be examined? How, and to what extent?
In these opening remarks, I would like to share the compass that guides us at the ACPR in answering these questions: simplicity and the pursuit of efficiency. In terms of the rules to be applied, this compass can help us build a coherent overall framework (I). From a supervisory perspective, it can help us define high-level principles for effective and efficient oversight of AI systems (II).
I/ As regards the applicable rules, one key issue still to be clarified as we speak is how the requirements of the AI Act will be integrated into the financial regulatory framework.
To shed light on this, a major mapping exercise has been under way at European level for nearly a year, led by the European Supervisory Authorities. Its initial findings are reassuring: no major contradictions have been identified. However, having identified the various rules applicable to AI, we still need to explain how they fit together in principle and how they will be implemented in practice.
Clarifying how these rules fit together falls primarily to the European Commission, which will publish guidelines on the subject in the coming months. However, how financial supervisors implement these standards – and the choices they make – will be crucial in determining the actual impact of AI regulation on financial institutions.
In this regard, I believe we must avoid a literal interpretation of the texts and instead favour a convergent and constructive one that emphasises commonalities, with the objective of identifying what needs to be verified or reported only once.
To illustrate my point, consider the risk management system, for which the AI Act itself stipulates that its requirements may be integrated into, or combined with, the relevant provisions of EU law. A constructive interpretation should lead us to ask financial institutions to address only the ‘new elements’ of the AI Act, such as the risks of discrimination or algorithmic opacity. Everything else – such as the requirements for internal governance arrangements, processes and mechanisms provided for in the CRR/CRD framework, or the cyber risks covered by DORA – would be deemed to satisfy the AI Act once the corresponding sectoral requirements are met. It would then be up to the various supervisors to share the relevant information with one another, as there should be no question of carrying out redundant checks.
Our ultimate goal should be to organise the oversight of AI systems in the financial sector in such a way as to limit risks not only from the perspective of the AI Act, but also in terms of our other missions: financial stability and consumer protection. To this end, we must make the most of the synergies with our existing supervisory activities, in line with the simplicity and efficiency that I mentioned earlier.
II/ This brings me to the second part of my remarks: how to supervise AI systems in the financial sector effectively and efficiently. First, we must apply the principles of “market surveillance” that underpin the AI Act. This does not mean continuously monitoring all AI systems in the financial sector; rather, we must adopt a risk-based approach that enables us to identify and focus on the systems that pose significant risks.
Being selective in the systems we examine does not mean settling for minimal oversight. Quite the opposite. This selectivity should enable us, when necessary, to conduct in-depth reviews of AI systems – not just administrative checks, but “under-the-hood” inspections of algorithms to examine and discuss their technical characteristics.
To conduct these selective yet potentially deep inspections, we will clearly need to develop a methodology for assessing AI systems. It should cover system governance as well as characteristics such as performance, robustness, fairness, explainability, and cybersecurity. Some of these elements are relatively familiar in a sector where many processes have long relied on models. Others are entirely new, especially the challenge of explaining AI algorithms that grow more opaque as the technology advances.
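To give a concrete flavour of what such an assessment might quantify – purely by way of illustration, not as a description of an ACPR method – the short Python sketch below computes two of the characteristics just mentioned, performance and fairness, for a hypothetical binary credit-decision model. The synthetic data, the model's decisions, and the choice of fairness indicator (a simple demographic parity gap) are all assumptions made for the example.

```python
# Illustrative sketch only: quantifying "performance" and "fairness" for a
# hypothetical binary credit-decision model. All data below are synthetic
# stand-ins for what an inspection might collect from a supervised entity.
import numpy as np

def accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Performance: share of decisions matching observed outcomes."""
    return float(np.mean(y_true == y_pred))

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Fairness: absolute difference in approval rates between two groups.

    Values near 0 suggest similar treatment; a large gap would flag the
    system for a deeper, "under-the-hood" review.
    """
    return float(abs(y_pred[group == 0].mean() - y_pred[group == 1].mean()))

rng = np.random.default_rng(seed=42)
y_true = rng.integers(0, 2, size=1_000)  # observed repayment outcomes
y_pred = rng.integers(0, 2, size=1_000)  # model's approve/deny decisions
group = rng.integers(0, 2, size=1_000)   # protected-attribute membership

print(f"performance (accuracy): {accuracy(y_true, y_pred):.3f}")
print(f"fairness (parity gap):  {demographic_parity_gap(y_pred, group):.3f}")
```

In practice, an assessment methodology would of course combine many such indicators with qualitative review of governance and documentation; the point here is only that several of these characteristics can be measured and compared against thresholds.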
And we need to work on this methodology without delay. It will have the advantage of enabling us to gradually refine our expectations of financial institutions, and thus to support them more effectively. In a shifting regulatory and technological landscape, we have a crucial role to play in helping institutions implement the “right” risk management tools.
This is certainly an ambitious programme, and an urgent one. It requires supervisors to build expertise across all AI-related topics. That means recruiting specialised staff, which is no small challenge. We will also need external support, particularly through partnerships with specialised research institutes. And supervisors will face a pressing need to cooperate nationally, at European level, and beyond.
Finally, I believe we must aim to co-develop assessment methodologies with the financial sector, as supervisors and supervised entities share many challenges on these issues. At the ACPR, we have recently organised methodological workshops with volunteer institutions on complex topics such as algorithmic fairness and explainability. These workshops help us move faster, and more concretely, toward what “trustworthy AI” could look like in the financial sector.
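Explainability, in particular, lends itself to simple model-agnostic probes. As a purely illustrative example of the kind of technique such workshops might examine – again, not a description of the ACPR's actual toolkit – the sketch below applies permutation importance to a hypothetical opaque model: shuffling one input at a time and measuring how much predictive quality degrades, which reveals which features actually drive the decisions.

```python
# Illustrative sketch only: permutation importance, a model-agnostic
# explainability probe. The "opaque" model and the data are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=0)
X = rng.normal(size=(500, 3))        # three input features
weights = np.array([2.0, 0.5, 0.0])  # feature 2 has no real effect
y = X @ weights + rng.normal(scale=0.1, size=500)

def opaque_model(X: np.ndarray) -> np.ndarray:
    """Stand-in for a model whose internals the supervisor cannot read."""
    return X @ weights

def permutation_importance(X, y, n_repeats=10):
    """Score drop when each feature is shuffled: bigger drop = more important."""
    baseline = -np.mean((opaque_model(X) - y) ** 2)  # negative MSE
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            shuffled = -np.mean((opaque_model(Xp) - y) ** 2)
            drops.append(baseline - shuffled)
        importances.append(float(np.mean(drops)))
    return importances

for j, imp in enumerate(permutation_importance(X, y)):
    print(f"feature {j}: importance {imp:.3f}")
```

Because it treats the model as a black box, a probe of this kind can be applied even where the algorithm itself is too complex to inspect directly, which is precisely the difficulty raised by increasingly opaque AI systems.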
In conclusion, I would like to stress that AI surveillance, beyond its intrinsic importance, can serve as a laboratory for our other missions, paving the way for new supervisory methods that are not only risk-based, but also incorporate the ever-growing technological dimension of financial processes. This naturally leads to another topic we may explore further in our discussions: the deployment of new technologies for our internal use – what we call the “SupTech” approach. This is indeed essential to maintaining our effectiveness in the future.
Thank you for your attention.