Toronto Centre was invited by the Bank for International Settlements to contribute a chapter to their upcoming publication discussing the future of financial supervision.

TABLE OF CONTENTS

Introduction

The Supervisor’s Job: What Changes with AI?

AI: Known and Unknown Risks and Opportunities

Supervisory Life Cycle

Data Collection and Monitoring: AI for Continuous Surveillance

Risk Identification and Assessment: AI for Enhanced Analysis

Supervisory Intervention: AI for Decision Support, Not Decision-Making

Continuing Oversight and Market Adaptation: AI for Continuous Learning

Essential Elements of the Job Remain the Same

Key Areas of Supervisory Focus with AI

Supervising Technology-Related Risks for National Cybersecurity

Supervising the Ethical Use of AI in Financial Institutions

Supervising New and Emerging Players Offering Financial Services

The Supervisor of the Future: Skillsets and Mindset

Challenges to Supervisory Skillsets and Mindset

Integrating Specialist Technology Supervision Skillsets

Leveraging SupTech for Effective Supervision

Taking Timely and Effective Supervisory Action

Collaborating Effectively with Other Public Agencies

Conclusion

References

THE SUPERVISOR OF THE FUTURE

NAVIGATING ARTIFICIAL INTELLIGENCE AND TECHNOLOGICAL CHANGE

Introduction

Technological advancements, along with other global forces such as climate change and geopolitics, are driving rapid and fundamental changes in the economy and society. The financial system is no exception. Technology is changing the mode of delivery, the players, and the nature of financial services in both developed and developing economies.

Technological advancements are nothing new to the financial system. However, Artificial Intelligence (AI), and in particular Generative AI – which simulate human learning and decision-making – accentuate known risks, generate new risks, and create opportunities. Toronto Centre (2022b) set out the supervisory implications of AI in detail. This Toronto Centre Note discusses what changes for the supervisor of the future with AI, the key areas of supervisory focus, and the skillsets and mindset needed to meet these challenges.

The Supervisor’s Job: What Changes with AI?

AI: Known and Unknown Risks and Opportunities

One set of known risks arises from financial institutions’ increasing reliance on technology: risks of system (software and hardware) and operational failures, increased exposure to cyberattacks, and outsourcing risks. The risks of contagion and systemic failure within the financial system are also high, not least because global information technology (IT) services are dominated by a small number of providers. Financial systems are also connected across borders. While this offers speed and access in transactions, it also makes financial systems more vulnerable to financial crime and other contagion risks.

Another set of (partly) known risks, particularly from increasing AI use, arises from the opaque “black box” nature of AI systems used by financial institutions to manage risks, interface with their customers, and make business decisions. While technology black boxes are known risks, AI black boxes can create new risks arising from intentional or unintentional biases, depending on the dataset on which they were trained. When financial institutions rely excessively on AI systems – for example, to make credit decisions, give investment advice to investors, price insurance coverage, or make pension fund investment decisions – they risk giving up human control and the ethical considerations that guide human action. There may also be some currently unknown risks that, by definition, cannot be described at this stage.

Another unknown is the extent to which new players, in particular the Big Tech firms, will increase their presence in providing financial products and services. These firms are not fully within the current regulatory perimeter and have a comparative advantage in using AI systems and accessing large datasets to train them. The full supervisory implications of these developments will challenge the supervisor of the future.

AI also presents opportunities for both financial institutions and supervisors: 

  • In financial institutions, the use of AI can increase the effectiveness and efficiency of operations, risk analysis in credit decisions, insurance underwriting and asset allocation, and the detection of fraud and financial crime.
  • Accessibility and convenience for users of financial services can promote financial inclusion. 
  • A supervisory authority can also use AI to improve supervisory processes, risk analysis, and speed of response in supervisory actions. 

Supervisory Life Cycle

AI systems can help supervisory authorities with enhanced and, in some cases, real-time data collection, and speed up risk assessment analysis. Yet that is only part of their function. The supervisory life cycle involves multiple stages: data and information collection, risk identification and assessment, supervisory intervention, and continuing oversight. AI and technology can help across this life cycle, but human judgment remains essential for interpreting findings, determining supervisory actions, and monitoring progress. 

Data Collection and Monitoring: AI for Continuous Surveillance

Supervisory authorities traditionally collect information from regulatory reporting, periodic data submissions from financial institutions, and on-site visits. AI-driven SupTech solutions can enhance this process by enabling:

  • Real-time data collection and anomaly detection. AI can analyze vast amounts of structured (balance sheets, capital ratios) and unstructured (news, social media, transaction patterns) data to detect early warning signals.
  • Natural Language Processing (NLP). Used to scan regulatory filings, financial disclosures, board and committee minutes, annual reports, and customer complaints to identify risks.
  • Machine Learning (ML) for predictive monitoring. Supervisors can use ML models to forecast financial distress by identifying patterns in financial institution behavior and macroeconomic conditions.
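
The anomaly-detection idea in the first bullet can be sketched very simply. The sketch below flags outliers in a reported ratio series using a rolling z-score; the window size, threshold, and data are illustrative assumptions, not a reference to any actual SupTech platform:

```python
# Minimal sketch: flag anomalous values in a reported ratio series
# using a rolling z-score. Window and threshold are illustrative.
from statistics import mean, stdev

def flag_anomalies(series, window=8, threshold=3.0):
    """Return indices where a value deviates from the trailing window
    by more than `threshold` standard deviations."""
    flags = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        mu, sigma = mean(past), stdev(past)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            flags.append(i)
    return flags

# Hypothetical capital-ratio submissions; the sudden drop stands out.
ratios = [0.125, 0.124, 0.126, 0.125, 0.127, 0.126, 0.125, 0.126,
          0.124, 0.125, 0.089]
print(flag_anomalies(ratios))  # → [10]
```

In practice, a supervisory authority would apply far richer models to many indicators at once, but the principle is the same: the tool surfaces early warning signals for a human supervisor to investigate.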

Risk Identification and Assessment: AI for Enhanced Analysis

Once data are collected, supervisory authorities must assess risks and prioritize interventions. AI can support:

  • Risk scoring models. AI can develop predictive risk scores for financial institutions, factoring in their financial health, governance structures, financial resources, and external risk indicators.
  • Interconnected risk mapping. AI can assess systemic risks by mapping the relationships between financial institutions, cross-border exposures, and third-party dependencies (e.g., cloud providers).
  • Automated stress testing and scenario analysis. AI enhances stress testing by rapidly simulating different economic and financial shocks.
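
The interconnected risk-mapping idea can be illustrated with a small exposure graph. The institutions, exposures, and propagation rule below are entirely hypothetical; the sketch traces which entities are exposed, directly or indirectly, to the failure of a given node:

```python
# Minimal sketch: trace exposure, direct or indirect, to a failing
# node in a hypothetical dependency graph (key depends on values).
from collections import deque

exposures = {
    "BankA": ["CloudProviderX", "BankB"],
    "BankB": ["CloudProviderX"],
    "InsurerC": ["BankA"],
    "PensionFundD": ["InsurerC", "BankB"],
    "CloudProviderX": [],
}

def exposed_to(failed, graph):
    """Return institutions depending on `failed`, directly or via
    intermediaries (breadth-first traversal of reversed edges)."""
    reverse = {n: [] for n in graph}
    for node, deps in graph.items():
        for dep in deps:
            reverse[dep].append(node)
    seen, queue = set(), deque([failed])
    while queue:
        node = queue.popleft()
        for upstream in reverse[node]:
            if upstream not in seen:
                seen.add(upstream)
                queue.append(upstream)
    return sorted(seen)

print(exposed_to("CloudProviderX", exposures))
# → ['BankA', 'BankB', 'InsurerC', 'PensionFundD']
```

A sketch like this makes concrete why the concentration of IT services in a few providers matters: a single upstream failure can touch every institution in the chain.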

Despite these advancements, risk assessment requires human interpretation. AI can signal abnormalities, but line supervisors must contextualize risks, identify false positives, and apply supervisory judgment.

Supervisory Intervention: AI for Decision Support, Not Decision-Making

Having arrived at a risk assessment, the supervisory authority has to follow through with supervisory intervention of appropriate timeliness, intensity, and focus. AI can support this with:

  • Recommendation engines. AI can suggest supervisory actions based on historical precedents and regulatory frameworks, but cannot account for political, legal, or ethical considerations.
  • Automated document review and case management. AI tools can flag compliance breaches by scanning documents under review and create a case management workflow for the supervisor to follow up with the financial institution to correct these breaches.
  • Behavioural analytics. AI can analyze past compliance behaviour of financial institutions to predict their likelihood of adhering to supervisory requirements.  
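
The distinction between decision support and decision-making can be made concrete. In the hypothetical sketch below, a model output is mapped to a suggested action tier; the tiers and thresholds are illustrative assumptions, and the final decision remains with the human supervisor:

```python
# Minimal sketch: AI output as decision *support* — a suggested action
# tier that a human supervisor must review and confirm. Tiers and
# thresholds are illustrative assumptions.
def suggest_action(risk_score, open_breaches):
    """Map a model risk score (0-1) and a count of unresolved breaches
    to a suggested supervisory action; the final decision stays human."""
    if risk_score >= 0.8 or open_breaches >= 5:
        return "escalate: recommend formal intervention review"
    if risk_score >= 0.5 or open_breaches >= 1:
        return "engage: schedule follow-up with the institution"
    return "monitor: no action suggested"

# The supervisor sees the suggestion plus its drivers, then decides.
print(suggest_action(0.62, open_breaches=2))
# → engage: schedule follow-up with the institution
```

Note that the system only recommends; it cannot weigh the political, legal, or ethical considerations discussed above, which is why sign-off stays with the supervisor.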

However, supervisory intervention requires professional skepticism and enforcement expertise. Supervisory authorities must navigate industry pushback, market stability concerns, and possible unintended consequences of supervisory actions.

Continuing Oversight and Market Adaptation: AI for Continuous Learning

Supervision is not a one-time process. AI can play a role in adaptive learning and improving supervisory frameworks by:

  • Continuous learning models. AI can refine risk models over time based on new data and supervisory outcomes.
  • Dynamic policy adjustment. AI-driven analytics can help supervisory authorities refine capital requirements, risk thresholds, and compliance protocols in response to emerging risks.
  • AI for supervisory training. AI-powered simulations can help train line supervisors on complex scenarios, enhancing their decision-making capabilities.

Essential Elements of the Job Remain the Same

Even with the transformative power of AI, the supervisor of the future has the same broad mandate as the supervisor of today: maintaining public confidence and trust in a financial system underpinned by markets that are efficient, fair, and transparent.

This mandate has many dimensions, including safeguarding depositors’ money; protecting investors; making sure financial institutions meet their promises to investors, the insured, and pensioners; maintaining financial stability; combatting financial crime; and (where they have the mandate to do so) promoting greater financial inclusion.

The core characteristics required of the supervisor of the future remain unchanged from those of the supervisor of today – high integrity, a commitment to public service, curiosity about the latest developments, alertness to risks in the financial sector, and the ability and willingness to learn quickly on the job. The supervisor of the future will need to be even more nimble, agile, and innovative to respond to the challenges of AI.

How do technological advancements and AI change the job of the supervisor of the future? We go back to the core elements of the supervisor’s job today, which are to: 

  • Evaluate financial institutions for risks that may compromise their safety and soundness, weaken consumer and investor protection, make them vulnerable to financial crime, or adversely affect financial stability and inclusion. 
  • Assess whether financial institutions have the governance, risk management, other internal control structures, and sufficient financial resources to manage and mitigate these risks and direct them to take remedial action if they do not.
  • Prepare for potential crises in the financial system that may arise from financial or other shocks – for example, climate change, Big Tech disruption, or geopolitical or cyber risks. 

All these require critical thinking and judgment. No two financial institutions have exactly the same business model and risks, or the same governance, risk management, internal control structures, and financial resources. Even with AI advancements, it is not possible to feed a financial institution’s data into an AI system and automatically generate a supervisory plan or remediation actions.

While AI systems can help supervisors with enhanced and, in some cases, real-time data collection and speed up the risk assessment analysis, that is only part of the supervisor’s job. Having arrived at a risk analysis, the supervisor must follow through with timely supervisory actions of appropriate intensity and focus.

We should not underestimate the judgment and stamina that supervisors need to push through corrective actions in financial institutions. For example, consider retail misconduct or the mis-selling of investment products due to inappropriate commission structures in financial institutions. It takes a human mind to understand and then correct incentives that drive human behaviour in financial institutions. The supervisor needs to detect those “perverse” incentives, direct the institution to correct them, and then monitor the follow-through. If the root cause of perverse incentives is not corrected, the same issues of misconduct will reoccur. It is unlikely this job can be done by AI.

Key Areas of Supervisory Focus with AI

Rapid technological change and AI have highlighted some key areas of focus for the supervisor of the future:

  • Supervising technology-related risks in the context of national cybersecurity.
  • Supervising the ethical use of AI in financial institutions.
  • Supervising new and emerging players offering financial services.

Supervising Technology-Related Risks for National Cybersecurity

In the past, the most extensive use of technology was typically by the largest and best-resourced financial institutions, and a supervisor taking a risk-based approach might focus technology risk supervision mainly on those institutions. Now, with technology used increasingly by all financial institutions, including in emerging markets and in critical market infrastructures, the supervisor of the future has to focus even more on technology-related risks across the financial sector.

Increased technological use has moved operational risk management and operational resilience to the forefront. In their less severe forms, technology-related operational failures may compromise the availability and reliability of services to consumers; at their most severe, they can threaten the safety and soundness of financial institutions and financial stability. Drivers of these risks include:

  • The complexity of interconnected technology systems (including co-existing legacy and new systems) with weak or poorly understood links that compromise system reliability.
  • The outsourcing of functions such as data storage and processing, where financial institutions may not be able to exercise full oversight or control of these functions.
  • The dominance of a small number of large global providers of essential IT services to financial institutions, heightening the risk of concentration, contagion, and systemic failure.
  • The opportunities for financial criminals and other bad actors who can now achieve large financial gains with a remote cyberattack. 

The incentives for malevolent actors to carry out cyberattacks are growing. With a global rise in geopolitical tensions, cyberattacks by state and non-state actors are a tactical weapon in modern warfare. A cyberattack on financial systems may be one of the fastest ways to wreck an economy and destroy public confidence.

For financial criminals, the risk-benefit calculation has fundamentally changed with technology. Compared to physically robbing a bank branch, for example, transnational financial criminals can operate from anywhere with anonymity, masking their locations and leaving themselves an easy escape route. A successful cyberattack has the potential to achieve enormous gains. Generative AI has also given criminals new and low-cost tools to create deepfakes to scam victims. The proceeds of their crimes are often used to fund terrorism and other organized crime, perpetuating a cycle of violence and instability.

The supervisor of the future stands with other government agencies at the front line of national defense and cybersecurity. The supervisor has to remain vigilant in supervising all financial institutions for technology risk. A successful cyberattack through the weakest link – perhaps a financial institution with weak internal controls – could cripple an interconnected system. This would quickly erode public confidence in the financial system and public institutions.

The supervisor of the future has to step up, and other government agencies will need to recognize the supervisor as a valuable partner in the national cybersecurity strategy. This has implications for the skillsets and resourcing of the supervisor, and how it cooperates with other government agencies: 

  • The supervisor of the future has to compete with the private sector to hire the specialist skills to supervise technology-related risks and incorporate technology risk supervision (including using supervisory technology or SupTech tools) as a core part of its on-site and off-site supervisory processes. This requires setting clear supervisory expectations for, and robust supervision of, financial institutions’ cybersecurity, business continuity, and disaster recovery arrangements.
  • Crisis preparedness exercises run by the supervisor with the financial sector should test scenarios of catastrophic failures in key national technology infrastructure due to operational failures, cyberattacks or geopolitical conflicts. 
  • The supervisor should improve its own cybersecurity defenses along with business continuity and disaster recovery arrangements. The supervisor is also a prime target for cyberattacks (both for financial gain and disruption). 
  • The supervisor has to be well-positioned to combat financial crime. It must also be plugged into the national cybersecurity and anti-crime structures, with established protocols of cooperation and information-sharing with other government agencies, both domestic and foreign. The usual partners for the supervisor would be the law enforcement agency, anti-fraud centres, and the financial intelligence unit. This partnership may need to extend further to cooperation with telecommunications authorities and national cybersecurity agencies. 
  • The supervisor must keep up with new international developments in cyberattacks and financial crime typologies through cooperation with other supervisors globally. 

Supervising the Ethical Use of AI in Financial Institutions

Along with the known risks of cyberattacks and operational failures, the supervisor of the future faces the rapid adoption of AI by financial institutions. The benefits may be significant for emerging markets and developing economies, particularly in financial inclusion. More accurate credit scoring and product pricing using cost-effective AI may expand customers’ access to loans. Insurance underwriting and pricing may similarly become more accurate, with better AI-assisted underwriting models giving rise to innovation in new insurance products. Automated financial advice can be cheaper than using human advisers, extending robo-advice services to a wider range of investors (Toronto Centre, 2022b).

However, the use of AI by financial institutions also presents challenges and known risks to the supervisor of the future, although the extent of their impact on supervisory objectives remains uncertain:

  • The opaque nature of AI makes its workings and decisions difficult to explain, and therefore difficult to remediate.
  • Ethical rules may not have been built into AI systems. It is up to humans to write these rules into the systems or to set appropriate “guardrails” and governance arrangements around the AI systems.
  • AI systems must be shown, through extensive routine and stress testing, to produce decisions that fall consistently within a defined range of outcomes.

These challenges may lead to a serious governance issue within a financial institution when the board, senior management, and risk managers of the institution do not fully understand how the AI system works and cannot effectively set limits or controls around it.

In a sense, this is not a new supervisory issue. As computing power increased and financial institutions started using more computer risk models in their risk management and business decisions, supervisors were often confronted with situations where the board, senior management, and staff did not understand fully how these “black box” models worked.

However, the use of AI has brought new risks. With the earlier black box risk models, there was a stable algorithm whose parameters were set by humans, who could correct them if needed. Now, the algorithm may be constantly evolving as the system engages in deep learning. It is unclear how a financial institution can examine and correct such an algorithm if it leads to systematically biased or risky analysis and decisions.

In addition, AI systems do not yet have in-built ethical rules. AI systems are not neutral or objective, as they are trained on certain datasets that may be biased or discriminatory. Decisions made by an AI system in a financial institution may run counter to consumer and investor protection objectives and other supervisory objectives. For example, credit or underwriting risk scoring by AI trained on a biased dataset may discriminate on the basis of gender, age, race, geographic location, or other factors. Together with the opaqueness and evolving algorithms of AI, this creates a dangerous mix in which it becomes difficult to pinpoint, and therefore to correct, the problem.

The supervisor of the future is not alone in facing this challenge. There are calls for a coordinated global response to the challenges of AI, including its ethical challenge. IMF (2023) has drawn a parallel to cooperation on the global issue of climate change, with the Paris Agreement serving as the global framework for coordinated action.

As with climate change, it may take time to forge a global consensus on the ethical use of AI and coordinate action on global standards. In the meantime, AI technology is proceeding rapidly, and financial institutions are motivated to adopt AI as quickly as possible to save costs and to remain competitive.

However, the supervisor cannot remain passive or noncommittal about the use of AI in financial institutions, even while the global standards are still being worked out. The supervisor has to signal to financial institutions that their use of AI is being watched and set supervisory expectations. The OECD (2019) and the European Union (2019) have set out high-level principles for trustworthy AI, and some supervisors have set out good practices and guidance on supervisory considerations for the use of AI.

With the widespread use of AI in the financial sector and in society, the supervisor of the future must prepare a suitable national policy framework, drawing on global standards, to deal with the risks. In day-to-day supervision, supervisors could draw on their experience in supervising black box models and apply the same principles to supervising AI models. For example, supervisors have required financial institutions to demonstrate that they meet requirements on data (accuracy, availability, comprehensiveness, etc.) before they are allowed to use models to calculate regulatory capital and solvency requirements. Financial institutions have to test and validate their models. These supervisory principles could be a starting point for supervisors aiming to supervise the AI models used in financial institutions.

Supervising New and Emerging Players Offering Financial Services

Technology has lowered the cost barriers to providing financial services. As a result, more providers – non-bank financial institutions such as investment funds, insurance companies, and pension funds – are undertaking more financial intermediation (Financial Stability Board, 2024). Also competing against these financial institutions are firms that come from the technology space, armed with vast customer datasets and data management and technological expertise:   

  • Big Tech firms with core businesses in e-commerce or social media have leveraged their customer base and expertise in data management and analysis to enter the financial services space.
  • Other technology firms such as start-ups that offer financial services primarily through digital means are competing with traditional financial institutions in certain market segments, such as retail payments. They may partner with telecommunication companies to extend their reach to customers without incurring the costs of building brick-and-mortar branch networks. 

These “new” technology players in the financial services space have brought innovation, efficiency, and competition to the market, improving financial inclusion in developing economies in particular. However, they have also challenged models of how financial supervision and regulation are structured.

Regulation and supervision are currently structured around sectoral lines – banking, insurance, pensions, securities. International standard setters are still grappling with how the current entity-based or activity-based authorization model can be effectively applied to these new players, in a way that creates a level playing field with existing financial institutions. Some models have emerged, each with its advantages and disadvantages: restriction, segregation, inclusion, or a combination (BIS, 2023).

The Big Tech firms are often global players that provide critical IT services to financial institutions. This adds concentration risk and financial stability implications to the mix of supervisory considerations. Their comparative advantage is based on IT expertise and access and control over large amounts of customer data, including extensive personal data. This also raises questions about data privacy, governance, and ethical use of data by these firms.

Culturally, these new players may be difficult for the supervisor to understand, not least because of their “move fast, break things” mindset, which emphasizes speed, experimentation, and technological disruption. This may not always sit well with the supervisor, tasked with maintaining public confidence and trust in the system, where steady development may be preferred and better understood.

Nevertheless, it would be a lost opportunity for financial innovation and financial inclusion if these new players were to be shut out just because they do not fit into the regulatory perimeter or traditional models of regulation and supervision. In fact, the moment may have already passed when they can be shut out of the financial system.

The challenge for the supervisor of the future is to continually balance multiple supervisory objectives in its approach to these new players. For example, what should be the approach for a new player with a new financial service that promotes financial inclusion but introduces more risk into the system? A trade-off like this may change over time according to the level of economic development and societal preferences. For example, developing economies may put a greater weight on the financial inclusion objective. Societies that prefer greater market innovation may allow more risky products into the market.

While being mindful of supervisory objectives and maintaining prudence, the supervisor of the future may need the confidence to experiment with different approaches towards these new players. For example, regulatory sandboxes are already an accepted way to test out new financial services, to try to ensure that financial institutions have the expertise, governance, controls, and financial resources to manage and mitigate the risks to a level that is acceptable to the supervisor. Another approach is communication and information-sharing between supervisors and their authorization and regulation colleagues. The supervisor may uncover risks in the business models of the new players that were not anticipated during the authorization process (Toronto Centre, 2024d). Giving that feedback to the authorization and regulation departments could improve the way these services and players are authorized and regulated.

The Supervisor of the Future: Skillsets and Mindset

Challenges to Supervisory Skillsets and Mindset

The fast-changing financial landscape poses challenges to the skillsets and mindset of the supervisor of the future. At a broad level, there is still a requirement for time-tested supervisory techniques; for example, face-to-face engagements with the financial institution’s board and senior management to assess the quality of governance, risk management, and internal controls.

While AI is transformative, not all the supervisory issues of the future will be related to technology. Many risks have stemmed from failures in financial institutions’ risk management and governance processes, and this will continue to be the case. The failures of Silicon Valley Bank and Credit Suisse, for example, were rooted in fundamental problems of weak financial practices and risk management rather than in the use of technology.

Nevertheless, the required skillsets of the supervisor of the future may look quite different from those of the past. Supervisors today must have a higher level of accounting and legal knowledge due to the increased complexity of financial services. This knowledge is required for supervisors to undertake business model analysis, balance sheet and financial analysis, and scenario analysis and stress testing of emerging risks.

As financial institutions digitize their data, the supervisor has an opportunity to undertake quicker and more sophisticated risk analysis of the data than would be possible from paper-based reporting. However, the supervisor of the future will need to build sufficient capacity in digital data analysis to make the most of these opportunities.

The supervisor of the future needs to build skillsets to assess both financial and non-financial (operational) risks in financial institutions. This applies in both developed and developing economies. Indeed, as financial institutions and new players in developing economies leverage technologies such as mobile phone delivery of financial services, leapfrogging developed economies in this area, the demands on supervisors in developing economies to develop technology supervision skillsets may be even more urgent.

Technological interconnectedness in the financial sector also demands swifter supervisory responses from the supervisor of the future. Financial crises and market turmoil have developed and been transmitted quickly across borders, and it will be no different with greater technological use. Supervisors will need to be more agile in their decision-making processes. This can create tension within the supervisor. The supervisor has to operate within clear legal boundaries and due processes, yet is now called upon to be more nimble, agile, and innovative in responding to the supervisory challenges of technology, including AI.

Integrating Specialist Technology Supervision Skillsets

Supervising technology risks, AI, and new players will require enhanced and specialist technology skills in the supervisor of the future. While these skillsets are not all that are needed for effective supervision, they are necessary. Building expertise in technology and hiring and retaining staff with specialist technology skillsets will be a challenge, as these same skillsets will be in demand in other government agencies and in the private sector. It will likely require additional resources and headcount.

With budget and resource constraints, the supervisor today has to be strategic in building technology skillsets in its staff. All staff need to be equipped with a working knowledge of technology, but it is not realistic to train all staff to be IT specialists (who may end up being poached by industry). Instead, it would be more practical to hire IT professionals and turn them into supervisors through robust organizational assimilation, instilling in them a strong sense of public mission. Strategic collaboration with universities in training a pipeline of these skillsets, offering graduates a clear career path in the organization, could increase the supply for both the supervisor and the industry.

Leveraging SupTech for Effective Supervision

The supervisor of the future can leverage technology in its supervisory processes through suitable SupTech applications. SupTech in itself is not enough for effective supervision, but where implemented, it can help improve the efficiency and accuracy of supervisory analysis to support decision-making (Toronto Centre, 2022b). Some examples include:

  • Efficient collection of supervisory information in a digitized format, supporting timely and more sophisticated analysis of the information in supervisory risk assessments.
  • Data visualization of trends to help supervisors quickly pick out heightened risks in specific financial institutions for supervisory follow-up.
  • Analysis of market data to detect collusive behaviour, price manipulation, and fraud in securities and other traded markets.
  • Analysis of customers and customer transactions (using data supplied by financial institutions) to detect potential money laundering and financial crime.
  • Alerts of potential mis-selling and retail misconduct through monitoring social media complaints (non-structured data) about financial institutions.
  • Stress testing and scenario analysis of risks with complex connections within the financial sector, and between the financial sector and the real-world economy; for example, climate- and biodiversity-related risks.
  • SupTech that is compatible with regulatory technology (RegTech) solutions used by financial institutions to manage regulatory compliance, streamlining regulatory and supervisory processes.
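As a minimal illustration of the peer-comparison analysis described in the first two examples above, a SupTech tool might flag institutions whose reported ratios deviate sharply from the peer group, prioritizing them for supervisory follow-up. This sketch uses purely hypothetical institution names, data, and threshold; real applications would work with far richer regulatory returns.

```python
from statistics import mean, stdev

def flag_outliers(returns: dict[str, float], threshold: float = 2.0) -> list[str]:
    """Flag institutions whose reported metric deviates more than
    `threshold` standard deviations from the peer-group average."""
    values = list(returns.values())
    mu, sigma = mean(values), stdev(values)
    return sorted(
        name for name, value in returns.items()
        if sigma > 0 and abs(value - mu) / sigma > threshold
    )

# Synthetic quarterly liquidity coverage ratios (illustrative only).
lcr = {"Bank A": 1.42, "Bank B": 1.38, "Bank C": 1.45,
       "Bank D": 1.40, "Bank E": 0.61, "Bank F": 1.37}
print(flag_outliers(lcr))  # ['Bank E'] -- an outlier for supervisory follow-up
```

The design choice matters less than the workflow it supports: the tool narrows a large reporting population down to a short list, and the supervisor then applies judgment to decide whether follow-up is warranted.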

A change management process will also be necessary to ensure that technology and SupTech applications are accessible to and accepted and used by supervisory staff.

The supervisor, constrained by time and limited available and qualified staff, may be tempted to rely excessively on SupTech and AI to make supervisory decisions. The supervisor must guard against this temptation. Just as the supervisor holds the board and senior management of financial institutions responsible for ethical use of AI, the supervisor must be accountable for its supervisory decisions and actions. The supervisor has to be able to demonstrate to the financial sector, the public, oversight bodies, and other stakeholders that its supervisory decision-making, aided by SupTech and AI, is reasonable and consistent with its legislative and regulatory framework.

Another consequence of the constraint on qualified staff is that a supervisory authority will need to carefully choose which SupTech applications to introduce, based on a cost-benefit analysis. The benefits of improved and more timely analysis of supervisory data (supporting more timely supervisory action) would have to be weighed against the investment in software, hardware, and specialized staff resources amid competing demands on supervisory resources.    

Taking Timely and Effective Supervisory Action

With technology lowering the marginal cost of collecting more data, supervisors may be tempted to collect more data, and more granular data more often, just because they can. However, having more data and more SupTech applications does not necessarily lead to more efficient or timely decision-making.

More technology may facilitate quicker analysis of large amounts of data from financial institutions, leading to quicker decisions and supervisory interventions. However, different SupTech applications using the same data from financial institutions may lead to different conclusions, potentially delaying timely supervisory action. A related risk is that supervisors place undue reliance on what is measured – in this case, on whatever metrics and warning signs are generated by a SupTech analysis of regulatory returns. They may then ignore other signals from other sources (for example, concerns arising from on-site interviews with senior management and directors). SupTech applications should supplement, not replace, other sources of information.    

For example, in the crisis simulation exercises run by Toronto Centre for supervisors, one common issue was “analysis paralysis” (Toronto Centre, 2022a). Participants asked for more and more data to gain more clarity on the situation, and in the meantime, the crisis evolved further. In a fast-changing financial landscape, with new technological developments, new players, and new risks emerging, it is important to take timely and appropriate action rather than remain stuck in analysis.

It is worthwhile for the supervisor to take a hard look at what is preventing timely supervisory action. These could be structural impediments, such as a lack of authority to act or an inadequate framework for action, leading to an unwillingness or inability to act. Such impediments would not be addressed by more technology, but by a human commitment to correct them.

Collaborating Effectively with Other Public Agencies

The supervisor of the future cannot work effectively in a silo. AI, cybersecurity, and transnational crime are all issues that demand a coordinated response from government agencies, including the supervisor. The response goes beyond national boundaries. Cooperation with supervisors in other jurisdictions will be critical. 

However, rising geopolitical tensions and economic competition may impede the willingness of countries to cooperate with one another, even when the benefits of collaboration are obvious. The supervisor of the future will have to navigate the politics of a divided multi-polar world with a changing landscape, as new economic powerhouse countries press their influence or as new political blocs form and others weaken.

Against this backdrop, participation in international and regional supervisory forums becomes even more valuable for the supervisor of the future to forge that sense of community and shared purpose in jointly tackling these emerging global challenges.

Conclusion

This Note has set out a number of challenges for the supervisor of the future, with a particular focus on those arising from the use of AI and other technology by both financial institutions and supervisory authorities.  These challenges include: 

  • Supervising financial institutions whose increasing reliance on technology exposes them to growing risks of systems and operational failures, cyberattacks, outsourcing failures, and contagion. 
  • Supervising financial institutions’ increasing use of opaque “black box” AI systems and requiring these institutions to exercise sufficient governance and controls over these systems. 
  • Supervising new players in the financial sector, in particular the Big Tech firms not fully within the current regulatory perimeter.
  • Making the best use of AI by the supervisor as part of SupTech, within the resource constraints on the supervisory authority. 

Faced with these challenges, what would be the attributes or characteristics of an effective supervisor of the future? We start with the characteristics of good supervision identified by the IMF (2010) – good supervision is “intrusive,” “skeptical,” “proactive,” “comprehensive,” “adaptive,” and “conclusive,” supported by the two pillars of the ability and willingness to act.

These characteristics of good supervision are consistent with the critical thinking, judgment, and ethics emphasized in this Note. They apply to effective supervision of all financial institutions’ activities, not only their use of AI. Elaborating on these characteristics, an effective supervisor of the future should be:

  • Adaptable, curious, innovative, and a life-long learner: An effective supervisor is curious and willing to learn about the latest technological developments in the financial sector and has an adaptable and innovative mindset to leverage SupTech to enhance supervisory processes. 
  • Continually exercising critical and strategic thinking: An effective supervisor is able to process the deluge of complex information and data (leveraging SupTech where appropriate), exercising critical and strategic thinking to identify and focus on the key risks requiring action.
  • Decisive in taking leadership and timely action, underpinned by ethics and personal integrity: An effective supervisor follows through on supervisory issues, taking the lead to identify and solve problems that impede the supervisor’s ability or willingness to act. An effective supervisor is aware that waiting to act until all desired information is in hand would delay a timely response and allow risks to grow in the financial sector. 
  • Able to communicate well and collaborate with stakeholders: An effective supervisor is able to explain complex, technical concepts to stakeholders, including non-experts, and garner support for effective supervisory interventions.   
  • Taking a broad, global perspective: An effective supervisor understands the interconnections within financial systems, including cross-border dependencies. The supervisor leverages evolving global standards and international forums for cooperation to set supervisory standards. 

These attributes would be key in the individuals aspiring to be effective supervisors of the future. From an organizational perspective, these individuals would need to be supported by the organization investing in internal capacity building, specifically in enhancing technological proficiency and financial supervision expertise.

The supervisor of the future will operate in an increasingly uncertain and potentially fractured world, beset with global climate change and geopolitical risks. AI is transforming the financial system, the economy, and society in developed and developing economies in ways that are not yet fully apparent.

The supervisor’s role of maintaining public confidence and trust in the financial system will therefore become even more critical. As AI makes the workings of the financial system more opaque, the supervisor of the future must remain vigilant to ensure that the financial system continues to work in a safe and fair manner for consumers and investors. In fulfilling this mandate, critical thinking and effective judgment, guided by ethics, will remain the core skillsets of the supervisor of the future.

 

References

Autorité des marchés financiers. Best practices for the responsible use of AI in the financial sector: issues and discussion paper. February 2024.

Bank for International Settlements. “Big techs in finance: forging a new regulatory path.” Speech by Agustín Carstens, General Manager, BIS, at the BIS conference “Big techs in finance – implications for public policy.” Basel, Switzerland, February 8-9, 2023.

Bank for International Settlements. “Sharper supervision in an era of technology races.” Keynote speech by Cecilia Skingsley, Head of the BIS Innovation Hub, at the InnovateFinance Global Summit, London, April 15, 2024a.

Bank for International Settlements. Peering through the hype – assessing suptech tools’ transition from experimentation to supervision. FSI Insights No. 58. June 2024b.

Basel Committee on Banking Supervision. Literature review on financial technology and competition for banking services. Working Paper 43. June 2024c.

European Union High-Level Expert Group on Artificial Intelligence. Ethics guidelines for trustworthy AI. April 2019.

Financial Stability Board. Artificial intelligence and machine learning in financial services. November 2017.

Financial Stability Board. Enhancing the resilience of non-bank financial intermediation: Progress report. July 2024.

International Monetary Fund. The Making of Good Supervision: Learning to Say "No". May 2010.

International Monetary Fund. Harnessing AI for global good. December 2023.

International Organization of Securities Commissions. The use of artificial intelligence and machine learning by market intermediaries and asset managers. September 2021.        

International Organization of Securities Commissions. The Use of Innovation Facilitators in Growth and Emerging Markets. July 2022.

Monetary Authority of Singapore. Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in Singapore’s Financial Sector. 2018.

Organization for Economic Co-operation and Development. Scoping the OECD AI principles: Deliberations of the expert group on artificial intelligence at the OECD. December 2019.

Toronto Centre. SupTech: Leveraging Technology for Better Supervision. July 2018a.

Toronto Centre. Supervision of Cyber Risk. December 2018b.

Toronto Centre. Cloud Computing: Issues for Supervisors. November 2020.

Toronto Centre. Operational Resilience: The Next Frontier for Supervisors? April 2021a.

Toronto Centre. Using SupTech to Improve Supervision. November 2021b.

Toronto Centre. Lessons for Supervisory Authorities from Crisis Simulation Exercises. March 2022a.

Toronto Centre. Supervisory Implications of Artificial Intelligence and Machine Learning. July 2022b.

Toronto Centre. Regulation and Supervision of Retail Payment Systems. January 2023a.

Toronto Centre. Cyber Risk: Determining and Delivering a Supervisory Strategy. July 2023b.

Toronto Centre. The Risk-based Supervision of Liquidity. February 2024a. 

Toronto Centre. Supervisory Stress Testing: A Primer. March 2024b.

Toronto Centre. Supervision of Stress Testing by Financial Institutions. March 2024c. 

Toronto Centre. Supervision of Financial Institutions’ Business Models. December 2024d.

Toronto Centre. Principles for Effective Supervisory Intervention. June 2025.



1  This Toronto Centre Note was prepared with contributions from Toronto Centre experts and members of the Board. Please address any questions about this Note to Toronto Centre.


2 The Financial Stability Board (2017) defines artificial intelligence as the “theory and development of computer systems able to perform tasks that traditionally have required human intelligence.” Machine learning, a subset of AI, is described as “a method for designing a sequence of actions to solve problems – known as algorithms – that optimize automatically through experience with little to no human intervention." For simplicity, this document uses "AI" to encompass both terms.


3 Generative AI refers to the use of AI to create new content such as text, images, music, sounds, and videos.


4 “Supervisor of the future” is used in this Note to denote both a supervisory authority and an individual supervisor, depending on the context.


5 Throughout this Note, “financial institutions” refers to all financial sectors and includes intermediaries.


6 For example, on July 19, 2024, about 8.5 million Windows systems around the world temporarily ceased operations when an update for CrowdStrike's Falcon sensor product went wrong.


7 The top five Big Tech firms by market capitalization as of October 1, 2024, are Apple, Microsoft, NVIDIA, Alphabet (Google), and Amazon. 


8 See Toronto Centre (2018b) and (2023b) on supervising cybersecurity risks.


9 Toronto Centre (2021a).


10 Deep learning is where algorithms such as artificial neural networks work in “layers” inspired by the structure and function of the brain, simulating human comprehension, learning, and decision-making. 


11 For example, Quebec’s Autorité des marchés financiers (2024) and the Monetary Authority of Singapore (2018). 


12 International efforts are underway, including the Global Financial Innovation Network, formally launched in January 2019 by an international group of financial regulators and related organizations. https://www.thegfin.com/ 


13 See Toronto Centre (2024a, b, c, d) on supervision of these issues.
