Financial regulators around the world are stepping up risk monitoring related to artificial intelligence (AI) as banks and financial institutions increase the application of this technology.
In particular, the United States, China and many other countries are racing to take the lead in developing revolutionary machine learning technologies, but regulators warn of potential risks to financial stability.
The Financial Stability Board (FSB), the G20's risk-monitoring body, said that too many institutions relying on the same AI models and specialized hardware could lead to herd behavior, creating vulnerabilities if no alternatives are available.
The FSB report emphasized that this over-reliance could create systemic risks in the financial industry.
In addition, the Bank for International Settlements (BIS) warned that central banks, financial regulators and supervisory institutions need to improve their AI capabilities.
According to the BIS, these bodies must upgrade their capacity to observe and use the technology in order to clearly understand how technological advances affect the financial system.
While AI is expected to help banks work more efficiently, the FSB notes that the technology could also increase market tensions.
However, there is currently little empirical evidence that AI-driven market correlation directly affects market outcomes.
In addition, financial institutions also face the risk of cyber attacks and AI-related fraud.
Some regions have begun to establish AI governance mechanisms. Notably, the European Union has implemented the Digital Operational Resilience Act (DORA), effective from January this year, to enhance the monitoring and control of technology risks in the financial sector.
In general, AI is considered an important innovation tool for the financial industry, but it also poses a major risk-management challenge, requiring regulators around the world to act in a timely manner to ensure the safety and stability of the system.