Artificial intelligence: economic and financial stakes in a technological breakthrough
Glossary of acronyms
Introduction
Setting the stage
Definitions
General risks
Regulation
Economic aspects
Possible consequences
Related policies
Could AI help improve economic policies?
Financial aspects
Possible consequences
Related policies
Could AI help improve financial policies?
Conclusion
Summary
Artificial intelligence (AI) is still relatively new, so its economic and financial consequences remain to be assessed. This is all the more the case for related policies, for which recommendations can only be tentative. However, in this note, we make an attempt in both directions, first setting the stage, then distinguishing between economic and financial aspects.
We do not see AI as capable of triggering an upheaval in the economic or financial environment. This goes against two common tales. The first is the “nightmare” tale, in which a large part of the working population could be replaced by machines, causing a surge in unemployment and inequalities, and large financial crises, with robots freely implementing algorithms that would amplify market movements. The second is the “fairytale”, in which robots would replace humans in the most tedious and physically exhausting tasks. In turn, this would make it possible to reduce working hours, both at daily frequency and over the entire lifetime, especially for the less skilled, and to manage portfolios in a totally passive manner, reducing risks but not returns.
We recommend that a favourable environment be provided to AI for its potential to be fully exploited. In an environment increasingly reliant on AI-powered devices and software, competition policies should ensure that rents are not fully captured by a few dominant firms, while the regulatory environment should not stifle innovation. Furthermore, labour regulation should allow enough flexibility, while education and training, tax policy and the management of human resources in the public sector should be adapted. Crucially, funding for innovative firms should be abundant and allocated by the most competent persons and institutions, which should also be individually liable for the decisions they take. This implies the development of venture capital and the establishment of a Capital Markets Union (CMU), complemented by a Savings and Investment Union (SIU). AI does not call for specific policy instruments, whether in the economic or the financial sphere. Rather, AI is both an indicator of fault lines and limitations in current public policies and a tool to partly remedy them, together with the implementation of long overdue structural reforms.
Françoise Drumetz,
University of Paris - Nanterre - EconomiX.
Christian Pfister,
University of Orléans - LEO
Glossary of acronyms
AI: Artificial Intelligence
BIS: Bank for International Settlements
FAIC: French Artificial Intelligence Commission
CMU: Capital Markets Union
CSF: Financial Stability Council (France)
ECB: European Central Bank
EIOPA: European Insurance and Occupational Pensions Authority
ETF: Exchange-Traded Fund
EU: European Union
GDPR: General Data Protection Regulation
GenAI: Generative AI
GFSR: Global Financial Stability Report
GPAI: General Purpose AI
GPU: Graphics Processing Unit
HLPE: High-Level Panel of Experts
ICT: Information and Communication Technologies
ILO: International Labour Organization
IMF: International Monetary Fund
LLM: Large Language Model
NCAs: National Competent Authorities
OECD: Organisation for Economic Co-operation and Development
UBI: Universal Basic Income
SIU: Savings and Investment Union
UN: United Nations
The authors write in a personal capacity. Their views do not commit the University of Paris – Nanterre – EconomiX or the University of Orléans – LEO.
Artificial intelligence (AI) is still relatively new, so its economic and financial consequences remain to be assessed. This is all the more the case for related policies, for which recommendations can only be tentative. However, in this note, we make an attempt in both directions, first setting the stage, then distinguishing between economic and financial aspects. We conclude by reviewing our main takeaways.
Setting the stage
After characterizing AI, we examine associated general risks and existing regulations.
Definitions
BIS, “Artificial intelligence and the economy”, BIS Annual Economic Report 2024, June 2024 [online]; Iñaki Aldasoro, Leonardo Gambacorta, Anton Korinek, Vatsala Sheeti and Merlin Stein, “Intelligent financial system: how AI is transforming finance”, BIS Working Papers, No 1194, June 2024 [online].
BIS, op. cit., p. 93.
Ibid.
BIS, op. cit., p. 94.
Ibid.
Ibid., p. 119.
China has adopted a multi-pronged approach to establish a substantial presence in the AI market, combining extensive government investment, a domestically-led tech ecosystem (Huawei’s cloud AI solutions, Baidu, Tencent, Alibaba, SenseTime, iFlytek, DeepSeek…) and sector-wide AI integration.
Andrei Hagiu and Julian Wright, “Artificial intelligence and competition policy”, International Journal of Industrial Organization, January 2025 [online].
Mario Draghi, op. cit., p. 79.
HLPE, High-Level Panel of Experts to the G7, Artificial Intelligence and Economic and Financial Policymaking, December 2024 [online].
BIS, op cit, p. 97.
Financial Stability Board, “The Financial Stability Implications of Artificial Intelligence”, November 2024, p. 5 [online].
BIS, op. cit., p. 97.
We define AI and examine if and how it differs from other technological innovations.
What is AI?
AI is a field of computer science that refers to computer systems performing tasks associated with human-like intelligence. The term “artificial intelligence” was first introduced by John McCarthy in 1956 during a conference at Dartmouth College to describe “thinking machines2”. However, major progress in the field did not occur until the 1990s, with the development of machine learning, underpinned by advances in data availability (the more data a model is trained on, the more capable it typically becomes), and in the 2000s, with increases in computing power and storage capacity. AI comprises a broad and rapidly growing number of technologies and fields3:
– Machine learning refers to techniques (algorithms and statistical models) “designed to detect patterns in the data and use them in prediction or to aid decision-making4”. Machine learning systems can learn and adapt without following explicit instructions;
– Deep learning uses neural networks modelled on the brain and composed of multiple layers which can capture increasingly complex relationships in the data. As underlined by BIS, “a key advantage of deep learning models is their capacity to work with unstructured data” (words, sentences, images…)5;
– Generative AI (GenAI) “refers to AIs capable of generating content, including text, images or music, from a natural language prompt6”, containing instructions in plain language or examples of what users want from the model. The BIS (2024, page 94) specifies that “large language models (LLMs) are a leading example of GenAI applications because of their capacity to understand and generate accurate responses with minimal or no prior examples. Therefore, LLMs and GenAI have enabled people using ordinary language to automate tasks that were previously performed by highly specialized models7”;
– AI agents are the next frontier in AI. These agents are AI systems that increasingly take on agency of their own. “They build on advanced LLMs and are endowed with planning capabilities, long-term memory and access to external tools such as the ability to execute computer code, use the internet, or perform transactions on the stock market8”. What distinguishes them from the autonomous trading agents, already deployed in high-frequency trading for example, is that they have the intelligence and abilities of cutting-edge LLMs, with the capacity to autonomously analyse data, write code to create other agents, trial-run it and update it as they see fit.
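The “detect patterns in the data and use them in prediction” definition of machine learning above can be illustrated with a minimal sketch (the data are invented for illustration, and no AI library is involved): fit a simple model to observed examples, then apply it to an unseen input.

```python
# Hypothetical training data (invented): input x and observed outcome y.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]

# "Detect the pattern": ordinary least squares fit of y ≈ a*x + b.
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

# "Use it in prediction": estimate the outcome for an unseen input.
prediction = a * 6.0 + b
print(f"y ≈ {a:.2f}x + {b:.2f}; prediction at x=6: {prediction:.1f}")  # → 1.96x + 0.14; 11.9
```

Modern machine learning replaces the straight line with far more flexible function families, such as the multi-layer neural networks of deep learning, but the fit-then-predict logic is the same.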
As stressed by the Draghi Report (2024)9, the European Union is in a weak position in the development of AI and lags behind the U.S. and China10. AI applications are built on a stack11 that begins with the specialized hardware used to train and run GenAI models. The hardware layer of the stack is dominated by Graphics Processing Units (GPUs) provided by the American company Nvidia, which also provides a software framework for GPU utilization. At the next level are companies which offer cloud computing services, which are crucial for AI development and deployment. This layer is dominated by Amazon Web Services, Microsoft Azure and Google Cloud Platform (other contenders are Nvidia’s cloud, IBM’s cloud, Alibaba’s cloud, etc.). According to the Draghi Report (2024), these three American operators account for over 65% of the EU market, while Deutsche Telekom, the largest EU cloud operator, accounts for only 2%. Regarding data used to train models, all five U.S. Big Tech companies (Apple, Amazon, Google, Meta, Microsoft) are potential data providers. Finally, GenAI models are largely dominated by American ones: the leading players – OpenAI (ChatGPT), Google DeepMind (Gemini), xAI (Grok), Anthropic (Claude), Meta (Meta-Llama) – are all based in the U.S. Since 2017, 73% of GenAI models have been developed in the U.S. and 15% in China.
The Draghi Report12 notes that “the few companies building GenAI models in Europe, including Aleph Alpha and Mistral, clearly need large investments to become competitive alternatives to U.S. players”.
Are conventional macroeconomic indicators, such as GDP, adapted to accurately measure the economic impact of AI, increasingly adopted by households (e.g. ChatGPT), corporations and the financial services industry? It is worth noting that the financial services industry has been quicker to adopt AI in its processes than non-financial firms.
According to the report of the High-Level Panel of Experts to the G713, “much like the ‘productivity paradox’ seen during the early days of computing, AI’s contribution to productivity and economic growth might not immediately appear in GDP” for three main reasons:
– “AI creates value in non-traditional ways such as quality improvements and efficiency gains that are not recognized by conventional indicators or recognized with a long lag”;
– “The creation of value may also be invisible because the service offered is for free and produces no monetary transaction, and, consequently, is not recorded in GDP”;
– The development of AI is “driving the emergence of new activities” and businesses that “do not fit neatly into existing statistical frameworks, further complicating efforts to measure its impact”.
Therefore, it would be useful to develop alternative measurement approaches to complement the usual methods and metrics.
Does AI differ from other technological innovations?
AI is often regarded as a potential general-purpose technology, like electricity or the internet, namely a technology that becomes pervasive, improves over time and generates spillover effects that can improve other technologies. However, BIS underlines two differences between AI and typical general-purpose technologies14:
– AI’s J-curve is steeper. “The adoption pattern of general-purpose technologies typically follows a J-curve, slow at first” (it took decades for electricity or the telephone to be widely adopted); “it eventually accelerates”. AI is different in this respect, displaying a “remarkable speed of adoption, reflecting ease of use and negligible cost for users, and a widespread use at an early stage by households as well as firms in all industries”;
– However, there is substantial uncertainty about the long-term capabilities of GenAI. Current “LLMs can fail elementary logical reasoning tasks” and fail at “counterfactual reasoning”. Moreover, “LLMs suffer from a hallucination problem: they can present a factually incorrect answer as if it were correct, and even invent secondary sources to back up their fake claims”. BIS (2024) stresses that “hallucinations are a feature rather than a bug in these models” because, as noted by the Financial Stability Board (FSB) (2024, page 5)15, “their outputs are the result of a stochastic process” (i.e. a statistical probability) “rather than a deep understanding of the underlying text”. The BIS (2024, page 97) asks a still open question: are these problems due to “limits posed by the size of training data sets and the number of model parameters or do they reflect fundamental limits to knowledge that is acquired through language alone16”?
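The point that LLM outputs “are the result of a stochastic process” can be illustrated with a toy sketch (the vocabulary and probabilities are invented for illustration; no real model is involved): at each step, generation samples the next token from a probability distribution, so a factually wrong continuation is produced with positive probability.

```python
import random

# Invented next-token distribution a model might assign after the prompt
# "The capital of Australia is" (probabilities are purely illustrative).
next_token_probs = {
    "Canberra": 0.70,   # correct answer
    "Sydney": 0.25,     # plausible but wrong: a potential "hallucination"
    "Melbourne": 0.05,  # also wrong
}

def sample_next_token(probs, rng):
    """Draw one token from the distribution, as generation does at each step."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)  # seeded for reproducibility
draws = [sample_next_token(next_token_probs, rng) for _ in range(1000)]

# The output is a draw, not a lookup: wrong answers appear a minority of the time.
print({token: draws.count(token) for token in next_token_probs})
```

Because the wrong tokens carry positive probability mass, occasional factually incorrect outputs are indeed a feature of the sampling process rather than a bug.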
All in all, it appears that the case for AI being a general-purpose technology is not totally well-grounded yet.
General risks
Financial Stability Board, op. cit., p. 28.
Marinela-Daniela Filip, Daphne Momferatou and Susana Parraga Rodrigues, “European competitiveness: the role of institutions and the case for structural reform”, ECB Economic Bulletin, issue 1/2025 [online].
Richard May, “Artificial intelligence, data and competition”, OECD Artificial Intelligence Papers, May 2024 [online].
Three general risks are associated with AI: climate risks, data protection issues and competition concerns.
Climate risks
The FSB notes that “AI-related energy consumption is estimated to account currently for about 1% of global energy consumption, expected to increase further in the future and could have effects on energy demand”. Furthermore, “training, developing, and running large AI models and applications require large amounts of reliable and competitive energy”. In turn, “sustained growth in AI-related energy consumption could impact climate change risks if it does not come from clean energy sources”. However, “potential mitigating factors exist”, such as “data centre-centric clean energy innovations as well as the development of a more energy efficient model training architecture17”.
Data protection issues
The importance of large quantities of data for delivering reliable AI outcomes has heightened policymakers’ attention to safeguarding personal data such as individuals’ identities, locations and habits. Additionally, AI systems can be used to mislead and manipulate individuals through, for instance, deepfakes and psychological profiling, resulting in complex and increasingly convincing forms of fraud and disinformation. Efforts to promote the safety of personal data are of the utmost importance but may need to be balanced against other considerations, such as competitiveness concerns. Filip et al. (2025)18 underline that the regulatory landscape in the U.S. is generally considered more business-friendly than that of the EU and more focused on minimising bureaucratic hurdles to encourage innovation and investment. For example, the U.S. has a less stringent data protection framework compared with the EU’s General Data Protection Regulation (GDPR). This can make it easier for U.S. companies to operate and invest in new technologies.
Competition issues
Potential risks to competition in the supply of GenAI include19, independently of any anticompetitive conduct, economies of scale or scope and network effects, which could “provide first-mover advantages and make it more difficult for new entrants to compete”, given also the inertia in user behaviour, leading markets to “tip” irrevocably towards certain firms. Market tipping occurred in the digital platform market in the early 2000s, initially driven by a first mover, followed by rapid entry and fierce competition that resulted in significant losses among the players involved. Ultimately, the losses became unsustainable, and only a small number of platforms survived, becoming the dominant players and starting to earn significant monopoly rents. According to Korinek and Vipra (2024)20, who analyse the changing structure and competitive dynamics of the rapidly expanding market for LLMs, there are similarities with the digital platform market of the 2000s and therefore reasons for concern. For example, while competition dynamics are currently fierce, the risk of market tipping is high. The cost structure of LLMs gives rise to economies of scale and scope as large as those of digital platforms. Moreover, both the costs of frontier models and their capabilities are rising far more rapidly, so the growing investment requirements for state-of-the-art models imply that the number of players a market of a given size can support is shrinking fast. A mitigating factor against this force towards natural monopoly, however, is that the market for GenAI is expected to grow as well. Many competition authorities have launched initiatives considering competition in GenAI.
Korinek and Vipra stress the complexity of the authorities’ challenge in a medium-term perspective: efforts to promote competition will have to be carefully balanced against other considerations such as AI safety: for example, releasing AI models in an open-source manner, which is desirable from an economic perspective, may impair efforts to regulate AI safety.
Regulation
With the adoption of the AI Act, whose impact will be felt from August 2025, the European Union has positioned itself at the forefront of AI regulation from an international perspective.
The AI Act21 classifies AI according to its risks: unacceptable risk is prohibited (e.g. social scoring systems and manipulative AI); high-risk AI systems are regulated (including two issues related to banks and insurance companies, see below); limited-risk AI systems are subject to lighter transparency obligations (for example, end-users must be aware that they are interacting with a chatbot); minimal risk is unregulated. The majority of obligations fall on providers of high-risk AI systems, whether they are based in an EU country or not. General-purpose AI (GPAI) model providers must provide technical documentation, instructions for use, and publish a summary about the content used for training. All providers of GPAI models that present a systemic risk must also conduct model evaluations, and track and report serious incidents. Free and open licence GPAI model providers only need to comply with copyright and publish the training data summary.
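The Act’s tiered logic described above can be summarised in a small lookup table (a paraphrase for illustration only, not legal language):

```python
# The AI Act's risk tiers as described above (illustrative paraphrase, not legal text).
AI_ACT_RISK_TIERS = {
    "unacceptable": "prohibited (e.g. social scoring systems, manipulative AI)",
    "high": "regulated; most obligations fall on providers",
    "limited": "lighter transparency obligations (e.g. end-users must know they face a chatbot)",
    "minimal": "unregulated",
}

def treatment(tier: str) -> str:
    """Return the regulatory treatment applied to a given risk tier."""
    return AI_ACT_RISK_TIERS[tier]

print(treatment("minimal"))  # → unregulated
```

The mapping makes the Act’s core design visible at a glance: obligations scale with assessed risk rather than applying uniformly to all AI systems.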
Korinek and Vipra (2024) provide a nuanced view of the AI Act, underlining that the EU finds itself in a challenging position as it seeks to regulate AI while, unlike the U.S., lacking a strong homegrown AI industry. The Act may drive away foreign companies if its rules are judged too burdensome. According to the authors, the delayed releases in Europe of the first versions of AI models by both Google and Anthropic underscore this risk. Moreover, overly stringent regulations could further hinder domestic AI producers in their efforts to catch up with global leaders. However, according to the authors, the AI Act is particularly noteworthy for its provisions on General Purpose AI (GPAI) systems.
These provisions may help increase competition by requiring GPAI providers to be more transparent about their models’ capabilities and limitations, thereby allowing smaller companies and new entrants to better understand and potentially compete with established models. On the other hand, the stringent risk management requirements and the need for extensive documentation might pose a significant burden that reduces competition and increases the entry costs of smaller companies or startups.
Economic aspects
We first study the possible consequences of a growing use of AI, then how policies could address the potential negative consequences of AI, while preserving or even enhancing the positive ones.
Possible consequences
See, e.g., the report of the French Artificial Intelligence Commission, AI: Our Ambition for France, March 2024 [online]. One can also refer to Antonin Bergeaud, The past, present and future of European productivity, contribution to the ECB forum on central banking “Monetary policy in an era of transformation”, 1-3 July 2024 [online].
Erik Brynjolfsson, Danielle Li and Lindsey R. Raymond, “Generative AI at Work”, NBER Working Paper No. 31161, April 2023, revised November 2023 [online].
Shakked Noy and Whitney Zhang, “Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence”, Science, July 2023 [online].
Fabrizio Dell’Acqua, Edward McFowland III, Ethan R. Mollick, Hila Lifshitz-Assaf, Katherine Kellogg, Saran Rajendran, Lisa Krayer, François Candelon, and Karim R. Lakhani, “Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality”, Working Paper, 2023 [online].
Ibid.
HLPE, op. cit., p. 29.
Daron Acemoglu, “The Simple Macroeconomics of AI”, NBER Working Paper No. 32487, May 2024 [online].
HLPE, op. cit., p. 35-36.
See, e.g., Gilbert Cette and Christian Pfister, “Challenges of the “New Economy” for monetary policy”, 2004, International Productivity Monitor, 8, 27-36 [online].
BIS, op. cit., p. 112.
Iñaki Aldasoro, Sebastian Doerr, Leonardo Gambacorta and Daniel Rees, “The impact of artificial intelligence on output and inflation”, BIS Working Paper No 1179, April 2024 [online].
This analysis was carried out using the European Commission’s European Classification of Skills, Competences, Certifications and Occupations (ESCO) [online]. The study used a medium level of granularity, distinguishing occupations by sub-sub-categories, e.g. 111: Members of legislative bodies and senior public administration staff.
Stefania Albanesi, António Dias da Silva, Juan F. Jimeno, Ana Lamo and Alena Wabitsch, “New technologies and jobs in Europe”, ECB Working Paper Series, No 2831, 2023 [online].
French Artificial Intelligence Commission, op. cit.
Paweł Gmyrek, Janine Berg and David Bescond, “Generative AI and jobs: A global analysis of potential effects on job quantity and quality”, ILO Working Paper 96, August 2023 [online].
Antonin Bergeaud (2024), “Exposure to generative artificial intelligence and employment: an application to the French socio-professional classification”, Working Paper.
BIS, op. cit.
French Artificial Intelligence Commission, op. cit. ; HLPE, op. cit.
BIS, op. cit., p. 109.
Ibid, p. 110.
Stefania Albanesi, António Dias da Silva, Juan F. Jimeno, Ana Lamo, Alena Wabitsch, “AI and Women’s Employment in Europe”, NBER Working Paper No. 33451, February 2025 [online].
BIS, op. cit., p. 112.
HLPE, op. cit., p. 37.
Molly Kinder, Xavier de Souza Briggs, Mark Muro, and Sifan Liu, “Generative AI, the American worker, and the future of work”, Brookings, 10 October 2024 [online].
BIS, op. cit., p. 113.
Ibid.
The possible consequences of AI for the economy are examined in three steps. First, AI can impact productivity and economic growth. Second, it can also impact employment, which is connected to productivity and growth by a simple identity expressed in growth rates (growth being itself a change): Δ Employment = Growth – Δ Productivity per person employed. Third, the impact of AI on inflationary pressures in the economy will depend on how aggregate demand evolves in comparison with supply.
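As a numerical illustration of this identity (the figures are invented): if output grows by 2.0% while productivity per person employed grows by 1.5%, employment grows by the 0.5% difference.

```python
# Illustrative figures (invented), all expressed as annual growth rates.
output_growth = 0.020        # GDP growth: 2.0%
productivity_growth = 0.015  # growth of output per person employed: 1.5%

# Δ Employment = Growth − Δ Productivity per person employed
employment_growth = output_growth - productivity_growth
print(f"{employment_growth:.1%}")  # → 0.5%
```

The identity also makes the two tales of the summary concrete: employment falls only if productivity gains outpace the growth they help generate.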
As will appear below, the degree of uncertainty about the consequences of AI for the economy increases when moving from one step to the next. Overall, this uncertainty is fuelled by the methods authors use to come up with estimates of the impact of AI on the economy. Since evidence of this impact is so far rather scant and can only be grasped at the level of tasks performed by employees or at that of firms, observations are extrapolated to the economy as a whole. As a second method, authors also often draw a parallel with previous technological breakthroughs, such as the steam engine, railways, electricity or information and communication technologies (ICT). De facto, AI can be seen as a new episode in the ICT revolution. However, the parallel with previous technological revolutions is not without difficulties since, as seen above, the case for AI being a general-purpose technology is not totally well-grounded yet.
Productivity and growth
It is usually acknowledged that AI could contribute to an acceleration of economic growth through two main mechanisms.22 First, AI makes it possible to automate and increase the production of goods and services, and thereby to increase productivity. This mechanism is expected to materialise progressively in the medium term and to be transitory. Second, in a probably more distant future, AI could facilitate the production of new ideas, thus contributing to a more permanent acceleration of growth. However, as was the case for previous technological breakthroughs, such as electricity, the latter effect would very likely be conditional on an adaptation of work and firms’ organisation.
AI and productivity
There is some evidence of AI’s positive role in enhancing productivity at the level of tasks and firms. Assessments of AI’s impact on broader economic growth, although also positive, are however more uncertain.
– At the level of firms, in one of the most often cited articles on the subject, Brynjolfsson et al.23 study “the adoption of a generative AI tool that provides conversational guidance” for 5,179 customer support agents. They find that “AI assistance increases worker productivity, resulting in a 14% increase in the number of chats that an agent successfully resolves per hour”, although with significant heterogeneity across workers, since this improvement is mainly concentrated on novice and low-skilled workers (see below). In higher-skilled professions, Noy and Zhang24 conducted “an experiment that recruited college-educated professionals to complete incentivized writing tasks”. They find that “participants assigned to use ChatGPT were more productive and efficient, and that participants with weaker skills benefited the most from ChatGPT”. In the same vein, Dell’Acqua et al.25 conducted an experiment with 758 consultants from the Boston Consulting Group and found that consultants using AI were significantly more productive, completing 12.2% more tasks on average and finishing tasks 25.1% more quickly26. At a more aggregate level, FAIC and HLPE refer to a 2023 survey by the French employment office (France Travail, formerly Pôle Emploi), based on a representative sample for France of 3,000 businesses with 10 or more employees. According to this survey, “72% of employers who have integrated AI into their operations reported a positive impact on employee performance”, primarily through “the reduction of tedious, repetitive tasks (cited by 63% of employers) and a decrease in error rates (51%)27”;
– Regarding the impact of AI on broader economic growth, there is wide dispersion in economists’ assessments. At the lower end of the spectrum, Acemoglu28, using existing estimates of exposure to AI and productivity improvements at the task level, estimates that total factor productivity (TFP, the part of output growth that cannot be imputed to an increase in the volume of either labour or capital) could increase by 0.66% over the next 10 years. This estimate could even be lowered to 0.53% over the next 10 years, because AI has so far been implemented in easy-to-learn tasks, and it will be more difficult to use it in hard-to-learn tasks. However, Aghion and Bunel29 reach much higher figures, using two different approaches. In the first approach, they draw a parallel with previous technological breakthroughs and find that productivity could increase annually by 1.3% (parallel with the electricity wave in the 1920s in Europe) or 0.8% (parallel with the digital technology wave of the late 1990s and early 2000s in the U.S.). Of course, the fact that Europe was not able to reap the same productivity gains as the U.S. in the latter case raises issues regarding its capacity to make a more fruitful use of AI over the next decade. In the second approach, they follow Acemoglu’s task-based framework but incorporate more recent empirical data. They find that AI could increase aggregate productivity growth by between 0.07 and 1.24 percentage points per year over the next 10 years, with a median estimate of 0.68 percentage points. HLPE also argues that, in comparison with electrification, the adoption curve for AI could be shorter because generative AI tools are relatively easy to integrate into various sectors (the report gives the example of the video game industry).
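To make the annual figures above comparable with Acemoglu's 10-year totals, the per-year gains can be compounded over a decade (simple arithmetic on the rates quoted in the text; no additional data):

```python
# Compound the annual productivity gains quoted above over a 10-year horizon.
estimates_pp_per_year = {
    "Aghion-Bunel, low": 0.07,     # percentage points per year
    "Aghion-Bunel, median": 0.68,
    "Aghion-Bunel, high": 1.24,
}
for label, pp in estimates_pp_per_year.items():
    cumulative = (1 + pp / 100) ** 10 - 1  # compound growth over 10 years
    print(f"{label}: {cumulative:.1%} higher productivity after 10 years")
```

Even the median annual estimate compounds to roughly 7% over the decade, an order of magnitude above Acemoglu's 0.66% total for the same horizon, which illustrates the wide dispersion in assessments noted above.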
AI and the generation of new ideas
According to FAIC, “AI could automate the generation of new ideas, or at least make it easier. It will thus help us generate new inventions and solve complex problems (…) The impact of AI on science and innovation is difficult to quantify (…) At the very least, AI will make researchers’ work easier (…) Nearly one in ten research articles already mentions the use of AI. (…) AI opens up a field of possibilities difficult to imagine. These effects lead to a permanent increase in the rate of productivity growth. The magnitude of this effect, however, is impossible to quantify30”.
AI and growth
As happened with the “New Economy”31, productivity gains from the recourse to AI should fully translate into higher growth in the long run. In the short to medium term, two caveats are in order:
– A favourable environment should be provided to AI. In particular, competition policies should ensure that rents are not fully captured by a few dominant firms, but instead that innovation benefits users, thus supporting demand for AI technology and its spreading in the economy. Furthermore, the regulatory environment should not stifle innovation and labour laws should allow enough flexibility. In addition, education and training should be adapted. Crucially, funding for innovative firms should be abundant and allocated by the most competent persons and institutions, which should also be individually liable for the decisions they take;
– Household expectations could play an important role. As shown in a study by members of the Bank for International Settlements (BIS)32, “if households and firms fully anticipate that they will be richer in the future, they will increase consumption at the expense of investment, slowing down output growth”. Conversely, if they do not foresee the boost to productivity from future AI developments, AI will significantly raise “output, consumption and investment in the short and long run”, as shown by Aldasoro et al.33.
Employment
A concern often expressed about AI is that robots could replace humans, leading to mass unemployment. However, this concern appears largely unfounded, as shown by studies both at the disaggregate and aggregate levels.
Disaggregate level
The main contribution from studies at the disaggregate level is to cast light on heterogeneities across sectors, firms, occupations, age and gender.
Using data for occupations at the 3-digit level34 in 16 European countries over the period 2011-2019, an ECB paper35 finds that, on average, employment shares have increased in occupations more exposed to AI, in particular for occupations with a relatively higher proportion of younger and skilled workers. Furthermore, the authors find that the link between changes in employment shares and the degree of exposure to AI-enabled automation is heterogeneous across countries. This could be accounted for both by the pace of technology diffusion and education and by the level of product market regulation (competition) and employment protection laws, in the sense that faster diffusion, better education and fewer rigidities in the economy favour the positive impact of AI on employment;
In the same vein, FAIC reports the results of a survey carried out annually by Insee on the effects of AI adoption by companies in France. Insee finds that total employment in companies that have adopted AI is increasing more than in companies that have not, whereas these two groups were previously following a similar trend. The survey also finds that this relationship mainly results from the creation of new jobs and that there are no differentiated effects on jobs held by men compared to those held by women. However, the impact of AI is not homogenous across professions, with the volume of jobs in “intermediate administrative and commercial professions36” negatively impacted;
Indeed, using a task-based approach, a study by the International Labour Organization (ILO)37 shows that the proportion of jobs with augmentation potential (13% globally, and 13.4% in high-income countries), and thus which could be enriched by the use of AI, is much higher than the proportion with automation potential (respectively, 2.3% and 5.1%), which could be replaced by the use of AI. In the case of France, Bergeaud38 shows that occupations that are both highly exposed to AI and composed of tasks that can rather easily be automated include accountants, telemarketers, and secretaries. Jobs in these professions could thus be replaced by AI. Conversely, occupations such as interpreter, journalist, architect, lawyer, graphic designer, or medical doctor, which combine high exposure to AI with a high proportion of tasks unlikely to be automated, look set for important changes but could take advantage of AI. There are also many professions, such as photographer, hairdresser, childminder, house help, roofer, or cook, which are not highly exposed to AI;
One of the most important factors of heterogeneity in the impact of AI on employment could be age. Along these lines, the 2024 Annual Report of the BIS39 cites the results of a recent collaborative study by the BIS with Ant Group, in line with previous studies40. This study finds that “productivity gains are immediate and largest among less experienced and junior staff41”. The BIS infers that: “The ‘digital divide’ could widen, with individuals lacking access to technology or with low digital literacy being further marginalised. The elderly are particularly at risk of exclusion42”;
Discrepancies in the effects of AI according to gender could also be important. In that regard, HLPE notes that whereas, in the past, men were more vulnerable to automation, which took place mainly in industries such as manufacturing, things could be different with AI. Indeed, women tend to perform tasks in the services industries, such as clerical support and retail, which could be automated by AI. In line with this, the study by the ILO cited above highlights that the percentage of jobs likely to be automated is double for women (3.5% globally and 8.5% in high-income countries) compared to men (respectively, 1.6% and 3.9%). More specifically, HLPE underlines that women are underrepresented in the AI industry, especially in coding, engineering, and programming, and make up only 22% of AI professionals globally, whereas this industry is expanding very rapidly. However, Albanesi et al.43, using the same approach as in the ECB paper cited above, find that, on average, female employment shares increased in occupations more exposed to AI.
Aggregate level
One can distinguish between volumes and prices (i.e. wages and, more broadly, income). Regarding volumes, would total employment increase, or would firms using AI intensively increase their market share and employment only at the expense of other firms? Indications on the effects of AI on employment at the level of companies or tasks (see above) give grounds for hope, since overall positive impacts outweigh negative ones. However, much will depend on how far the productivity increases generated by AI translate into higher growth (see above). Additionally, even when the impact on jobs and tasks is likely to be positive, employees will frequently face challenges in adjusting their skills to a new environment (see 2.2). According to the 2024 BIS Annual Report: “If AI is a true general-purpose technology that raises total factor productivity in all industries to a similar extent, the demand for labour is set to increase across the board (…). Like previous general-purpose technologies, AI could also create altogether new tasks, further increasing the demand for labour and spurring wage growth (…). If so, AI would increase aggregate demand44”;
Regarding wages and income, the impact of AI on the overall level of wages will depend on its impact on growth. Regarding the structure of wages, workers whose job is complemented by AI will be in a position to get a higher salary, provided they master the technology well enough, whereas those whose jobs can be substituted by AI will likely see their relative salary decline. Furthermore, as noted by HLPE, “the productivity gains AI brings to firms are likely to boost returns to capital, further benefiting high earners and contributing to a shift in income away from labour45”, and high-level experts in AI technology will likely be sought after in the labour market in the coming years. On the other hand, a study by the Brookings Institution46 shows that AI is likely to disrupt an array of “cognitive” and “non-routine” tasks, especially in middle-to-higher-paid professions. Furthermore, the use of AI especially increases the productivity of the young and least qualified. Consequently, whether income inequality will increase as a result of AI is an open question. What seems more likely is that employees in middle-income clerical work will see their relative position deteriorate.
Inflation
AI could impact the level of inflationary pressures and thus, in the absence of a monetary policy reaction, the level of inflation, as well as its volatility (as a reminder, monetary policy can control the volatility of inflation only in the medium term – 2 to 5 years – by stabilising it around its objective, the inflation target; in the short term – up to 2 years – it has to accept that inflation can be volatile).
Regarding the level of inflationary pressures, as hinted above when discussing the short to medium term impact of AI on growth (see above), much will depend on the expectations of economic agents. If they do not fully anticipate the gains in productivity allowed by AI, output will grow more than aggregate demand and inflationary pressures will abate. On the contrary, if they fully anticipate the acceleration of growth, and thus that they are permanently richer, they will consume and invest more. This could fuel inflationary pressures if the increase in aggregate demand outpaces that of supply.
Such mechanisms were already at work in the late 1990s-early 2000s, with the “Goldilocks Economy” or “New Economy”. In the past, expectations have adapted only partly to the acceleration in productivity induced by the rise of general-purpose technologies, so this could again be the case for AI. Furthermore, as noted by the BIS in its 2024 Annual Report, mismatches in the labour market could play a role, with an unclear overall impact on inflationary pressures: “The greater the skill mismatch (other things being equal), the lower employment growth will be, as it takes displaced workers longer to find new work. It might also be the case that some segments of the population will remain permanently unemployable without retraining. This, in turn, implies lower consumption and aggregate demand, and a longer disinflationary impact of AI47”. Cipollone48, member of the Executive Board of the European Central Bank (ECB), also underlines a potential impact on energy prices, the sign of which is overall ambiguous. On the one hand, enhanced grid management, more efficient energy consumption and better tools for price comparison (with the latter factor also at play for other goods as well as for services) could put pressure on energy prices. On the other hand, AI, as a heavily energy-consuming industry, could push energy prices up;
Regarding the volatility of inflation, the BIS 2024 Annual Report notes that: “Large retail companies that predominantly sell online use AI extensively in their price-setting processes. Algorithmic pricing by these retailers has been shown to increase both the uniformity of prices across locations and the frequency of price changes (…) This can ultimately change inflation dynamics”. However, the BIS also underlines that: “An important aspect to consider is how these effects could differ depending on the degree of competition in the AI model and data market, which could influence the variety of models used49”.
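The mechanism the BIS describes can be illustrated with a toy simulation (a deliberately simplified sketch of algorithmic repricing, not the BIS's model; all numbers are invented): a repricing rule with a tight tolerance band changes prices far more often than one with a wide band, for the same underlying cost path.

```python
import random

def simulate_price_changes(costs, threshold):
    """Count price changes for a repricer that adjusts the posted price
    whenever cost drifts more than `threshold` away from it."""
    price = costs[0]
    changes = 0
    for c in costs[1:]:
        if abs(c - price) / price > threshold:
            price = c
            changes += 1
    return changes

random.seed(0)
# A random-walk cost series over 250 periods (illustrative numbers).
costs = [100.0]
for _ in range(249):
    costs.append(costs[-1] * (1 + random.gauss(0, 0.01)))

frequent = simulate_price_changes(costs, threshold=0.01)    # algorithmic, tight band
infrequent = simulate_price_changes(costs, threshold=0.05)  # traditional, wide band
print(frequent, infrequent)  # the tighter band triggers far more price changes
```

The point of the sketch is purely mechanical: lowering the repricing threshold raises the frequency of price changes for an identical cost shock, which is one channel through which algorithmic pricing could alter inflation dynamics.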
Related policies
Acemoglu, op. cit., p. 48.
FAIC, op. cit., p. 48.
HLPE, op. cit., p. 1.
The author examines the regulation of technological innovation direction under uncertainty about potential harms, also taking into account potential benefits. He finds that ex post regulatory instruments generally outperform ex ante restrictions. He concludes by suggesting the need for informationally-responsive regulatory frameworks in regulating emerging technologies like AI. Joshua S. Gans, “Regulating the Direction of Innovation”, 7 January 2025 [online].
Bergeaud, op. cit., p. 50-51.
In that regard, Bergeaud (2024) elaborates: “First, it has a large market and a rich, educated population whose savings should be redirected towards financing innovation, particularly for young firms, through a more integrated capital market. Second, Europe has a strong capability to generate important ideas and crucial knowledge that has been the foundation of significant innovations developed elsewhere. Strengthening the link between universities and firms and redirecting public R&D expenditures towards riskier, long-term projects would help capitalize on this pool of scientific excellence. Third, Europe holds a relatively leading position in producing green innovations and reducing CO2 emissions” (p. 51).
FAIC, op. cit., p. 51.
HLPE, op. cit., p. 36.
Ibid, p. 37.
Jason Furman and Robert Seamans, “AI and the Economy”, in Innovation Policy and the Economy, University of Chicago Press, vol. 19(1), p. 161-191, 2019 [online].
Ibid, p. 181.
Eva Vivalt, Elizabeth Rhodes, Alexander W. Bartik, David E. Broockman, Patrick Krause, and Sarah Miller, “The Employment Effects of a Guaranteed Income: Experimental Evidence from Two U.S. States”, NBER Working Paper No. 32719, July 2024, Revised January 2025 [online].
Ibid.
Francesca Borgonovi, Flavio Calvino, Chiara Criscuolo, Julia Nania, Julia Nitschke, Layla O’Kane, Lea Samek, and Helke Seitz, “Emerging Trends in AI Skill Demand Across 14 OECD Countries”, OECD Artificial Intelligence Papers, No. 2, October 2023 [online].
FAIC, op. cit., p. 91-92.
Solal Chardon-Boucaud, Arthur Dozias and Charlotte Gallezot, “The Artificial Intelligence Value Chain: What Economic Stakes and Role for France?”, Ministère de l’Économie, des Finances et de la Souveraineté Industrielle et Numérique – Direction générale du Trésor, Trésor-Economics, No. 354, December 2024 [online].
“Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the digital single market and amending Directives 96/9/EC and 2001/29/EC (Text with EEA relevance.)”, Eur-Lex [online].
FAIC, op. cit., p. 50.
Ibid, p. 104.
Ibid, p. 105.
Olivier Redoulès, “La surfiscalisation du travail qualifié en France”, Document de travail, Rexecode, 13 January 2025 [online].
We first indicate some general orientations which have been suggested in the literature on AI, then examine how they could apply to specific policies or sectors, and finally ask how AI could help improve government policies. In other words, we respectively envisage the potential roles of Government in relation to AI as a strategist, as a policymaker and as a user.
General orientations
Authors who have contributed to the literature on AI find there is a need for public intervention. This is particularly the case in Europe, which is found to be lagging behind in the take-up and development of AI technologies. In the following, we present and discuss different positions which have been publicly held and supported by economists, starting with the most heavy-handed suggestions.
– Acemoglu advocates a “‘precautionary regulatory principle’ to slow down the adoption of AI technologies, primarily when they impact political discourse and democratic politics50”. In our view, this approach is not the correct one: as shown above, AI is a potentially very powerful technological innovation. There is no clear reason why its adoption should be slowed down, or even how this could be achieved, since the technology is available on a global scale. Furthermore, trying to slow down the adoption of AI at a regional level – e.g. that of Europe – would imply that the gap with the leading economies in that field – primarily the U.S. – would widen. Finally, from a more political or ethical point of view, making judgements on when and to what extent AI technologies impact political discourse and democratic politics could be a dangerous path to follow. After all, AI is an instrument and what matters is not so much the instrument used to disseminate a political discourse, but rather the discourse itself;
– FAIC insists on the importance of competition and institutions. In that regard, FAIC notes: “The difference between the ICT revolution and the AI revolution is that this time, the GAFAMs are dominant from the outset, and can therefore immediately discourage the entry of new, innovative companies (…) Hence the importance of adapting our institutions, and in particular our competition policies, so that the AI revolution can fully act as a growth driver51”. While we fully agree that competition should be supported, in the field of AI as elsewhere, the notion that institutions should play a role is in our view ambiguous.
To be clear, we do not think that our institutions should support restricting competition in Europe in response to insufficient competition in the U.S. or in the global market. This would be a lose-lose game. Above all, FAIC (2024) sets out a comprehensive set of economic policies in relation with AI (see below);
– HLPE suggests: “Governments have three roles to play in AI development: AI enablement (R&D, education, infrastructure, and financing); the use of AI in government itself; and the enactment of laws and regulations for the private sector ensuring that the use of AI technologies facilitates governments’ objectives of economic growth, stability, equity and well-being52”. As shown below, we do agree that AI could be used by government, in order to run more efficient policies. However, we have more reservations regarding its role in the regulatory field, to the extent that it would aim at many different objectives, such as “economic growth, stability, equity and well-being”, without setting priorities. Furthermore, as seen above, whether AI could impact stability, equity and well-being is rather unclear at this still early stage, making it of little use to set objectives. Hence, we think priority should be given to the adoption of AI technologies and that regulation should primarily encourage this adoption by avoiding ex ante restrictions, which risk stifling innovation, as shown theoretically by Gans53. Finally, regarding enablement, we think that funding should remain the prerogative of the private sector, except of course for uses by the public sector which derive from sovereign functions (police, justice, army and diplomacy). The same applies in part to R&D, education and infrastructure. Public policies in those fields should be adapted to facilitate the adoption of AI, reallocating existing public resources with a view to reducing them. This is both because such action is necessary, particularly with respect to the situation of public finances in France and Europe, and because, as will appear in more detail further down, AI technologies provide an opportunity to overhaul economic policies;
– Bergeaud54 proposes four broad orientations, two of which pertain to a European approach to AI. He first suggests strengthening coordination among European countries in the field of innovation, and specifically of AI technologies. As a second orientation, he proposes to rethink the allocation of R&D subsidies and focus on mission-oriented projects. He also mentions the need to enhance the adoption and generation of AI technologies, including by investing in AI education and training, by fostering an environment that supports AI startups, and by encouraging venture capital investment. He finally proposes to focus on Europe’s comparative advantages55. Keeping in mind also the need to preserve competition in the market for AI technologies, we are in full agreement with such orientations, especially because they consider AI policies in a broader perspective. Indeed, in our view, public policies should not serve as a pretext to develop new, “targeted” tools, and substitute public sector supply for that of the private sector. To the contrary, the adoption of AI technologies provides an opportunity to overhaul many Government policies, giving markets a larger role in the allocation of resources.
Consequences for specific policies or sectors
Proposals can be split into three main areas: labour market policies, education policies (including ongoing training), and industrial policies. Most of them can be found in FAIC and HLPE.
Labour market policies
– On labour law, FAIC (2024) notes that: “the legal framework defines an inescapable foundation of rights (labour law, personal data protection law, etc.), which for the moment appears sufficient to ensure a worker-friendly deployment of AI56”. Thus, no new regulation appears necessary at this stage;
– On the impact of AI on employment by gender and by age, HLPE stresses that: “It is crucial that policymakers take immediate action to remove barriers to education and skills development for women57”. Regarding employment by age, it notes that, as indicated above: “Older workers could experience disproportionately negative impacts from AI-driven automation without timely reskilling efforts. As G7 nations grapple with aging populations, AI presents risks and opportunities for managing labour market challenges. Countries such as Japan and Italy, where more than 37% of the population will be 65 or older by 2050, must examine how AI can support, rather than displace, older workers58”. While both orientations can be seen as positive, we would find it risky to try to direct the effects of technological progress, as noted above, even when this would be in favour of older workers. Furthermore, older workers can benefit from ongoing training and use AI, even if they are displaced by it at first;
– HLPE also expresses concerns about the potential impact of AI on inequality, in particular in scenarios where AI “could lead to widespread displacement of workers across skill levels, as machines become perfect substitutes for human labour. Although output and productivity would likely grow rapidly in such scenarios, the benefits may accrue primarily to owners of capital and AI technologies, potentially creating unprecedented levels of income concentration (…) This suggests that current social insurance and income distribution mechanisms, which are largely tied to employment, might need to be fundamentally reimagined”. We see this scenario as highly futuristic. Furthermore, there is no strong reason that existing policies, in the field of competition or taxation, would not suffice to tackle such issues;
– In that regard, the position expressed by HLPE seems to echo proposals to create a universal basic income (UBI). This new social benefit would partially or completely replace existing programs with a single, unconditional cash transfer to every adult. It is discussed by Furman and Seamans59. The authors note that UBI raises numerous issues. Among them is “the argument that UBI may stimulate entrepreneurship and innovation. But there is little evidence of heightened entrepreneurship and innovation in regions with UBI-like programs, such as oil rich areas that provide income to most residents, including in Alaska, Norway, and some Gulf states60”. A study also leverages an experiment sponsored by Sam Altman, the chairman of OpenAI and a staunch supporter of UBI, “in which 1,000 low-income individuals were randomized into receiving $1,000 per month unconditionally for three years”61. The study finds that “the transfer caused total individual income excluding the transfers to fall by about $2,000/year relative to the control group and a 3.9 percentage point decrease in labour market participation62”. It is notable that the transfers had only a limited impact on labour market participation. Consequently, they would not discourage wage-earners displaced by AI from looking for a job and would rather amount to an increase in unemployment benefits, unless these benefits are reduced correspondingly. Furthermore, to our knowledge, no study has yet taken into account the cost of financing UBI in terms of increased taxation and the corresponding loss of growth.
Education
– On initial training, FAIC expands on the results of an OECD study63 to come to the conclusion for France that job vacancies in AI development and AI deployment (so-called “X + AI” profiles) should represent 1% and 0.5% respectively of all vacancies in 2034, with a need for around 56,000 positions per year in AI development and 25,000 positions per year in AI deployment at this horizon. In turn, this implies that the number of places in specialized AI training courses at graduate level would have to at least triple over the same period and that around 15% of all higher education students would have to be trained each year in “X + AI” skills. Furthermore, FAIC notes that it would be necessary to train all students on specialized IT courses in the AI issues relevant to their activity, so that they can best deploy AI solutions within companies. Finally, in order to make specialized courses accessible and attractive, it recommends generalizing the deployment of AI in all higher education courses and acculturating students in secondary education. While we are not in a position to confirm the precise figures, all these orientations seem quite reasonable to us: there will not be a widespread adoption of AI in the economy unless a very significant effort is made to train future workers;
– On ongoing training, the FAIC conducted a survey from mid-December 2023 to mid-January 2024 to better understand expectations and fears among the public regarding AI. FAIC notes that one result of the survey which stands out is the need for information and training on AI in the professional environment. The Commission thus very logically recommends investing in continuing vocational training for the workforce and in training schemes around AI. We can only support such a conclusion, while noting that it does not imply that more resources should be dedicated to ongoing training, but rather that existing resources should be redeployed. In particular, ongoing training would be very useful in the public service (see below).
Industrial policies
While recourse to direct protectionism is rightly rejected by all authors, since it would discourage the adoption of AI technologies in which many countries or areas, including Europe, already lag, FAIC (2024) advocates the implementation of forceful industrial policies in France. Such policies would comprise four “pillars”: the financing of private firms, the building of sovereign computing power, the access to data, and the attraction of talents.
1) Financing of private firms: FAIC suggests developing venture capital, but regrets that venture capital investments in AI are currently insufficient. The report cites a volume of $2.8 billion in 2022, against $56.8 billion in the U.S., and suggests that $15 billion should be targeted for France. According to the report, this could be achieved by redirecting part of private savings (changing the tax incentives for life insurance policies or the way supplementary pensions are managed). The report also mentions the establishment of a true capital market union (CMU) in Europe, enhancing the attractiveness of foreign investment funds, so that they establish themselves in Paris and not just in London, and the creation of a “France & AI” investment fund. This fund would mobilize €7 billion in corporate private equity and €3 billion in public support. Such a cocktail of measures does not seem fully consistent to us. Indeed, some measures, such as the development of venture capital and the establishment of a CMU, complemented by a Savings and Investment Union (SIU), seem to place trust in market mechanisms. We fully support them, as general orientations, and not specifically in the context of AI. However, playing with the rules for asset allocation by life insurance companies or supplementary pensions, or setting up a “France & AI” investment fund, would imply a return to policies in vogue in the post-WWII period, which led to a fragmentation and under-development of French capital markets and the misallocation of capital under government guidance. Finally, while we think that foreign investment funds are welcome in France, we do not view this as a precondition for them to invest in the country;
2) Sovereign computing power: FAIC is of the opinion that: “Given the scale of the investments required, the associated risks and the timeframes inherent in the development of a semiconductor industry optimized for AI, public intervention would appear to be necessary (…) we propose to act simultaneously on the supply and demand sides of computing. On the supply side, we recommend speeding up the expansion of French and European “exascale” supercomputers, launching a group purchasing operation for the ecosystem in the short term, and setting a target for the establishment of computing centres in Europe, with a public guarantee for the use of computing power, as well as support for implementation and electrical connection (…) On the demand side, an AI tax credit would support research and development projects in the rental of computing power, on condition that they use a computing centre established in France64”. The report also provides estimates of €7.7bn for the development of semiconductor components in Europe and €1bn of installed computing power, seemingly with public money. While we think the public sector can have a role in coordinating private sector initiatives, as was the case with the announcement on 7 February 2025 of an investment of €30-50 billion by the United Arab Emirates (UAE) in a data centre in France, we do not believe that public intervention should go further, except for the hosting at the European level of sensitive data, in case no assurance of protection against intrusion by foreign countries can be provided by a private contractor. Previous initiatives in that direction, such as “Plan Calcul”, have delivered very slim results at a hefty cost in terms of public and private spending;
3) Access to data: FAIC makes a well-grounded distinction between private data (and more specifically health data), heritage data and data protected by literary or artistic property rights. Access to data raises many ethical and legal issues, which are not in the scope of this note. It also raises issues in relation to property rights, with economic consequences. As noted by Chardon-Boucaud et al.65, there is currently a tension between, on the one hand, the need for wide access to copyright-protected content to train AI systems, which is regulated by Directive (EU) 2019/79066, and, on the other hand, the possibility offered by the same text for rightholders to refuse permission for the use of their content. In the same vein, FAIC notes that numerous lawsuits have been filed in the USA for the unauthorized and therefore unpaid use of copyrighted content when training generative AI67. However, neither Chardon-Boucaud et al. nor FAIC provides clear indications on how to reduce this tension;
4) Talents: FAIC considers that: “of the three to five thousand highly qualified international profiles likely to have a significant impact on the growth of the AI ecosystem, France needs to attract between 10 and 15%68”. It is also of the opinion that “the State must create the conditions to facilitate the installation of qualified profiles, notably through assistance with administrative formalities: visas, children’s schooling, information on the tax regime for inpatriates, etc69”. Finally, regarding public employment of researchers, it recommends that an “AI Exception” be set up, inter alia, in order to make it possible to raise remuneration, and to at least double funding for public research in AI. We think that the issues raised here are more general and that there are two main problems to be solved in order to attract not just foreign but also domestic talents into research. The first problem has to do with the over-taxation of skilled work in France70. The second one is the below-market remuneration of skilled work in the French public sector, as exemplified by the sharp fall in the number of applicants for teaching in high schools in the past 35 years71, showing that this is a long-standing and structural problem. Solving these problems, instead of further fragmenting the French labour market, should in our view be a priority, to foster economic growth in the medium to long term.
Could AI help improve economic policies?
Chiara Osbat, “What micro price data teach us about the inflation process: web-scraping in PRISMA”, SUERF Policy Brief, No 470, November 2022 [online].
Michele Lenza, Ines Moutachaker and Joan Paredes, “Forecasting euro area inflation with machine learning models”, ECB, Research Bulletin, No 112, 17 October 2023 [online].
FAIC, op. cit., p. 78.
HLPE, op. cit., p. 44-45.
HLPE, op. cit., p. 39.
There are two main ways in which AI could help improve economic policies: by better understanding, and making the public understand, economic developments, and by increasing the productivity of government employees and efficiency of public actions. We also ask whether some coordination would be warranted at the global level.
Better understanding of economic developments
Cipollone gives the following examples regarding the use of AI at the ECB, which also apply to government missions, especially in the economic and financial areas:
– In the field of statistics, AI enables identification and prioritisation of anomalous observations and outliers that require further attention, assessment and potential treatment. LLMs also enable the use of unstructured data like text, image, video or audio, to complement and enhance existing data collections;
– In the field of economic analysis, ECB staff use AI to web-scrape data and nowcast inflation72. They also use machine learning models, which are able to capture non-linearities, to forecast euro area inflation, with performance close to that of conventional forecasts73;
– In the field of communication, AI makes it possible to analyse vast volumes of media reporting and market commentary very rapidly, to facilitate and speed up translations to and from the different languages of the euro area, and to broaden the ECB’s reach by simplifying key messages for targeted communication aimed at less financially literate audiences.
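To illustrate why models that capture non-linearities can add value in inflation forecasting, the sketch below fits both a linear AR(1) and a k-nearest-neighbour regression to a synthetic inflation series whose persistence is state-dependent (deviations above 2% are persistent, deviations below are quickly reabsorbed). All data and parameter choices are invented for the illustration; this is not the ECB's actual model.

```python
import random
import statistics

random.seed(1)

# Synthetic "inflation" series with state-dependent persistence around a
# 2% reference point (purely illustrative data).
def next_pi(pi):
    rho = 0.9 if pi > 2.0 else 0.0
    return 2.0 + rho * (pi - 2.0) + random.gauss(0, 0.3)

series = [2.0]
for _ in range(499):
    series.append(next_pi(series[-1]))

x, y = series[:-1], series[1:]  # lagged inflation -> next-period inflation

# Linear benchmark: AR(1) fitted by ordinary least squares.
mx, my = statistics.mean(x), statistics.mean(y)
beta = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
       sum((a - mx) ** 2 for a in x)
alpha = my - beta * mx

# Non-linear alternative: k-nearest-neighbour regression on the lag,
# which can pick up the kink at 2% that a single slope cannot.
def knn_forecast(pi, k=25):
    nearest = sorted(range(len(x)), key=lambda i: abs(x[i] - pi))[:k]
    return statistics.mean(y[i] for i in nearest)

# The linear model applies one persistence coefficient everywhere...
print(round(beta, 2))
# ...while kNN forecasts high inflation as markedly more persistent.
print(round(knn_forecast(3.0), 2), round(knn_forecast(1.5), 2))
```

The design choice mirrors the intuition in the text: a linear model averages the two regimes into a single coefficient, whereas the non-linear learner recovers the asymmetry directly from the data.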
Increasing the productivity of government employees and the efficiency of public actions
HLPE notes that there are barriers to AI adoption in Government and public administrations, among them a skills deficit, difficulty attracting and retaining AI talent given low civil-service wages, often outdated infrastructure, and complex regulations. However, this will have to change, not just because of AI, but also more generally to adapt public services and make them more efficient. For example, rules on public procurement could be simplified.
Indeed, the use of AI provides a good opportunity to increase the productivity of public administrations, provided an effort is made to train civil servants and workers in local government. In that regard, FAIC rightly recommends to “Strengthen the technical capacity and infrastructure of public digital in order to define and scale a real transformation of public services through digital and AI, for agents and at the service of users74”. HLPE gives the example of the management of government benefits, for which “AI could increase the productivity of government employees in processing applications, verifying eligibility, and ensuring compliance with program rules, as AI systems equipped with natural language processing and advanced data analytics can automate significant portions of these workflows. AI can review applications for completeness, cross-reference data with other government records to verify eligibility, and even answer routine inquiries from applicants through AI-powered chatbots. This would allow governments to handle growing caseloads with fewer staff while improving service speed and accuracy75”. Other examples given by HLPE are in tax design and collection, where “AI can analyse large datasets to identify patterns indicative of fraud or non-compliance76”, and in expenditure efficiency, where it makes it possible to track and evaluate the performance of public programs in quasi real time.
Coordinating at the global level
As noted by Meyers and Springford: “international efforts to establish ‘rules of the road’ for AI, will go some way to creating more confidence in the technology. But these measures primarily rely on trust, so they benefit well-established firms with good reputations. Regulation has an important role in making globally-agreed rules enforceable77”. Thus, some degree of regulation, the definition of which would be coordinated at the global level, could support the spread of AI technologies in a competitive environment. This could in particular be the case with the adoption of standards, which would facilitate interoperability. A coordinated approach could also help limit the cost of monitoring the use of AI technologies and their producers and facilitate the exchange of information between the relevant domestic authorities, thereby making these public services more efficient. An existing international organisation, such as the United Nations Science and Technology Organization (UNSTO), could be entrusted with that mission.
Financial aspects
Historically, the financial sector, which relies heavily on big data and process automation, has been at the forefront of technological advancements and, correlatively, has been among the first to experience the challenges they pose. Machine learning (ML) techniques were prevalent in the financial sector long before the emergence of GenAI. Even with limited capabilities, computational advances arising from standardized ML models may have had important consequences for financial stability, as exemplified by the U.S. stock market crash of 198778. The marked decline in stock prices was attributed in large part to the dynamics created by rule-based algorithms that placed automatic sell orders when security prices fell below pre-determined levels. The crash led regulators to develop new rules, known as circuit breakers, allowing exchanges to halt trading temporarily in instances of exceptionally large price declines79. We first study the possible consequences of a growing use of AI in the financial sector, then how policies could address its potential negative consequences.
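The 1987-style dynamic described above, automatic sell orders feeding further declines until a circuit breaker halts trading, can be sketched in a toy simulation. Every parameter below (shock size, stop level, per-sale price impact, breaker threshold, number of identical sellers) is invented for illustration, not calibrated to any real market.

```python
# Toy simulation: identical rule-based stop-loss orders amplify an
# initial price shock until a circuit breaker trips. All parameters
# are illustrative only.

def simulate(start=100.0, shock=-0.05, stop_level=97.0,
             impact_per_seller=0.01, sellers=20, breaker=-0.15):
    price = start * (1 + shock)           # exogenous initial shock
    for _ in range(sellers):
        if price / start - 1 <= breaker:  # circuit breaker trips
            return round(price, 2), True  # trading is halted
        if price < stop_level:            # a stop-loss order fires
            price *= 1 - impact_per_seller  # forced sale pushes price down
    return round(price, 2), False

final_price, halted = simulate()
print(final_price, halted)
```

In this stylised run, a 5% shock pushes the price below the common stop level, each triggered sale deepens the decline and trips further stops, and trading halts once the cumulative fall exceeds 15%, which is exactly the feedback loop that motivated circuit breakers.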
Possible consequences
FSB, op.cit., p. 1.
International Monetary Fund, “Powering the digital economy”, IMF Departmental Papers, October 2021 [online].
Regulatory technology (RegTech) refers to the use of technologies by regulated financial entities to digitalize compliance and reporting processes to meet their regulatory requirements, among others to calculate regulatory capital and to support AML/CFT – anti-money laundering and combatting the financing of terrorism – compliance, thus improving compliance quality and reducing costs.
Juan Carlos Crisanto, Cris Benson Leuterio, Jermy Prenio and Jeffery Yong, “Regulating AI in the financial sector: recent developments and challenges”, FSI Insights n°63, December 2024 [online].
International Monetary Fund, Global Financial Stability Report, Chapter 3, “Advances in Artificial Intelligence: Implications for capital market activities”, October 2024 [online].
IMF, op.cit., p. 83.
Ibid.
Systemic risk refers to the possibility that an event at the financial institution level could trigger severe instability or collapse in the entire financial system.
HLPE, op. cit.; FSB, op. cit.; GFSR, op. cit.; Crisanto et al., op. cit.; Aldasoro et al., op. cit.; Georg Leitner, Jaspal Singh, Anton van der Kraaij and Balázs Zsámboki, “The rise of artificial intelligence: benefits and risks for financial stability”, European Central Bank Financial Stability Review, May 2024 [online].
Herding behaviour refers to the tendency of investors or traders to follow the actions of their peers rather than making independent decisions based on their own analysis and information.
Procyclicality refers to the dynamic interactions between the financial and the real sectors of the economy. These mutually reinforcing interactions tend to amplify business cycle fluctuations and cause or exacerbate financial instability.
FSB, op.cit., p. 15.
HLPE, op.cit., p. 54.
FSB, op. cit., p. 15.
HLPE, op.cit., p. 53.
Ibid.
HLPE, op.cit., p. 56.
FSB, op.cit., p. 25.
Ibid.
Ibid., p. 26.
Leitner et al., op.cit.
The wider use of AI has the potential to bring transformative benefits to financial services firms and to capital markets in terms of increased productivity, greater efficiency, improved risk assessment and lower costs for consumers, but may also amplify existing risks.
AI use cases in the financial sector
As noted by the FSB, “the lack of comprehensive data on AI adoption by financial services firms complicates an in-depth assessment of use cases. Available evidence suggests a notable acceleration in the adoption of AI in recent years80”, notably in the banking sector, which had been lagging behind insurance companies and the investment management industry for reasons related to uncertainty surrounding regulatory expectations (accountability, ethics, and the opacity of AI models, particularly for consumer-related applications) and to the proprietary nature of banking data. By contrast, the investment management industry has used the technology for decades in trading, client services and back-office operations, to manage large streams of trading data and to execute high-frequency trading81. Most use cases in the banking sector currently seem to focus on enhancing internal operational efficiency and on improving regulatory compliance (RegTech)82, rather than on core business or high-risk activities. However, GenAI technologies’ growing accessibility could facilitate more rapid integration.
Banks and insurance companies
AI can improve credit underwriting – by increasing the accuracy of predictive models used in credit scoring, enhancing lenders’ ability to assess default risk – as well as client relations (chatbots, AI-powered mobile banking), back-office support, risk management and product placement. According to Crisanto et al.83, who provide a point-in-time snapshot of the use of AI based on feedback from selected industry players and on industry surveys, banks have accelerated their investments in AI within their organizations, notably due to the expected wider adoption of GenAI. Much of the increased spending relates to IT infrastructure and AI talent headcount, while banks are cutting headcount elsewhere, suggesting that expected AI productivity gains may replace human resources. Banks’ reported use cases currently relate predominantly to back-office and operational functions. Reported in-production use cases for core, external-facing business activities are less frequent and concern mostly larger banks. Insurance companies appear to be more advanced than banks: they have already been using AI to facilitate processes such as underwriting, risk assessment and claims management.
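The credit-scoring use case can be made concrete with a minimal sketch, not any bank’s actual model: a tiny logistic regression trained by stochastic gradient descent on two made-up features (debt-to-income ratio and number of prior late payments) to estimate a probability of default.

```python
# Illustrative credit-scoring sketch: logistic regression trained by
# plain stochastic gradient descent on synthetic borrower histories.
import math

def train(data, lr=0.5, epochs=2000):
    """Fit weights and bias by minimising log-loss sample by sample."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            p = 1 / (1 + math.exp(-(w[0]*x[0] + w[1]*x[1] + b)))
            err = p - y                 # gradient of log-loss wrt logit
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

def default_prob(w, b, x):
    """Predicted probability of default for a new applicant."""
    return 1 / (1 + math.exp(-(w[0]*x[0] + w[1]*x[1] + b)))

# (debt_to_income, late_payments) -> defaulted (1) or repaid (0)
history = [((0.1, 0), 0), ((0.2, 0), 0), ((0.3, 1), 0),
           ((0.8, 2), 1), ((0.9, 3), 1), ((0.7, 2), 1)]
w, b = train(history)
print(default_prob(w, b, (0.15, 0)) < 0.5)  # low-risk applicant
print(default_prob(w, b, (0.85, 3)) > 0.5)  # high-risk applicant
```

Real underwriting models use far richer features and validation, but the structure – fit on repayment history, score new applicants – is the one the paragraph describes.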
Capital markets and financial intermediaries
The use of AI could have considerable benefits for the efficiency of capital markets. According to an outreach conducted for the Global Financial Stability Report (GFSR) of the IMF84, financial institutions (dealers, asset managers, investment funds, hedge funds, market infrastructure firms…) have been actively using ML and other AI-related computation methods in their investment processes for 20 years, while “mainstream use of GenAI only dates back a few years”. While investment strategies driven by GenAI, such as robo-advising or AI-based exchange-traded funds (ETFs) – “where AI is used to construct and adjust an ETF’s portfolio” – are still in their early phases, robo-advisors’ assets under management have grown explosively and are projected to grow further. Participants in the outreach expect, over a 3-to-5-year horizon, “an increase in the use of AI in trading and investment, and a higher degree of autonomy of AI-based decisions, especially in the equity market85”. However, “although the trend is towards less human interaction, complete autonomy is not expected soon: more significant changes are a medium to long-term concern86”.
Risks arising from the financial sector’s use of AI
Systemic risks and other types of risks, related for example to novel usage by malicious actors or to customer discrimination, could harm financial stability.
Systemic risks87
Five AI-related vulnerabilities are especially noteworthy for their potential to raise systemic risk88. These five vulnerabilities relate to concentration and competition between third-party providers and/or between banks, the opacity of AI models, the risk of automated herding89 and procyclicality90, market manipulation and cyber risks.
1) Concentration, competition and AI risks: (i) overreliance of financial institutions on a limited number of AI suppliers (i.e. producers of AI accelerator chips, cloud services, pre-trained third-party AI models and large financial datasets critical to train these models) would increase the financial system’s dependency on AI-related third-party providers. In turn, as noted by the FSB, this “market concentration among technology and AI services providers could increase domestic and international interconnections as major service providers are only located in a few jurisdictions, exposing financial institutions to losses arising from operational impairments and supply chain disruptions affecting key vendors91”; (ii) it may be easier for larger financial firms with well-established data infrastructure and third-party networks than for smaller ones to make the necessary investments to integrate AI in their business structures, “leading to a concentration of AI capabilities in a few large financial institutions. This technological concentration could create a new form of systemic risk where the failure of a single institution’s AI system could have outsized effects on the entire financial system92”; (iii) ultimately, this could result in fewer financial institutions remaining on the market, increasing too-big-to-fail concerns;
2) Opacity of AI models: (i) the black box nature of AI models is due to their complexity and non-linearity. Algorithms may uncover unknown correlations in data sets that may not be easily understandable because the underlying causality is unknown; (ii) these models may perform poorly in the event of major and sudden movements in input data resulting in the breakdown of established correlations (for example, in response to a crisis). This could potentially motivate inaccurate decisions, with adverse outcomes for financial institutions or their customers; (iii) the “limited explainability of some AI methods and the difficulty of assessing data quality underlying more widely used AI models could increase model risk for financial institutions93”. Correlatively, this might make it difficult for supervisors to assess their appropriateness or to spot systemic risks in time;
3) Automated herding and procyclicality: (i) if increasingly similar models are used to understand financial market dynamics, this could contribute to increased market volatility, and to illiquidity during times of stress; (ii) herding risk would be exacerbated with the use of AI agents that can make rapid large-scale decisions leading to unintended market movements. The “speed, combined with the potential of AI models to react similarly to market signals create a risk of automated herding behaviour and greater procyclicality94”. However, circuit breakers would in any case halt trading in the case of exceptionally large price fluctuations.
4) Market manipulation: HLPE notes that “AI systems could enable more sophisticated forms of market manipulation. Their ability to process vast amounts of data and identify subtle patterns could be exploited to create or exploit market inefficiencies at a scale and speed that are difficult to detect and counter95”;
5) Cyber risks: (i) cyber incidents can pose systemic threats to the financial system if many financial institutions are affected at the same time (when a widely used program or service provider is involved) or if an incident at one entity propagates to the broader system; (ii) HLPE notes that, on the one hand, AI technologies can enhance cyber defence by “analysing vast amounts of data to identify anomalies and potential security breaches more quickly and accurately than traditional methods”, while on the other hand, they also enable more sophisticated cyberattacks: “AI can be used to create more convincing phishing attempts, automate the discovery of software vulnerabilities, or launch more effective distributed denial-of-service (DDoS) attacks96”.
Systemic risks will evolve with the level of supplier concentration, the pace of innovation and the degree of AI integration in financial services. Machine learning has already added new dimensions to financial stability concerns, mostly owing to the small number of AI suppliers – hence a higher risk of uniformity and procyclicality – and to the black-box nature of AI models. GenAI’s characteristics – its ability to operate and make decisions independently, its speed and ubiquity compared to machine learning – are likely to exacerbate financial stability concerns deriving from the uniformity of datasets, model herding and network interconnectedness. The use of AI agents, characterised by direct action with no human intervention, may amplify these risks further, implying that the goals of the applicable regulations have to be explicitly spelled out.
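The concentration vulnerability in point 1) is routinely quantified with a Herfindahl-Hirschman index (HHI), the sum of squared market shares, where values above 2,500 on the conventional 0-10,000 scale are commonly read as highly concentrated. The supplier shares below are hypothetical, purely to show the arithmetic:

```python
# HHI over hypothetical AI-supplier market shares (percentages).
# Sum of squared shares; the conventional scale runs from near 0
# (atomistic market) to 10,000 (monopoly).

def hhi(shares_pct):
    assert abs(sum(shares_pct) - 100) < 1e-9  # shares must sum to 100
    return sum(s * s for s in shares_pct)

cloud_ai = [40, 30, 20, 10]   # few dominant third-party providers
diversified = [10] * 10       # many comparable providers

print(hhi(cloud_ai))     # 3000 -> highly concentrated
print(hhi(diversified))  # 1000 -> unconcentrated
```

A market dominated by a handful of chip, cloud or model providers sits well into the concentrated range, which is the structural feature behind the interconnection risk the FSB flags.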
Other risks
Four other risks could impact financial stability: fraud and disinformation, customer discrimination, customer data leakage and macroeconomic conditions resulting from a rapid uptake of AI.
1) Fraud and disinformation: (i) AI has already facilitated fraud schemes; moreover, “GenAI’s capabilities in voice and video-based generation97” could be used to “generate deepfakes, to bypass security checks, defraud customers or to create false insurance claims98”; (ii) “GenAI could enable more sophisticated disinformation campaigns that have financial stability implications if they cause acute crises, such as bank runs99”;
2) Customer discrimination: AI in customer-facing operations (communication, complaint management, advisory functions – using digital assistants or robo-advisors – or customer segmentation and targeting) may improve the product-to-customer match, but its use could also lead to customer discrimination if it goes unchecked. Algorithmic bias may lead to discriminatory customer treatment and be difficult to identify and monitor100;
3) Customer data leakage: the issue of data leakage is particularly sensitive in the case of AI trained on customer-specific data, raising consumer protection considerations, and could also expose institutions to increased reputational or legal risk;
4) Macroeconomic conditions: in a medium to long-term perspective, advances in the penetration of AI could drive changes in the economy, affecting the sources of income of certain categories of workers and firms. In turn, this may amplify weaknesses in the financial sector by increasing corporate delinquencies and debt-to-income ratios, leading to financial stability risks.
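The algorithmic-bias concern can be illustrated with one of the simplest fairness checks: comparing approval rates across customer groups (the demographic parity difference). The decisions below are synthetic, and real audits use much richer metrics, but a gap of this size is exactly the kind of signal that would trigger a human review.

```python
# Minimal fairness check on synthetic credit decisions: compare
# approval rates between two customer groups. A large gap is one
# simple red flag for algorithmic bias; real audits go much further.

def approval_rate(decisions, group):
    hits = [approved for g, approved in decisions if g == group]
    return sum(hits) / len(hits)

# (group, approved) pairs from a hypothetical AI credit model
decisions = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
             ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

gap = approval_rate(decisions, "A") - approval_rate(decisions, "B")
print(round(gap, 2))  # 0.5 -> flag for human review
```

Monitoring of this kind is what makes bias "difficult to identify" only in the absence of systematic checks: the metric itself is cheap to compute once decisions are logged by group.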
Related policies
FSB, op.cit., p. 29.
Crisanto et al., op.cit.
Denis Beau, “The foundations of trustworthy AI in the finance sector”, speech by Denis Beau, First Deputy Governor of the Banque de France, February 2025 [online].
Crisanto et al., op.cit., p. 31.
Beau, op.cit.
We first indicate the state of play regarding AI regulation and regulatory guidance in the financial sector, then consider the need for a European/international coordination/cooperation. Finally, we ask how AI could help improve supervisory and regulatory policies.
To regulate or not to regulate? AI regulation and regulatory guidance in the financial sector
“Existing financial policy frameworks address the vulnerabilities associated with AI adoption101” and no ad hoc tools are needed.
Crisanto et al. note that national authorities in many jurisdictions have introduced cross-sectoral AI-specific policies but financial authorities have been less active in developing specific regulations, perhaps because
1) they generally follow a technology-neutral approach, given the evolving character of the technology;
2) and/or because the risks AI poses are already familiar to financial authorities, even if the use of AI may heighten them (see above). Therefore, the common themes of cross-sectoral AI-specific guidance are already broadly covered in existing financial regulation. Of course, additional work may be needed to ensure that existing financial policy frameworks are sufficiently comprehensive. By contrast, governments, with the EU at the forefront, are coming up with legislation or regulations to ensure that AI is safely used, given its societal implications (equality, privacy and the environment)102.
The EU has positioned itself at the forefront of AI regulation globally, adopting the world’s first comprehensive legal framework, whose impact will be felt from August 2025103. The AI Act will affect the financial sector in a number of ways. The regulation distinguishes between several levels of risk; the “high-risk” category, which forms the core of the text, applies to the financial sector in at least two respects: AI-based customer creditworthiness assessments by banks when granting credit to individuals, and pricing and risk assessments in life and health insurance. Both are considered high-risk and will therefore have to comply with heightened requirements for such AI applications, which are expected to be further developed by European standardisation bodies.
Subsequently, national competent authorities (NCAs) will need to ensure that financial institutions comply with the new AI governance and risk management requirements and standards, while assessing the extent to which more detailed sectoral guidance may be required. The remaining uses of AI in the financial services sector would be largely developed and used under existing legislation, without additional legal obligations arising from the AI Act. However, given that the use of AI in claims management, anti-money laundering or fraud detection in the financial services industry is already extensive, supervisors need to assess the extent to which existing rules are sufficient and where additional guidance may be needed.
Is there a need for a European/international cooperation/coordination?
Cooperation and coordination needs arise at the regional and the international levels.
Thus, “the presence of various AI definitions across jurisdictions needs to be addressed by international collaboration. The lack of a globally accepted definition of AI prevents a better understanding of AI use cases in the global financial sector and the identification of specific areas where risks may be heightened104”. As such, international public-private collaborative efforts can be geared towards agreeing on a lexicon for AI and towards regulatory and supervisory frameworks that can adapt to the rapid advancements in AI technology.
By its very nature, the regulation of AI is a global issue105. This underlines the value of the many international initiatives (FSB, OECD, UN, etc.), which must now be brought together.
There is a need to develop effective coordination to provide a supervisory framework for the use of AI, notably at the European level through the creation of a common methodology for auditing AI systems in the financial sector, so as to reduce microprudential risks.
Could AI help improve financial policies?
Salman Bahoo, Marco Cucculelli, Xhoana Goga and Jasmine Mondolo, “Artificial Intelligence in Finance: a comprehensive review through bibliometric and content analysis”, Springer Nature Business and Economics, January 2024 [online].
Andrea L. Eisfeldt and Gregor Schubert, “AI and Finance”, NBER Working Paper n° 33076, October 2024 [online].
Bahoo et al., op. cit.
BIS, op. cit., p. 5.
Kenton Beerman, Jermy Prenio and Raihan Zamil, “SupTech tools for prudential supervision and their use during the pandemic”, FSI Insights on policy implementation, December 2021 [online] ; Aldasoro et al., op. cit.
Detecting potential anti-money laundering (AML) and combating the financing of terrorism (CFT) violations is one field where SupTech seems more advanced, according to Rodrigo Coelho, Marco de Simoni and Jermy Prenio, “Suptech applications for anti-money laundering”, FSI Insights on policy implementation n°18, August 2019 [online].
Financial Stability Board, “The use of supervisory and regulatory technology by authorities and regulated institutions: market developments and financial stability implications”, October 2020 [online].
Elizabeth McCaul, “From data to decisions: AI and supervision”, Article by E. McCaul, member of the Supervisory Board of the ECB for Revue Banque, February 2024 [online].
BIS, op.cit. ; McCaul, op.cit.
Ibid.
AI can help improve financial policies by helping to better understand financial developments and to improve the design of policy decisions. However, leveraging AI comes with a set of challenges.
By better understanding financial developments
Using the tools of bibliometric analysis, Bahoo et al.106 conduct a review of the already extensive research literature on the use of AI in finance. AI is applied to the stock market (e.g. stock price prediction), to trading models (to build intelligent automated trading systems), to volatility forecasting, to portfolio management (asset allocation designs), to performance, risk and default evaluation (prediction of financially distressed companies and of mortgage and loan default), to credit risk in banks (bank failure prediction, financial fraud detection and early warning systems), to investor sentiment analysis and to foreign exchange management. Eisfeldt and Schubert107 analyse the more recent GenAI tools as a technology shock to research in finance, liable to lower the time and monetary costs of existing research designs in finance108 and to enable new types of analyses. BIS109 provides examples of current avenues of central bank research in the banking, financial and payment systems domains: AI “is used to research standards and innovations that can enhance the resilience and robustness of payments systems, analyse the impact of banking regulations, calculate complexity measures of prudential and banking regulations etc”.
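One of the workhorse tasks in this list, volatility forecasting, has a simple classical form against which ML models are typically benchmarked: a RiskMetrics-style exponentially weighted moving average (EWMA) of squared returns. The returns series below is invented; the decay factor 0.94 is the conventional RiskMetrics choice for daily data.

```python
# EWMA (RiskMetrics-style) variance forecast: recent squared returns
# are weighted more heavily, with decay factor lam. A classical
# baseline that ML volatility models are benchmarked against.

def ewma_variance(returns, lam=0.94):
    var = returns[0] ** 2              # seed with the first squared return
    for r in returns[1:]:
        var = lam * var + (1 - lam) * r ** 2
    return var

returns = [0.01, -0.02, 0.015, -0.03, 0.02]  # invented daily returns
forecast_vol = ewma_variance(returns) ** 0.5
print(round(forecast_vol, 4))  # ~0.0136
```

The recursion makes the appeal of richer models clear: EWMA reacts mechanically and symmetrically to every squared return, whereas the ML approaches surveyed by Bahoo et al. try to learn asymmetries and regime changes from data.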
By better designing policy decisions
AI has the potential to help supervisors identify anomalies in real time and to help regulators better anticipate the impact of changes in regulation. Therefore, financial authorities’ investment in skills and resources must keep pace with developments, so that they remain able to critically assess AI developments and financial institutions’ use of AI. However, as noted by the IMF, the BIS110 and Aldasoro et al., leveraging AI for micro- and macroprudential policies comes with a set of challenges.
As regards microprudential policy, which concentrates on the supervision of individual financial institutions, the use of AI-powered SupTech – the application of financial technology by authorities for regulatory, supervisory, oversight and AML-CFT111 purposes – is making prudential supervision more efficient by enabling more sophisticated risk assessment models, including for credit and liquidity risks, and by improving the prediction of emerging risks for financial institutions. SupTech tools are used by a majority of authorities112, for example the ECB113. However, as underlined by BIS and McCaul114, authorities have to be mindful of associated risks. They should be able to explain how their SupTech tools work, given their potential black-box nature, just as banks should be able to explain how their internal models work, and they should provide clear guidelines for the use of AI in supervision. Furthermore, as tools targeting complex risk assessments that entail judgment become operational, BIS and McCaul115 emphasise that SupTech should support, rather than replace, supervisory judgment, so as to avoid supervisory blind spots and a broader loss of institutional knowledge. The effectiveness of supervision will always ultimately depend on human judgement and on an organisation’s risk culture.
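As a stylised stand-in for the anomaly-detection component of such SupTech pipelines (not the ECB’s or anyone’s actual tooling), consider a robust z-score screen over transaction amounts. It uses the median and the median absolute deviation rather than mean and standard deviation, so that a single extreme transaction cannot mask itself by inflating the dispersion estimate; all figures are synthetic.

```python
# Robust outlier screen for a SupTech-style monitoring sketch:
# flag amounts whose median/MAD z-score exceeds a threshold. The
# 0.6745 factor makes the MAD comparable to a standard deviation
# for normally distributed data; 3.5 is a common cutoff.
import statistics

def flag_anomalies(amounts, threshold=3.5):
    med = statistics.median(amounts)
    mad = statistics.median(abs(a - med) for a in amounts)
    return [a for a in amounts if 0.6745 * abs(a - med) / mad > threshold]

history = [100, 120, 95, 110, 105, 98, 102, 5000]  # one planted outlier
print(flag_anomalies(history))  # [5000]
```

The flagged transactions would then go to a human analyst, consistent with the point above that SupTech should support rather than replace supervisory judgment.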
As regards macroprudential policy, which focuses on the supervision of the financial system as a whole, there would be three main limitations to supervisors’ recourse to AI in its current state of development116. First, the Lucas critique, i.e. the fact that agents modify their behaviour when their environment changes: the introduction of a “macro AI” would itself be a change of environment and lead agents to change their decision rules, rendering the consequences of financial institutions’ adaptation to AI unpredictable. Second, the uniqueness of financial crises: each crisis has its own specific triggers and risk factors, which are not understood ex ante. Third, the rare-event character of financial crises, which leads to incorrect predictions when extrapolating from a few data points. These shortcomings seem to imply that, at least at the current stage, macroprudential policy has little, if anything, to say about the consequences of the spread of AI for its remit. Looking forward, Crisanto et al. hope that future advances – such as AI models able to engage in counterfactual reasoning and causal inference – may help to enhance the speed, scope and precision of macroprudential regulation, although the routine use of such methods is still far away. Indeed, AI provides the capability to analyse vast amounts of supervisory and market data, can help conduct more rigorous risk assessments to identify vulnerabilities faster, and can consider a broader range of potentially disruptive scenarios. It can thus improve the authorities’ capacity to model the financial stress that such scenarios can generate, so as to ensure timely prudential responses to new threats. These considerations suggest that, while AI will assist in collecting information and modelling parts of the problem, crisis decision-making will likely remain a human domain for the foreseeable future.
We do not believe that AI will trigger a major disruption of the economic or financial environment. This analysis differs from two commonly held beliefs. The first is the ‘nightmare’ scenario, in which a large part of the working population could be replaced by machines, leading to higher unemployment and inequality, as well as major financial crises, with robots freely implementing algorithms that would amplify market movements. The second is the ‘fairy tale’ scenario, in which robots would replace humans in most tedious and physically demanding tasks. This would reduce working time, both on a daily basis and over a lifetime, particularly for the least skilled, and would enable portfolios to be managed entirely passively, reducing risk but not returns.
Overall, a favourable environment should be provided for AI. Competition policies should ensure that rents are not entirely captured by a few dominant companies, while the regulatory environment should not stifle innovation.
In addition, labour regulation must allow for sufficient flexibility, while education and training, tax policy and human resource management in the public sector must be adapted. It is essential that funding for innovative companies be abundant and allocated by the most competent individuals and institutions, who should also be individually accountable for the decisions they take. This implies the development of venture capital and the creation of a CMU, complemented by an SIU118. AI does not call for specific policy instruments, either in the economic or the financial field. Rather, AI is both an indicator of the flaws and limitations of current public policies and a tool to partly remedy them, alongside the implementation of long-overdue structural reforms.