Report – Artificial intelligence in finance: how to trust a black box?

05 March 2025

Report

The use of AI in financial services conflicts with the core principles underlying decision-making in finance and financial regulation – accountability, responsibility and transparency. This report calls on policymakers to reassess the regulatory framework, ensuring the protection of consumer interests and safeguarding financial stability.

A disconnect between AI and the principles of financial regulation

Artificial intelligence is here to stay. But while the potential to increase productivity and profit is clear, there is a fundamental tension between the way AI functions – detecting correlations in data – and the principles of causality and accountability that underpin financial regulation. 

AI systems generate outputs without clear explanations of their reasoning. This black-box nature places their decisions beyond human analytical ability and renders oversight impractical, if not impossible.

Black-box logic threatens consumers, supervision and market stability

  1. Consumer protections are under threat. In retail finance, the deployment of AI could lead to opaque credit assessments, discriminatory pricing and lending, and misleading financial advice, resulting in financial exclusion that disproportionately affects marginalised consumers.
  2. Supervisors face AI challenges. Supervisors tasked with enforcing regulation struggle to keep pace with financial institutions' deployment of AI, hampering their ability to deliver on their mandates.
  3. Market stability is at risk. Increasingly dependent on third-party AI providers, financial institutions face operational risks from unregulated external systems, as well as concentration risks – a handful of dominant AI firms control critical models and infrastructure, creating systemic vulnerabilities. Data manipulation and the eventual exhaustion of human-generated data for training AI models pose further threats, potentially producing nonsensical outputs and making falsehoods harder to detect.

Despite the dangers AI-powered finance poses, the shift in focus by EU leaders away from safety towards a wholesale embrace of AI sends a message that profits and competitiveness come first – consumer protections and market stability last.

Call to policymakers

Without clear regulatory guardrails and accountability mechanisms, the use of AI in financial services introduces risks that are difficult to detect and control, threatening consumer protection and market stability while undermining trust in the wider financial system. 

To mitigate these challenges and vulnerabilities, Finance Watch calls on policymakers to navigate the trade-off between maximising AI’s efficiency gains and ensuring broader financial and societal safeguards.

Key recommendations

  1. The European Commission should expand the scope of the AI Act to cover all financial services.
  2. The European Commission should reintroduce the AI Liability Directive proposed in 2022 and withdrawn in 2025. The directive would establish a clear liability regime that holds providers of AI-powered services accountable for damages caused by outputs of AI systems.
  3. The European Commission should evaluate the legal and technical potential for financial supervisors to enforce existing EU regulation for AI-powered financial services.
  4. The European Commission should conduct a gap analysis identifying necessary updates to existing financial regulations, as well as amend the relevant legislative texts to ensure investor, consumer, and societal protections in an AI-driven financial sector.

Read the report