What do banks, insurers and other financial institutions need to know about AI compliance under EU law?
The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) (the "AI Act") entered into force on 1 August 2024 (with phased implementation beginning 2 February 2025 and general application from 2 August 2026), establishing a legal framework based on a proportionate risk-based approach to AI. This article examines recent developments concerning the interplay between the AI Act and the EU financial sector, as reported by EU authorities and institutions. For background on the AI Act, including its risk-based classification framework, prohibited AI practices, and obligations for high-risk AI systems, readers may refer to our July 2024 article 'Ground-breaking worldwide Artificial Intelligence Act'. The implementation of the AI Act into Luxembourg law, including the designation of notifying and surveillance authorities under Draft Law No. 8476, is discussed in our January 2025 newsletter, 'Luxembourg draft law implementing the EU Artificial Intelligence Act'.
The AI Act classifies certain AI applications in financial services, particularly those evaluating creditworthiness or pricing insurance, as "high-risk", subjecting them to stringent requirements for data governance, transparency, human oversight and risk management. The AI Act adopts a four-tier risk classification (unacceptable, high, limited and minimal risk), with high-risk AI systems subject to mandatory conformity assessments and specific obligations regarding data governance, technical documentation, human oversight and cybersecurity. Because the activity of financial institutions is already covered by extensive EU regulation on risk management, consumer protection, data governance and operational resilience, the risk of duplicated obligations and overlapping compliance frameworks called for further assessment.
Two key developments in November 2025 shed light on how these frameworks interact: the European Banking Authority's (EBA) mapping exercise results published on 21 November 2025, and the European Parliament's resolution on AI's impact on the financial sector, issued on 25 November 2025.
EBA AI Act Mapping Exercise: key findings for credit scoring and creditworthiness assessment
The EBA established a dedicated workstream in January 2025 to map AI Act requirements for AI systems used in creditworthiness assessment or credit scoring of natural persons, classified as high-risk in Annex III(5)(b) of the AI Act. To promote a unified understanding of the AI Act's implications for the EU banking and payments sector, the EBA assessed these requirements against existing sectoral frameworks (such as the Capital Requirements Directive, the Capital Requirements Regulation, the Digital Operational Resilience Act and the Consumer Credit Directive).
The results have been published as a factsheet and transmitted in a letter addressed to the European Commission (EC). The key takeaway is reassuring: the AI Act complements rather than conflicts with existing financial regulation. The EBA found no significant contradictions between the AI Act and EU banking and payments legislation, which already provides a comprehensive framework to manage risks stemming from the use of technologies, including AI.
The EBA identified three ways in which AI Act requirements interact with existing EU financial sector law:
- First, derogation applies where EU sectoral obligations replace relevant AI Act obligations, notably for quality management systems and post-market monitoring for deployers.
- Second, integration or combination applies where risk management and governance arrangements under EU banking law provide a framework to integrate AI Act obligations, though some adaptation may be required.
- Third, where no regulatory synergy is explicitly envisaged, existing frameworks like DORA and CRR/CRD still provide a solid base for implementation.
For now, the EBA sees no immediate need to introduce new guidelines or revise existing ones. However, the EBA emphasises that supervisory cooperation will be crucial, as financial entities will be overseen by multiple authorities under both regimes.
European Parliament Resolution on AI in Finance: no new legislation needed
On 25 November 2025, the European Parliament adopted a resolution on the impact of artificial intelligence on the financial sector spearheaded by Arba Kokalari (the "Resolution").
The Resolution acknowledges the broad adoption of AI across the EU financial services sector, recognising its significant potential to boost efficiency, innovation and competitiveness. Beneficial applications include fraud detection and prevention, anti-money laundering checks, transaction monitoring, personalised financial advice, ESG data analysis and regulatory compliance assistance. The Resolution emphasises that the benefits of AI use should be passed on to end customers through lower prices, improved financial advice, greater financial inclusion and enhanced financial literacy.
The Resolution identifies significant risks associated with AI deployment in financial services, including data quality issues leading to discriminatory outcomes, model opacity, privacy concerns, cybersecurity vulnerabilities and explainability challenges. The Resolution also flags concentration risk arising from dependency on a limited number of third-party AI providers, with potential systemic risks in case of disruptions.
A call for guidance and innovation
A central idea put forward by the Resolution is that no new legislation is needed for AI use in the financial sector. A majority of the Parliament considers existing sectoral legislation largely sufficient to cover AI deployment in its current form; additional legislation would add complexity and uncertainty and risk depriving the sector of the benefits of AI.
Instead, the Resolution calls on the Commission to provide clear and practical guidance on, among other points, how existing financial services legislation applies to AI, and to explore how AI could be used for automation in strictly regulated areas such as intermediation, portfolio management and compliance.