Barry Duffy | CalypsoAI’s Director of Commercial Product
WHAT IS THE LEGISLATION?
On April 21st, the European Commission issued a proposal for regulating AI systems. The draft language from the EU’s executive body takes a four-level “risk-based approach” to quantify the trustworthiness of AI systems and regulate them accordingly.

Unacceptable risk covers AI systems that manipulate human behavior to circumvent users’ free will (e.g. government ‘social scoring’ systems, or toys that use voice assistance to encourage dangerous behavior by minors) and the use of AI surveillance in public spaces by law enforcement (with certain exemptions). Systems in this category will be banned under the new legislation.
The High Risk category covers AI systems that pose significant risks to the health and safety or fundamental rights of persons (e.g. AI applications in robot-assisted surgeries, or autonomous systems that determine which citizens qualify for a loan). Systems in this category will be compelled to have a robust risk management framework.
“The proposal lays down a solid risk methodology to define “high-risk” AI systems that pose significant risks to the health and safety or fundamental rights of persons. Those AI systems will have to comply with a set of horizontal mandatory requirements for trustworthy AI and follow conformity assessment procedures before those systems can be placed on the Union market. Predictable, proportionate, and clear obligations are also placed on providers and users of those systems to ensure safety and respect of existing legislation protecting fundamental rights throughout the whole AI systems’ lifecycle.”
Limited risk systems (e.g. chatbots) are subject only to minimal transparency obligations, while minimal risk systems (e.g. spam filters) are free to use.
WHAT ABOUT AI USED IN INSURANCE?
Looking to bring AI in line with the people and processes covered by the EU Charter of Fundamental Rights, including consumer rights, the Commission’s proposed AI legislation takes aim at highly regulated activities. For instance, the proposal specifically references AI systems used by the financial sector for credit risk scoring, as well as the algorithms used by employers to promote or terminate employees. The regulation thus targets applications of AI that have the potential to directly affect people’s lives.
At present, insurance activities are not explicitly referenced in the EU’s proposed AI legislation. However, given the EU’s regime of continuous regulatory monitoring, one can expect AI systems used in insurance to be regulated under these guidelines in the near future. AI is increasingly used across the insurance industry, from determining a person’s eligibility for coverage and the cost of that coverage through to the claims determination process. It is therefore vital for insurance companies to look ahead and ensure their AI is auditable, secure, and reliable.
WHAT WILL BE REQUIRED FOR HIGH-RISK AI SYSTEMS?
Leaders in the insurance sector and other highly regulated industries seeking to accelerate AI across their organization, while adhering to the EU’s AI framework, must adopt a robust model risk management system to ensure that AI is trustworthy and secure.
“High-risk AI systems will be subject to strict obligations before they can be put on the market:
- Adequate risk assessment and mitigation systems;
- High quality of the datasets feeding the system to minimize risks and discriminatory outcomes;
- Logging of activity to ensure traceability of results;
- Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
- Clear and adequate information to the user;
- Appropriate human oversight measures to minimize risk;
- High level of robustness, security, and accuracy”– European Commission, “Europe Fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence.”
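To make the logging and traceability obligation concrete, the sketch below shows one way a provider might record each prediction alongside the model version and a hash of its inputs so that individual results can be traced and audited later. This is a minimal illustration in Python; the function name `log_prediction`, the JSON Lines file `audit_log.jsonl`, and the credit-scoring fields are assumptions made for the example, not part of the Commission’s proposal or any specific product.

```python
import json
import hashlib
from datetime import datetime, timezone

def log_prediction(model_id: str, model_version: str, features: dict, prediction,
                   log_path: str = "audit_log.jsonl") -> None:
    """Append one prediction record to a JSON Lines audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        # Hash the raw inputs so a record can be matched to its source data
        # without duplicating personal data in the log itself.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "features": features,
        "prediction": prediction,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: record a single (hypothetical) credit-scoring decision.
log_prediction(
    model_id="credit-risk-scorer",
    model_version="1.4.2",
    features={"age": 42, "income": 55000, "existing_loans": 1},
    prediction="approved",
)
```

In practice, records like these would feed into a broader model risk management workflow, with retention periods and access controls aligned to the documentation requirements listed above.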
Providers must ensure their AI is transparent, enabling users to understand and “control how the high-risk AI system produces its output.” Providers of high-risk AI systems must also disclose information such as the general logic and design choices of the system, the potential circumstances in which the AI could create risks to safety and rights, descriptions of the system’s training data, and “the level of accuracy, robustness, and security.”
A key obligation of providers is to ensure resilience against “attempts to alter [an AI system’s] use or performance by malicious third parties intending to exploit system vulnerabilities.”
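One simple way to illustrate this kind of resilience testing is to measure how stable a model’s predictions are under small input perturbations. The sketch below is not a full adversarial defense and is not prescribed by the regulation; the toy model, synthetic data, and the function name `perturbation_stability` are all invented for the example. It only shows the shape of a repeatable test a provider could record as evidence of robustness.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Train a toy classifier on synthetic data (stand-in for a real model).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

def perturbation_stability(model, X, epsilon=0.05, n_trials=20, seed=1):
    """Fraction of samples whose predicted label stays unchanged under
    small random perturbations of magnitude `epsilon` per feature."""
    rng = np.random.default_rng(seed)
    baseline = model.predict(X)
    stable = np.ones(len(X), dtype=bool)
    for _ in range(n_trials):
        noise = rng.uniform(-epsilon, epsilon, size=X.shape)
        stable &= model.predict(X + noise) == baseline
    return stable.mean()

score = perturbation_stability(model, X)
print(f"Prediction stability under ±0.05 perturbations: {score:.2%}")
```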
Companies that fail to comply with the proposed rules face fines of up to 6% of global turnover or 30 million euros ($36 million), whichever is higher.
HOW CAN COMPANIES ADHERE TO THESE NEW STANDARDS?
By implementing a robust Model Risk Management (MRM) process supported by a product like CalypsoAI’s VESPR, mission owners can ensure compliance while accelerating the development and deployment of AI/ML models.
VESPR provides advanced AI testing capabilities within a streamlined workflow, ensuring that every machine learning algorithm put into production has been verified and that the integrity, trust, and security of those algorithms is maintained. The system’s record keeping allows for a deep audit of all artifacts considered and decisions made throughout a model’s lifecycle. The end result is a set of trustworthy AI systems that users can have high confidence in and can readily understand.

Barry has designed and delivered software solutions for some of the world’s largest insurers. For over 15 years, his work has spanned early engagement, pre-sales, design, delivery, and go-live support. Before joining CalypsoAI in 2021, he was the Global Product Manager for the FINEOS Claims system.