New Toolkit Launched to Help Businesses Navigate High-Risk AI Compliance Under the EU AI Act

[Image: The HEAT logo alongside the logos of the University of Cambridge, the Leverhulme Centre for the Future of Intelligence, Accenture, and Ammagamma.]

Cambridge, UK – As the first rules of the EU Artificial Intelligence Act take effect, an academic-industry partnership between the Leverhulme Centre for the Future of Intelligence (LCFI) at the University of Cambridge and Accenture’s Center for Advanced AI in Italy has launched the beta version of the High-risk EU AI Act Toolkit (HEAT)—a groundbreaking compliance and ethics resource designed to help AI providers meet regulatory obligations while advancing responsible AI development.

With high-risk AI systems now subject to strict EU regulations, HEAT offers product managers and AI project leads step-by-step, software-based guidance to make all aspects of compliance clear, straightforward, and actionable. It facilitates collaboration across teams and departments, helping companies document and evaluate risk, implement robust data governance strategies, and ensure compliance across their AI workflows.

“This toolkit ensures AI compliance isn’t reduced to a bureaucratic burden but instead supports fairness, transparency, and accessibility," said Dr Eleanor Drage, Senior Research Fellow at LCFI. "We connect users with the most up-to-date responsible AI methods, ensuring they not only document compliance but also integrate cutting-edge ethical AI practices into their development process."

Beyond compliance, HEAT is designed as a pro-justice, step-by-step guide that encourages AI teams to critically assess whether AI is the right solution for their problem, fostering a culture of responsible innovation.

"HEAT embeds participatory AI methods and environmental sustainability considerations into the workflow," said Dr Tomasz Hollanek, Research Fellow at LCFI. "It helps companies future-proof their AI products by considering what the Act suggests as desirable today—and what may become compulsory in the near future, such as assessing AI’s environmental impact."

The beta release of HEAT coincides with the Artificial Intelligence Action Summit in Paris (February 10-11, 2025), where world leaders, business executives, and researchers will discuss the future of AI governance. Dr Hollanek will also present the toolkit at the Participatory AI Symposium on February 8, an official Summit fringe event.

"HEAT is the result of a unique academia-industry collaboration," said Cosimo Fiorini, AI Decision Science Manager at Accenture’s Center for Advanced AI in Italy. "It balances research-driven insights with the practical realities of AI product development. We wanted to create something that moves beyond compliance while remaining incredibly hands-on—so businesses can take immediate action without getting lost in legal complexity."

As the EU AI Act comes into force, HEAT provides companies with a future-proof, ethically grounded approach to AI compliance. By embedding responsible AI principles at the core of AI development, the toolkit helps organizations meet both regulatory requirements and societal expectations.

The HEAT beta version will continue to evolve as new guidance from the European Union emerges.

[Image: Diagram illustrating the seven interconnected 'spaces' of HEAT.]

🔹 For more information about HEAT and the team behind it, visit: www.lcfi.ac.uk/research/project/eu-ai-act-toolkit
🔹 To start using the beta version of HEAT, go here: https://aiact.cloud.ammagamma.com/
