Ethics and the EU AI Act

As an ethically focused AI scale-up, Citibeats regards robust governance and transparency standards as a core objective. The accuracy and trustworthiness of its underlying AI systems are critical to delivering valuable social insights to customers through the Citibeats platform. This is achieved through effective privacy controls, bias analysis, and transparency procedures, many of which are mandated by the European Union’s draft Artificial Intelligence Act (the “EU AIA”).


The EU AIA represents the world’s first attempt to implement horizontal regulation of AI systems. Due to the ‘Brussels Effect’, it is predicted to become the de facto standard setter globally. Analysts currently expect the EU AIA to come into force towards the end of 2024, and ethically focused businesses are now keen to get a head start. Many of the requirements under the Act, such as the obligation to establish a risk management system and keep technical documentation, make sense regardless of the regulatory imperative. As the AI space matures, these controls will become essential for any organisation building AI. You can find out more about the EU AIA, along with an overview of the latest proposed updates, here.

Conducting the conformity assessment

Articles 16 and 43 of the draft EU AIA require organisations building certain types of AI to conduct a conformity assessment. The goal of a conformity assessment is to ensure that a given system is legally compliant, ethically sound, and technically robust, which is often easier said than done. The regulatory AI landscape is growing increasingly complex, with a myriad of global standards now emerging. These can often be confusing and difficult to translate into actionable insights.

Through a partner’s platform, Citibeats has been able to navigate an array of global standards (such as the capAI assessment, the NIST risk management framework, the UK’s algorithmic transparency standard, etc.) to assess its system against the five key lifecycle phases: design, development, evaluation, operation, and retirement. The platform has helped Citibeats translate the ethical principles behind these standards into actionable insights, which can be used to verify whether a system aligns with fundamental EU values and rights.
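A phase-by-phase assessment like this can be represented very simply in code. The sketch below is illustrative only: the phase names come from the text above, but the individual check names are hypothetical stand-ins for the real criteria, which would be drawn from frameworks such as capAI or the NIST AI Risk Management Framework.

```python
from dataclasses import dataclass, field

# The five lifecycle phases named in the text.
PHASES = ["design", "development", "evaluation", "operation", "retirement"]


@dataclass
class PhaseAssessment:
    """Records pass/fail checks for one lifecycle phase."""
    phase: str
    checks: dict[str, bool] = field(default_factory=dict)

    def passed(self) -> bool:
        # A phase passes only if it has checks and all of them pass.
        return bool(self.checks) and all(self.checks.values())


def assessment_summary(assessments: list[PhaseAssessment]) -> dict[str, bool]:
    """Map each assessed lifecycle phase to its overall pass/fail status."""
    return {a.phase: a.passed() for a in assessments}


# Hypothetical checks for two of the five phases.
design = PhaseAssessment("design", {
    "intended_use_documented": True,
    "risk_register_created": True,
})
evaluation = PhaseAssessment("evaluation", {
    "bias_metrics_reported": False,
})

summary = assessment_summary([design, evaluation])
# summary["design"] is True; summary["evaluation"] is False,
# flagging the evaluation phase for remediation.
```

In practice each check would be backed by evidence (documents, test results), but even this minimal structure makes gaps visible at a glance.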

Drawing on leading frameworks to complete the analysis, Citibeats has been able to conduct an important compliance review, and generate three key governance outputs as a result of this evaluation process:

1. an internal review protocol, which provides it with a well-defined governance framework for quality assurance and risk management along with the relevant technical documentation;

2. a summary datasheet; and

3. a model card, which may be distributed to customers and stakeholders as evidence of good practice and conscious management of ethical issues.
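To make the third output concrete, a model card is essentially a structured summary of a model's purpose, data, performance, and limitations. The sketch below shows one minimal way to represent and serialise such a card; all field names and example values are illustrative assumptions, not Citibeats' actual model card format.

```python
import json
from dataclasses import dataclass, asdict


@dataclass
class ModelCard:
    """A minimal model card, serialisable for distribution to stakeholders."""
    model_name: str
    intended_use: str
    training_data_summary: str
    evaluation_metrics: dict
    ethical_considerations: str
    limitations: str

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)


# Hypothetical example card.
card = ModelCard(
    model_name="example-topic-classifier",
    intended_use="Aggregate analysis of public text; not for decisions "
                 "about individuals.",
    training_data_summary="Anonymised multilingual public posts (hypothetical).",
    evaluation_metrics={"macro_f1": 0.87},
    ethical_considerations="Outputs reviewed for demographic bias before release.",
    limitations="Performance degrades on low-resource languages.",
)

print(card.to_json())
```

Because the card serialises to JSON, the same artefact can be published to customers and stored alongside the technical documentation required by the internal review protocol.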

There are many clear advantages to implementing such a governance process. Firstly, it goes beyond ‘responsible AI’ rhetoric and translates elusive ethical principles into tangible criteria which can be used to verify whether a system aligns with fundamental European values and rights. Secondly, it is comprehensive and holistic, spanning all stages of the AI lifecycle. Finally, this ethically guided approach allows Citibeats to continue driving trust and transparency in AI technologies for its customers.