Integrating Ethics into AI Development: How VBE Supports Compliance with the EU AI Act

Discover how Value-Based Engineering (VBE) helps organizations align AI system design with ethical principles and EU AI Act compliance.

AI is undoubtedly permeating nearly every aspect of life, from healthcare to finance and public services. Alongside its great potential, concerns are being raised about the risks these systems pose, such as undermining democracies and perpetuating societal biases. In response, regulations like the EU AI Act aim to ensure the safety of AI systems. Implementing these regulations remains a challenge, however, as organizations struggle to translate broad principles into concrete design choices. In practice, compliance is too often treated as an afterthought, addressed only at the final stages of development. Such an approach can lead to costly redesigns or regulatory breaches. A more effective approach is needed, and Value-Based Engineering (VBE) offers one.

VBE makes ethical compliance an integrated part of the system development life cycle. Instead of viewing compliance as a final hurdle, VBE embeds ethical considerations into the early stages of system design. Because system requirements are the endpoint of the VBE effort, the resulting systems are built to minimize the probability that stakeholder values will be harmed. This blog post outlines five ways in which Value-Based Engineering not only helps organizations on the path to compliance with regulations such as the EU AI Act, but also tackles some of the issues and ambiguities in these regulations.

Insight 1: VBE Provides a Context-Specific Approach to System Analysis

One of VBE’s most important qualities is that it is context specific. This means that the system requirements we end up with after a VBE analysis are tailored to address the specific context in which the AI system operates. This is vital because AI systems are not one-size-fits-all. Although certain characteristics are shared across systems, they often function within unique environments, serve different user bases, and address distinct challenges.

VBE tells us that we cannot impose standardized requirements across all systems. Instead, we need to identify risks, ethical concerns, and technical adjustments in direct relation to how the AI is used in practice. For example, an algorithm designed to improve fairness in hiring recommendations on LinkedIn won’t necessarily translate to TikTok’s content recommendation system. Both are social media networks, but users engage with them differently. The challenges of fairness, bias, and transparency take on different forms in each case, so applying a uniform approach would be ineffective.

Insight 2: VBE Provides a Bottom-Up Approach to Risk Assessment

Determining risk levels is one of the key challenges in aligning AI systems with the EU AI Act. The act takes a top-down approach by classifying AI systems into risk categories such as unacceptable risk, high risk and limited risk systems. While the AI Act does give examples of the types of systems that belong to each category, the categorization is not always straightforward.

Consider a Bible chatbot that offers spiritual conversations based on biblical texts. Under the EU AI Act, its risk level is ambiguous: does it qualify as high-risk, or does it fall into one of the lower-risk categories? The predefined categories alone cannot provide a clear answer. A VBE analysis would approach this question differently: it would examine how users engage with such chatbots and identify the risks that emerge from those interactions. Could users become emotionally dependent? Could misinformation about religious texts spread? Could privacy concerns arise from the sensitive nature of user inputs?

By using VBE’s bottom-up approach, these questions are asked during the analysis, and the identified risks can be mitigated by defining system requirements. In this way, VBE complements the EU AI Act with a more nuanced and context-aware framework for AI risk assessment.

Insight 3: VBE Derives Values Bottom-up through Stakeholder Engagement

At its core, a compliant AI system is one that respects its users’ values and rights. The EU AI Act illustrates this point by designating the EU Charter of Fundamental Rights as the reference point for the values AI systems must protect. This approach of using predefined lists as a starting point for trustworthy AI is shared by many international organizations and high-level expert groups: AlgorithmWatch has compiled 173 such guidelines from bodies ranging from the OECD to Microsoft. These lists aim to guide organizations’ innovation efforts. But do fixed lists of guidelines scale with the increasing complexity of systems?

Relying on fixed lists presents several challenges. First, there is no universal agreement on a single list, as different organizations define and prioritize values differently. Values are also highly context-dependent; for example, “trust” in the humanities differs from “trusted computing” in cybersecurity. On top of that, many predefined lists suffer from logical inconsistencies by combining fundamentally different types of values, such as beneficence (an ethical principle) with privacy (a legal right). Finally, and perhaps most importantly, rigid value lists foreclose the discovery of new, context-specific values that matter to users.

VBE acknowledges these challenges and does not take a predefined list of values as its starting point. Instead, VBE draws on philosophical approaches such as virtue ethics and duty ethics to engage stakeholders and help them identify the values that are relevant to them. At a later stage, VBE cross-references the stakeholder-derived values with predefined lists such as the EU Charter of Fundamental Rights. This step helps uncover values that did not come up in the stakeholder conversations. VBE’s holistic approach to value definition ensures that system requirements encapsulate both stakeholder-defined, context-specific values and legally required ones.

Insight 4: VBE Takes a System-Wide Approach to Ethical Risks

Many AI risk assessment approaches focus only on the technical aspects of an AI system: data, models, and algorithms. However, these AI systems are usually one component within a broader “System of Systems.” Thus, the AI component is just one source of value risk, and a truly compliant system is one where risks are addressed on a system-wide basis.

VBE addresses this by treating compliance at a systemic level rather than as a matter of fixing isolated technical flaws. It takes into account organizational processes, decision-making structures, and societal impact, resulting in a more robust, context-aware approach.

Insight 5: VBE as a Pathway to AI Governance with ISO42001

Finally, Value-Based Engineering naturally supports AI governance when viewed against standards such as ISO42001, an internationally recognized standard that specifies requirements for building and maintaining AI management systems. The parallels between VBE and ISO42001 show how VBE can serve as a practical tool for organizations defining and implementing AI governance strategies. More specifically, VBE’s components resemble many of ISO42001’s requirements for achieving trustworthy AI systems. For example, VBE’s “concept and context exploration” mirrors ISO42001’s requirement to assess the context of the organization, and VBE’s stakeholder analysis corresponds to ISO42001’s requirement to understand the needs and expectations of interested parties.

By applying VBE’s structured methodology across the system development lifecycle, organizations can build AI systems that are not only compliant with regulations like the EU AI Act, but also robust enough to ensure long-term accountability and trustworthiness.
