Legal, social, and ethical compliance requirements for AI systems are ever increasing. Market requirements relevant to European AI manufacturers have been formulated by the European Commission's High-Level Expert Group on Artificial Intelligence (HLEG AI) and, at Germany's national level, by the German government's Data Ethics Commission (DEK) and the German Bundestag's Commission of Inquiry on Artificial Intelligence (EKKI). Requirements issued by such commissions or working groups remain non-binding until they are transformed into sovereign law - for example, the EU AI Act, which is expected to be passed and to come into force around 2025. This brings the issue of translating values into the technical functions of an AI system into sharper focus. But how does an AI "learn" values?
Known software quality attributes do not allow for "ethical" AI
Since 2017, roughly 80 documents on AI compliance have been issued worldwide. They all converge on similar requirements: what is demanded is "robust", "trustworthy", or "ethical" AI. Thus, in addition to compliance with applicable law, consideration of ethics and morality is also required - with morality, whose "questions of what gives our lives meaning or fulfillment" are, according to prevailing legal opinion, regarded as subjective individual sentiment rather than an objective element. Accordingly, implementing "emotions" in a system is difficult. "Technical robustness and safety" or "transparency", two of the requirements stated in the AI Ethics Checklist of the HLEG AI, are merely self-evident quality attributes of technical systems, of the kind described in software specifications before agile software development, with its very different documentation requirements, began to prevail. They are definitely not suited for casting core values such as peace or justice, or virtues such as truthfulness, restraint, or courage, into system functions.
A new engineering standard brings together humanities and engineering
The difficulty of transforming values into system functions is addressed by the IEEE 7000™-2021 standard, released in September 2021. It creates the world's first fusion of humanities and engineering through the social innovation of a new profession in systems engineering, the so-called Value Lead, and provides the foundation for collecting, connecting, translating, and tracing values into system functions. Value Leads not only bring technical understanding to the table; more importantly, they are trained in axiology and theories of ethics. These form the basis of their training, which is followed by three hands-on systems engineering projects: two theoretical projects carried out together with computer science students and supervised by a Value Lead coach, and one practical implementation of a commercial AI system for consumers or industry. The goal is to create better systems and to ensure that future AI systems also meet the ESG compliance requirements that technology investors place on their investments.
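The core idea of collecting values, translating them into verifiable requirements, and tracing those into system functions can be illustrated with a minimal sketch. The Python structure below is purely illustrative and does not reproduce the standard's normative text; the class and field names (ValueRegister, ValueRequirement) are hypothetical, loosely inspired by the standard's notion of a value register.

```python
from dataclasses import dataclass, field

@dataclass
class ValueRequirement:
    """A stakeholder value translated into one verifiable requirement."""
    value: str                 # e.g. "transparency"
    requirement: str           # e.g. "log every automated decision"
    system_functions: list = field(default_factory=list)  # traced functions

class ValueRegister:
    """Illustrative register tracing values to system functions."""

    def __init__(self):
        self.entries = []

    def add(self, value, requirement):
        """Record a value requirement and return it for later tracing."""
        entry = ValueRequirement(value, requirement)
        self.entries.append(entry)
        return entry

    def trace(self, entry, function_name):
        """Link a value requirement to a concrete system function."""
        entry.system_functions.append(function_name)

    def untraced(self):
        """Value requirements not yet realized by any system function."""
        return [e for e in self.entries if not e.system_functions]

# Usage: register two value requirements, trace one, and check coverage.
reg = ValueRegister()
transparency = reg.add("transparency", "log every automated decision with its inputs")
justice = reg.add("justice", "audit model outputs for disparate impact")
reg.trace(transparency, "DecisionLogger.record")
gaps = reg.untraced()  # the "justice" requirement is still unrealized
```

Such an explicit traceability structure is what distinguishes the standard's approach from a one-off ethics checklist: every value must either be traced to a function or show up as a documented gap.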
21strategies is the first company in Germany to seek IEEE 7000™-2021 certification for a dedicated AI system. But the company's efforts do not stop there: the experience gained along the way will be compiled and made available as part of a university's efforts to promote young talent in better systems engineering.
For more information, contact: firstname.lastname@example.org
Author: Yvonne Hofstetter