By Tanja Zimmermann
Photo: Andi Schmid
In autumn 2020, the OECD counted some 300 documents issued by governments, NGOs, and corporations dealing with AI, among them around 80 that address AI risks and threats and lay down principles of ethical AI. At the Handelsblatt KI Summit on October 1, 2021, Prof. Yvonne Hofstetter, CEO of 21strategies, spoke on the legitimation of AI ethics boards. In her keynote, she addressed the normative AI ethics claims made by AI ethics councils and AI working groups.
The Responsible Use of AI
Prof. Yvonne Hofstetter is Honorary Professor for Digitization and Society at Bonn-Rhein-Sieg University of Applied Sciences and a member of the Centre for Ethics and Responsibility (ZEV). In her role as CEO of 21strategies, she attaches great importance to ethical aspects in AI practice. AI developed by 21strategies will follow the new IEEE 7000™ standard for value-based design in order to limit the risks and threats that arise with AI. 21strategies is known for its use of artificial intelligence and algorithmic decision support systems. Applications include optimal decision-making under uncertainty, especially in the business sector, and optimal hedging of financial market risks.
Ethics, AI, and Algorithmic Decision-Making – Use Case Facial Recognition
Algorithmic decision systems are already part of our lives. Well-known examples include facial recognition and the determination of school-leaving grades in the United Kingdom. Because of the coronavirus pandemic, school-leaving exams were cancelled in the UK in 2020. Instead, an algorithmic decision system was used to determine probable final grades for students. It quickly became apparent how crude the system was: students received lower grades than expected, which triggered massive protests. Some placards summed up the problem: "Rate my performance, not my zip code." But the protest was directed not only against the specific system, but against the underlying issue: social justice can be circumvented or even undermined by algorithmic decision-making systems. The students' protest had an effect, and the algorithmically determined grades were withdrawn.
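To make the "zip code" complaint concrete, here is a minimal, purely illustrative sketch in Python. It is not the actual model used in the UK; the function, weighting, and numbers are invented. It only shows how a predictor that leans on a school's historical results rather than on the individual student systematically pulls down grades at historically lower-performing schools.

```python
# Illustrative sketch only: NOT the actual UK grading model.
# It shows how blending an individual estimate with a school's historical
# average shifts grades toward the school's past results.

def predict_grade(teacher_estimate: int, school_history_avg: float,
                  weight_history: float = 0.7) -> int:
    """Blend an individual estimate with the school's historical average.

    The higher `weight_history`, the more a student's grade is driven by
    where they went to school rather than by their own performance.
    """
    blended = (weight_history * school_history_avg
               + (1 - weight_history) * teacher_estimate)
    return round(blended)

# A strong student (teacher estimate: grade 9) at a school whose historical
# average is grade 5 is pulled down toward the school average.
print(predict_grade(teacher_estimate=9, school_history_avg=5.0))  # -> 6
```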
Facial recognition technologies are used in many countries, and in very different ways. At first glance, one might assume that the aim is only to identify criminals. But the range of applications keeps growing: it includes not only border surveillance but also checks in stadiums, schools, and casinos. Some of these technologies are already being extended to include voice recognition.
Ethics in AI – How Important Is a Participatory Society?
The examples show that citizens' skepticism about the use of artificial intelligence and algorithmic decision-making systems is justified: ethical principles may have been defined, but legal rules, let alone certification, are still missing. The problem is not the technology as such, but its fair use. It is therefore necessary to discuss how the algorithms should be used. The prerequisite is a well-informed and participatory society, because responsibility lies with the citizens themselves. Artificial intelligence and algorithmic decision-making systems should be designed in such a way that they contribute to sustainability, transparency, human control, safety, and security. Or, as Angela Merkel put it: demystify artificial intelligence. Why? Because the responsibility lies with us!
Find out how 21strategies deploys artificial intelligence and algorithmic decision support systems by visiting our website.