Responsible AI: Is Artificial Intelligence trustworthy?
Artificial intelligence has long been part of our professional and personal everyday lives. Alongside its many benefits, however, the risks associated with AI are increasingly coming into focus. To prevent harm and bias and to build trust in the technology, a clear definition of trustworthy AI is needed.
Under the term Responsible AI, expert groups worldwide are currently working on guidelines for what artificial intelligence must satisfy. In its ethics guidelines for the EU, the European Commission's High-Level Expert Group on AI (HEG-KI) defines two necessary components of trustworthy AI: an ethical purpose and technical robustness.
The first component, ethical purpose, concerns respect for fundamental rights, applicable law, and societal principles and ethical values. The second component, technical robustness, describes the requirement for reliability and for protection against harm caused by technical defects and the associated loss of technical control.
Based on these cornerstones, there are fundamental requirements for the development of trustworthy AI that must be met throughout the entire life cycle:
· Data quality management
· Human oversight (of AI autonomy)
· Respect for human autonomy
· Privacy protection
To actually verify compliance, appropriate end-to-end testing strategies are needed that span the identification of the need for a system, its development, its deployment, and its ongoing monitoring. For example, researchers working with Google and the Partnership on AI have developed the SMACTR framework (Scoping, Mapping, Artifact collection, Testing and Reflection), which can be used to conduct audits before an AI model is delivered to customers. Companies that build AI products should be aware of such frameworks and establish corresponding auditing strategies within their organizations.
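To make the audit idea concrete, the five SMACTR stages can be tracked as a simple release-gating checklist. The sketch below is purely illustrative: the stage names come from the framework, but the `AuditRecord` class, its methods, and the example model name are hypothetical assumptions, not part of the published SMACTR process.

```python
from dataclasses import dataclass, field

# The five SMACTR stages (from the framework itself).
SMACTR_STAGES = ["scoping", "mapping", "artifact_collection", "testing", "reflection"]

@dataclass
class AuditRecord:
    """Hypothetical tracker for an internal pre-release AI audit."""
    model_name: str
    completed: dict = field(default_factory=dict)  # stage -> notes

    def complete_stage(self, stage: str, notes: str) -> None:
        # Reject anything outside the defined audit stages.
        if stage not in SMACTR_STAGES:
            raise ValueError(f"unknown audit stage: {stage}")
        self.completed[stage] = notes

    def ready_for_release(self) -> bool:
        # The model ships only once every stage is documented.
        return all(stage in self.completed for stage in SMACTR_STAGES)

audit = AuditRecord("credit-scoring-v2")  # example name, assumed
for stage in SMACTR_STAGES[:-1]:
    audit.complete_stage(stage, "documented")
assert not audit.ready_for_release()  # reflection stage still open
audit.complete_stage("reflection", "lessons learned recorded")
assert audit.ready_for_release()
```

The point of the sketch is the gating logic: no single stage result decides release readiness; the audit is only complete when every stage has an artifact behind it.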