Trustworthy AI
Knowledge Base

Accountability

Accountability ensures that organizations and individuals are responsible for the outcomes and responses produced by their AI workloads. This principle emphasizes the need for clear lines of responsibility and mechanisms for addressing issues that arise from AI deployment. Organizations must be prepared to acknowledge and take corrective action when AI systems cause harm or operate incorrectly.

Further, corrective actions must be timely. Organizations must be resourced to resolve harmful, incorrect, or other issues that run counter to Responsible AI (RAI) principles, treating them with the urgency of a critical error in a Tier 1 core business system. The programmatic and technical proficiency to diagnose, triage, and remediate such issues is a core competency for any organization deploying AI tools.
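
To make this concrete, here is a minimal sketch of how RAI issues might be routed through the same severity ladder an organization already uses for core-system incidents. All names (RAIIssue, Severity, triage, response_deadline) and the specific time targets are illustrative assumptions, not a prescribed process:

```python
# Sketch: triaging RAI issues with the same urgency rules as core systems.
# Severity levels and deadlines below are hypothetical placeholders.
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum


class Severity(Enum):
    SEV1 = 1  # harmful output in production: treat like a Tier 1 outage
    SEV2 = 2  # incorrect output live in production, no direct harm yet
    SEV3 = 3  # quality or policy drift caught before user impact


@dataclass
class RAIIssue:
    description: str
    causes_harm: bool         # did the behavior harm a user or third party?
    affects_production: bool  # is the faulty behavior currently live?
    severity: Severity = Severity.SEV3

    def triage(self) -> None:
        """Assign severity using the same rules as core business systems."""
        if self.causes_harm and self.affects_production:
            self.severity = Severity.SEV1
        elif self.affects_production:
            self.severity = Severity.SEV2

    def response_deadline(self, opened_at: datetime) -> datetime:
        """Map severity to a time-boxed resolution target."""
        targets = {
            Severity.SEV1: timedelta(hours=1),
            Severity.SEV2: timedelta(hours=8),
            Severity.SEV3: timedelta(days=5),
        }
        return opened_at + targets[self.severity]


issue = RAIIssue("Model produces discriminatory loan denials",
                 causes_harm=True, affects_production=True)
issue.triage()
print(issue.severity, issue.response_deadline(datetime.now()))
```

The design point is that RAI issues do not get a separate, slower queue; they enter the existing incident pipeline and inherit its deadlines.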

Self-driving cars illustrate the need for accountability. If an autonomous vehicle is involved in an accident, there must be a framework for determining responsibility: the manufacturer needs processes to investigate the incident, correct any faults in the system, and provide appropriate remedies to those affected.
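
One way to picture such a framework is as an incident record that makes responsibility traceable: which subsystem is implicated, which team owns the investigation, and what remedies were issued. The sketch below uses entirely hypothetical names (VehicleIncident, Remedy, close) and does not reflect any real manufacturer's process:

```python
# Sketch: an incident record that cannot be closed until a root cause
# and at least one remedy are documented. All fields are illustrative.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Remedy:
    affected_party: str
    action: str  # e.g. compensation, software recall, public report


@dataclass
class VehicleIncident:
    incident_id: str
    implicated_subsystem: str            # e.g. "perception", "planning"
    responsible_team: str                # clear line of responsibility
    root_cause: Optional[str] = None     # filled in by the investigation
    remedies: list[Remedy] = field(default_factory=list)

    def closeable(self) -> bool:
        """Closure requires a documented cause and concrete remedies."""
        return self.root_cause is not None and len(self.remedies) > 0


incident = VehicleIncident("AV-2041", "perception", "autonomy-safety")
incident.root_cause = "Misclassified obstacle in low light"
incident.remedies.append(Remedy("injured pedestrian", "compensation"))
print("Closeable:", incident.closeable())
```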