Trustworthy AI
Knowledge Base

Responsible AI

Science fiction abounds with tales of the computer surpassing and, eventually, dethroning the human. In the real world, though, the need to regulate and moderate artificial intelligence is enacted through the discipline of Responsible AI (RAI).

Microsoft has established a series of RAI principles that guide the ethical development and deployment of AI. These principles (Reliability and Safety, Privacy and Security, Fairness and Inclusivity, Transparency, and Accountability) are essential to ensuring that AI is used safely within an organization. They join the AI Strategy Framework as dimensions in our Responsible AI pillar.

We need to be unequivocal here, lest organizations foolishly treat RAI as the first thing to cut when budgets tighten:

Responsible AI is not optional. Omitting it from your AI strategy is, in fact, irresponsible, and exposes the organization to intolerable levels of risk. You must either take RAI seriously or walk away from AI altogether.

This is not to say that RAI is more important than the other four pillars, but rather that the stakes differ in kind. Organizations failing to take, for example, workload prioritization seriously are likely to waste time and money. Organizations that don't take RAI seriously face the possibility of being sued and regulated out of existence.

Microsoft is researching and analyzing various RAI scenarios with the goal of defining risks and then measuring them using annotation techniques in simulated real-world interactions. This work is carried out by its product and policy teams and by the company's AI Red Team, a group of technologists and other experts who poke and prod AI systems to see where things might go wrong.