Putting Responsible AI Into Practice

As awareness grows of the risks posed by AI systems that violate legal, ethical, or cultural norms, building responsible AI and machine learning technology has become a paramount concern for organizations across all sectors. Those tasked with leading responsible-AI efforts are shifting their focus from establishing high-level principles and guidance to managing the system-level change needed to make responsible AI a reality.

Ethics frameworks and principles abound. AlgorithmWatch maintains a repository of more than 150 ethical guidelines. A meta-analysis of a half-dozen prominent guidelines identified five main themes: transparency, justice and fairness, non-maleficence, responsibility, and privacy. But even with broad agreement on the principles underlying responsible AI, how to put them into practice effectively remains unclear. Organizations are in various stages of adoption, have a wide range of internal organizational structures, and are often still determining the appropriate governance frameworks for holding themselves accountable.