How to Build a Framework Around Responsible AI

In our last blog post, we discussed the various risks around AI and ways of understanding them. This piece walks through building a program around responsible AI and will help you draft a framework for your organization.


Building a program around responsible AI requires careful planning and scrutiny. Responsible AI is a complex subject that demands input and collaboration from a wide range of groups within a company, and it takes time for a business to understand, evaluate, and implement such a program. Making it a core competency rather than a mere compliance exercise requires foresight and planning.


Framework

One key component of a responsible AI program is developing a framework or adopting an existing one. As AI and ML become increasingly ubiquitous, it is crucial to establish a framework that guides the creation, adoption, and use of AI.


There are several efforts to establish reference frameworks, set out principles, and lay out recommendations for addressing and mitigating the risks around AI. These frameworks and reference models are constructed by experts from diverse industries and are a good place to start.
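
To make this concrete, one way a team might operationalize whichever framework it adopts is to translate its principles into a machine-readable checklist that gates model releases. The sketch below is purely illustrative: the class names, the sample principles, and the release gate are our own assumptions, not part of any published framework.

```python
from dataclasses import dataclass, field


# Hypothetical checklist item derived from a reference framework's principles.
@dataclass
class ChecklistItem:
    principle: str          # e.g. "Transparency", "Fairness"
    requirement: str        # what the team must demonstrate
    satisfied: bool = False
    evidence: str = ""      # link or note pointing to supporting artifacts


@dataclass
class ResponsibleAIChecklist:
    model_name: str
    items: list[ChecklistItem] = field(default_factory=list)

    def unmet(self) -> list[ChecklistItem]:
        """Return the requirements that still lack evidence."""
        return [item for item in self.items if not item.satisfied]

    def ready_for_release(self) -> bool:
        """A simple gate: every requirement must be satisfied."""
        return not self.unmet()


# Example usage: seed the checklist with a few illustrative principles.
checklist = ResponsibleAIChecklist(
    model_name="credit-scoring-v2",
    items=[
        ChecklistItem("Transparency", "Model card published for reviewers"),
        ChecklistItem("Fairness", "Disparate impact measured across groups"),
        ChecklistItem("Accountability", "Named owner for incident response"),
    ],
)

checklist.items[0].satisfied = True
checklist.items[0].evidence = "docs/model_cards/credit-scoring-v2.md"

for item in checklist.unmet():
    print(f"Unmet: {item.principle} - {item.requirement}")
print("Ready for release:", checklist.ready_for_release())
```

Whether the gate blocks a release or merely flags it for review is a policy decision each organization would make for itself; the point is that a framework's principles become easier to enforce once they are recorded as explicit, auditable requirements.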


The European Commission has proposed a legal framework that addresses the risks around AI in order to promote trustworthy applications of the technology. The proposal also includes the establishment of a European Artificial Intelligence Board to oversee regulatory requirements in the region. In the US, the National Institute of Standards and Technology (NIST) recently requested public input on a guidance document to help developers, users, and evaluators improve the trustworthiness of AI systems.

