
How to Build a Framework Around Responsible AI?

In our last blog post, we discussed the various risks around AI and ways of understanding them. This piece talks about building a program around responsible AI and will help you draft a framework for your organization.


Building a program around responsible AI requires careful planning and scrutiny. Responsible AI is a complex subject that requires input and collaboration across wide groups of people within a company. It takes time for businesses to understand, consider, analyze and then implement a program around responsible AI. Making this program a core competency instead of just a compliance exercise requires foresight and planning.


Framework

One key component of a responsible AI program is developing a framework or adopting an existing one. As AI and ML become increasingly ubiquitous, it is crucial to establish a framework that guides the creation, adoption, and use of AI.


There are several efforts to establish reference frameworks and principles and to lay out recommendations for addressing and mitigating the risks around AI. These frameworks and reference models are constructed by experts from diverse industries and are a good place to start.


The European Commission has proposed a legal framework to address the risks around AI and promote trustworthy application of the technology. The plan also includes the establishment of a European Artificial Intelligence Board for managing regulatory requirements in the region. In the US, the National Institute of Standards and Technology (NIST) recently requested public input for drafting a guidance document to help developers, users, and evaluators improve the credibility of AI systems.



However, the numerous use cases of AI make it very difficult to come up with a single framework that works for all threats and risks. Each organization's unique set of challenges and risks makes it difficult to simply borrow and adapt these frameworks. Organizations need to produce their own guidelines, tailored to their culture, ethical framework, risks, and needs.


The requirements for an AI model will differ based on the industry it is intended for, the type of data fed to it, its purpose, and so on. As customers, regulators, and other stakeholders ask questions about the organization's use of AI, it is important to be able to provide answers and demonstrate ongoing governance and regulatory compliance. The framework fundamentally enables oversight: it defines clear roles and responsibilities and articulates the requirements and goals of the program.
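
For example, a lightweight model record, kept up to date for every deployed model, is one way a framework can make that oversight concrete. The sketch below is purely illustrative; the field names, values, and figures are hypothetical and would be shaped by your own governance requirements.

    # A minimal, hypothetical "model record" a governance program might require
    # teams to file for every deployed model, so that questions from customers
    # or regulators can be answered from a single source of truth.
    model_record = {
        "name": "loan_default_classifier",        # hypothetical model
        "owner": "credit-risk-ml@example.com",    # accountable team
        "intended_use": "Pre-screening of consumer loan applications",
        "training_data": "Internal applications, 2018-2022, PII removed",
        "evaluation": {"auc": 0.81, "fairness_gap": 0.03},  # illustrative figures
        "limitations": "Not validated for small-business lending",
        "last_review": "2024-01-15",
        "approved_by": "Model Risk Committee",
    }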


The framework needs to be grounded in the following principles:


1. Interpretability and Explainability:

The AI system should be transparent and allow users to understand, inspect, and reproduce the mechanisms through which its decisions are made. Good governance mechanisms are needed to ensure that the data used to train algorithms is collected and managed fairly, and that any shortcomings in the data are addressed to minimize bias. Opacity in the system will prevent it from being assessed objectively. To trust AI, organizations need models whose decisions can be explained and verified.
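
As a minimal sketch of what such inspection can look like in practice, the example below uses scikit-learn's permutation importance on a public dataset to ask which inputs actually drive a model's decisions. The dataset and model here are stand-ins, not a recommendation for any particular technique or tool.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Stand-in data and model; any tabular classifier could be inspected this way.
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Which features actually drive the model's decisions on held-out data?
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
    for name, score in ranked[:5]:
        print(f"{name}: {score:.3f}")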


2. Robustness and Security:

All systems, including AI systems, have inherent vulnerabilities. AI systems should therefore be designed with adequate safeguards so that they continue to operate when faced with uncertainty and in the presence of active threats. AI models can be attacked to make them behave erroneously: carefully crafted inputs can fool a trained model, and manipulating the training data or training methods can cause a system to learn and exhibit the wrong behaviors.
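
One well-known illustration of such attacks is the fast gradient sign method (FGSM), which nudges an input in the direction that most increases the model's loss. The sketch below assumes a PyTorch classifier and a hypothetical perturbation budget epsilon; it is meant only to show how little it can take to change a prediction.

    import torch

    def fgsm_perturb(model, loss_fn, x, y, epsilon=0.01):
        """Return a copy of input x perturbed to increase the model's loss (FGSM)."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        loss.backward()
        # Step in the sign of the gradient, staying within an epsilon budget.
        return (x_adv + epsilon * x_adv.grad.sign()).detach()

    # Usage (hypothetical model and batch):
    # x_adv = fgsm_perturb(model, torch.nn.CrossEntropyLoss(), x_batch, y_batch)
    # print(model(x_batch).argmax(1), model(x_adv).argmax(1))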


Attackers can also probe AI models to extract information about their training data, potentially revealing private or confidential records. The probability and impact of any adversarial attack will depend on your organization and the type of AI system. The overall goal should be to minimize security risks and build trust in the derived results.
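
A simple way to gauge this kind of leakage is to compare the model's confidence on records it was trained on against records it has never seen; if the two are easy to tell apart, an attacker could exploit the gap. The sketch below scores that separability as an AUC, where values near 0.5 suggest little leakage; the confidence arrays are assumed to come from your own evaluation pipeline.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def membership_leakage_auc(train_conf, holdout_conf):
        """AUC of a naive membership-inference attack based on model confidence."""
        scores = np.concatenate([train_conf, holdout_conf])
        labels = np.concatenate([np.ones(len(train_conf)), np.zeros(len(holdout_conf))])
        # ~0.5 means the attacker cannot distinguish members; closer to 1.0 means more leakage.
        return roc_auc_score(labels, scores)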


3. Fairness and Equity (or lack of bias):

When AI is used for critical tasks such as recruitment or loan processing, any existing bias within the system can be amplified, with wide-scale implications. A fair and inclusive dataset, one that accounts for social, cultural, political, legal, and other attributes, is necessary to ensure equitable treatment of individuals.


It is important to ensure that training datasets contain diverse examples and are free from bias. To avoid bias and prejudice, the datasets upon which AI is built must not exclude information from a specific demographic or region. The use of flawed data could result in discrimination based on race, gender, or other protected classes. Conducting rigorous tests to make sure the model is working as intended will help surface bias and ensure fairness in the overall process.
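
A concrete starting point for such tests is to compare outcome rates across groups. The sketch below computes a simple demographic parity gap with pandas; the column names are hypothetical, and this single number is only one of many fairness metrics you might track.

    import pandas as pd

    def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
        """Difference between the highest and lowest positive-outcome rates across groups."""
        rates = df.groupby(group_col)[outcome_col].mean()
        return float(rates.max() - rates.min())

    # Usage (hypothetical columns): a gap near 0 means similar approval rates across groups.
    # gap = demographic_parity_gap(predictions_df, group_col="gender", outcome_col="approved")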


4. Social Impact:

If applied incorrectly, AI can pose challenges around privacy, freedom of expression, and discrimination. Biased AI can negatively impact vulnerable groups when used in criminal justice, education, credit approvals, and similar domains. Such applications can widen the social divide and could potentially lead to social unrest.


5. Privacy and Accountability:

As AI models learn from data, it is crucial to pay attention to the kind of data being used and to ensure that it does not infringe upon the privacy of individuals. The use of sensitive data should be checked against legal, regulatory, and social limitations, and the data collected should be used only for its intended purpose. Appropriate steps should be taken to ensure that the AI model does not reveal its underlying data while making decisions.
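
One common safeguard is to release only aggregate statistics with calibrated noise, in the spirit of differential privacy. The sketch below adds Laplace noise to a clipped mean; the bounds and epsilon are hypothetical and would need to be chosen for your data and risk tolerance.

    import numpy as np

    def dp_mean(values, lower, upper, epsilon=1.0, rng=None):
        """Release the mean of `values` with Laplace noise calibrated to its sensitivity."""
        if rng is None:
            rng = np.random.default_rng()
        values = np.clip(np.asarray(values, dtype=float), lower, upper)
        sensitivity = (upper - lower) / len(values)  # max effect of one record on the mean
        return values.mean() + rng.laplace(scale=sensitivity / epsilon)

    # Usage (hypothetical incomes bounded to [0, 200_000]):
    # print(dp_mean(incomes, lower=0, upper=200_000, epsilon=0.5))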


When it comes to privacy and risk, the organization needs to demonstrate accountability in order to gain the trust of all its stakeholders. Set up teams that are accountable for the different types of business risk anticipated for the model. The program should address the requirements for oversight, challenge, and assurance during development as well as deployment.


Conclusion

With continual innovation in AI and ML, it is difficult to regulate the technology from a privacy perspective. The line between the real and the digital world is becoming ever more blurred.


Recently, Apple faced a privacy breach lawsuit alleging that its Siri voice assistant intercepted users’ private conversations. Such cases highlight the fact that we are at a point where we need to reflect on what our AI model is capable of and when it should learn to stop.


Privacy is undoubtedly the next big challenge in the development of AI. With private information publicly available at the click of a button, we need to ask ourselves: where do we draw the line? How much is too much information? When do we consider it an invasion of privacy?


We’ll talk more about privacy in AI in our next blog post.
