How we can employ AI without compromising privacy
In 2023, we saw a stark increase in the internal use of AI within companies, with a majority of organizations now using AI in some capacity. A study conducted by IBM in early 2023, surveying more than 3,000 CEOs across 24 industries, found that 43% of the CEOs use generative AI to inform strategic decisions, while 36% use AI for operational decisions. What stood out, however, was that while 69% of the CEOs recognized the benefits of AI across the organization, only 29% claimed to have the expertise to adopt AI internally.
AI is still a relatively new and evolving technology, and as a result, companies and lawmakers alike are still working to understand the risks and ethical harms it can cause. While many companies have already started using AI, it is often done without proper expertise, guidance, and frameworks in place.
As we consider introducing AI, we need to weigh the risks against the overall benefits, making sure that the benefits outweigh the risks and that the risks are managed. This is especially important while complying with the laws being introduced on AI. We have covered them here - AI laws and policies to watch out for. When talking about risks while implementing AI, we see that there are multiple kinds, including: 1) cybersecurity risks, such as prompt injection and data poisoning; 2) privacy risks, such as the collection of data without consent for AI training and a lack of transparency about how collected data is used; and 3) discrimination and ethical risks. For example, in 2018, it was found that Amazon’s recruiting algorithm penalized applications that contained the word “women’s.” Data sources or data that are inaccurate or inherently biased can produce biased results. In our article, Implementing Chatbots, we used the implementation of AI chatbots as an example to illustrate the risks associated with using such technology.
Considering the focus given to privacy by lawmakers globally and the privacy risks associated with AI, it is important for us to ensure that privacy remains a core feature of any business when employing AI models. Here are some best practices to keep in mind:
1. Communicating the importance of Privacy: It is beneficial for all stakeholders to understand the importance of privacy from the outset. Maintaining a focus on privacy while discussing the design, development, evaluation, standards, limitations, and risks of AI is important. A shared understanding of the standards, requirements, progress, and issues among all parties is critical.
2. Adequate and Appropriate Testing: AI tools should be tested to ensure that they meet their intended standards as well as the requirements laid down by applicable regulations. Testing for unintentional bias in areas such as race, gender, sexual orientation, and religion should be given importance. The tools should also be tested for consequential impacts and discriminatory bias.
3. Diverse Red Teaming: Red teaming is a method of adversarial testing used to identify potential weaknesses and areas of improvement. Having diverse red teams when testing AI is beneficial, as members from different backgrounds, races, genders, etc. bring varied perspectives to their feedback and critique.
4. Strong Risk Management: Processes should be in place to identify and manage internal and external risks. Impact assessments and internal evaluations should be conducted regularly, covering whether the tool meets its intended purposes, the type of data being used to train it, and the identified risks and mitigation strategies.
5. Control over Data Sets: It is crucial to use diverse data sets that represent different backgrounds, people, and places when training AI models. Controls should be in place to determine what data does and does not get used. Including data from marginalized and underrepresented communities is important.
6. Ensure Content Traceability: Companies should be able to trace how content is generated to ensure transparency and accountability. Further, explanations of how the tool works, its intended purposes, how it was trained, and its limitations and uses should be made available. Notices concerning the privacy of individuals and how data is collected and used within the models should be available for users to access.
7. Human Involvement: Human oversight is required to understand and prevent unfair and biased practices and results. Human accountability is necessary both to prevent AI-related risks and to improve the quality of AI models.
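One concrete way to approach the bias testing described above is the widely used "four-fifths" disparate impact check: compare the rate of favorable outcomes between demographic groups and flag ratios below 0.8. Here is a minimal sketch in Python; the group names and outcome data are hypothetical.

```python
from collections import Counter

def selection_rates(outcomes):
    """Compute the favorable-outcome rate per group.

    `outcomes` is a list of (group, selected) pairs, where `selected`
    is True when the model produced a favorable result.
    """
    totals, positives = Counter(), Counter()
    for group, selected in outcomes:
        totals[group] += 1
        if selected:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(outcomes, protected, reference):
    """Ratio of the protected group's selection rate to the reference
    group's rate; values below 0.8 fail the common four-fifths rule."""
    rates = selection_rates(outcomes)
    return rates[protected] / rates[reference]

# Hypothetical screening results: (group, was_selected)
results = (
    [("group_a", True)] * 60 + [("group_a", False)] * 40 +
    [("group_b", True)] * 30 + [("group_b", False)] * 70
)
ratio = disparate_impact(results, protected="group_b", reference="group_a")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.30 / 0.60 = 0.50, below 0.8
```

A check like this is only a starting point; it should complement, not replace, deeper testing for consequential impacts.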
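The control over data sets discussed above can begin with a simple representation report that flags groups falling below a minimum share of the training data, so gaps can be addressed before training. A rough sketch, assuming each record carries a demographic attribute; the field names and the 10% threshold are illustrative.

```python
from collections import Counter

def representation_report(records, group_key, min_share=0.10):
    """Flag groups whose share of the training data falls below
    `min_share`, surfacing under-represented groups before training."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {
        group: {"share": round(n / total, 3),
                "underrepresented": n / total < min_share}
        for group, n in counts.items()
    }

# Hypothetical training records with a demographic attribute
data = (
    [{"region": "north"}] * 70 +
    [{"region": "south"}] * 25 +
    [{"region": "east"}] * 5
)
report = representation_report(data, "region")
print(report)  # "east" is flagged at a 5% share
```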
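Content traceability, as described above, can be supported by storing provenance metadata alongside each generated output. The sketch below bundles content and prompt hashes, model details, and a timestamp into a record that could be kept for later audits; the model name, version, and data-source labels are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(output_text, model_name, model_version, prompt, data_sources):
    """Bundle a generated output with metadata describing how it was
    produced, so the content can be traced back later."""
    return {
        "content_sha256": hashlib.sha256(output_text.encode("utf-8")).hexdigest(),
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "model": {"name": model_name, "version": model_version},
        "training_data_sources": data_sources,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }

record = provenance_record(
    output_text="Your claim has been approved.",
    model_name="internal-support-bot",  # hypothetical model name
    model_version="1.4.2",
    prompt="Summarise the claim decision for the customer.",
    data_sources=["2022 support tickets (consented)", "public FAQ pages"],
)
print(json.dumps(record, indent=2))
```

Hashing the content rather than storing it verbatim also keeps the audit log itself from becoming an additional store of personal data.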
AI has established itself as the way forward, and it is widely accepted that its use will be crucial to business success in the future. In the race to employ AI internally, we must also ensure that a Privacy by Design approach is established and maintained within the organization, as Privacy by Design is equally important to a business’s future success. Because employing AI requires expertise from different sectors and departments of the company, a cross-functional committee is needed, and its input should be sought while introducing and maintaining efficient AI systems. To help organizations combat AI risks, the National Institute of Standards and Technology (NIST), as directed by Congress, released its Artificial Intelligence Risk Management Framework (AI RMF 1.0). A summary and key takeaways from the same can be found here - AI RMF – Summary and Key Takeaways (merudata.com). Artificial Intelligence cannot be made sustainable if privacy is not made a priority.