AI Governance
Artificial Intelligence systems can process and use many different kinds of information, sometimes including Personal Information (PI). It is essential for organizations that use AI in any of their business processes to assess and quantify the risks involved in its usage. AI Governance is necessary to mitigate the risks posed by incomplete, misrepresented, inappropriate, unfair, or discriminatory data.
At Meru, we believe in embracing new technology and its challenges. We understand that the aim for businesses is not to avoid using AI or to stop its deployment; rather, it is to maximise its benefits while mitigating its risks, and that's what we're here to help with.
AI Laws and Policies
- EU AI Act
- Utah AI Policy
- Colorado Artificial Intelligence Act (SB205)
- New York City Local Law 144
- The Artificial Intelligence and Data Act (Canada)
Certifications and Standards
- ISO/IEC JTC 1/SC 42
- IEEE P7000 series
- RAI Institute’s Certification Program for AI Systems
AI Frameworks
- NIST AI Risk Management Framework
- OECD Framework for the Classification of AI Systems
At Meru Data, we combine our extensive knowledge of the legal and regulatory landscape with our technical expertise to help you align your AI systems with regulatory requirements and your business goals.
AI Audits and Assessments:
- Conducting audits and assessments of your AI systems, data sources, the nature of your data, and practices around the use of AI systems
- Checking for biases and discriminatory data
Risk Management:
- Assessing, understanding and evaluating potential AI risks and their impacts
- Building and implementing strategies to categorize, avoid, mitigate and manage risks, keeping in mind business goals and applicable regulations
- Implementing frameworks like the NIST AI Risk Management Framework to manage risks
Policy Building:
- Designing, creating and maintaining your AI policies
Ensuring compliance with applicable regulations:
- Adopting best practices to ensure compliance with regulations like the EU AI Act, Utah AI Policy, etc.
- Ensuring Automated Decision-Making (ADM) algorithms are compliant with legal requirements
Our Approach
Our first step is understanding the different AI tools that exist within your organization, the data inputs and outputs of your AI systems, and how data is used to train models. With this information, we gain a deep understanding of the risks associated with your AI usage and build a governance strategy accordingly.
We then score and categorize your AI risks. Different mitigation controls and plans are implemented based on how the risks are prioritized, and we apply AI risk management frameworks such as the NIST AI Risk Management Framework wherever needed. Mitigation strategies are implemented keeping in mind your business goals, applicable regulations and the industry your business operates in.
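As a simplified illustration of what risk scoring and prioritization can look like, the sketch below builds a small risk register and ranks entries by likelihood and impact. The system names, scoring scale, and priority thresholds are illustrative assumptions, not a prescribed methodology; in practice these are tailored to your business goals, regulations and industry.

```python
from dataclasses import dataclass

# Illustrative only: the 1-5 scales and thresholds below are assumptions, not a standard.
@dataclass
class AIRisk:
    system: str        # AI tool or model the risk relates to
    description: str   # what could go wrong
    likelihood: int    # 1 (rare) to 5 (almost certain)
    impact: int        # 1 (negligible) to 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

def categorize(risk: AIRisk) -> str:
    """Map a raw score onto a simple priority band (example thresholds)."""
    if risk.score >= 15:
        return "High"
    if risk.score >= 8:
        return "Medium"
    return "Low"

register = [
    AIRisk("Resume screener", "Potential bias against protected groups", likelihood=4, impact=5),
    AIRisk("Support chatbot", "Personal Information in training data", likelihood=3, impact=4),
    AIRisk("Sales forecaster", "Model drift degrading accuracy", likelihood=2, impact=2),
]

# Highest-priority risks first, so mitigation plans can be sequenced accordingly.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{categorize(risk):6} {risk.score:2}  {risk.system}: {risk.description}")
```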
Finally, it's important to monitor model health, accuracy, bias and overall quality, develop AI policies, and conduct training and upskilling programs company-wide.
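A lightweight monitoring check like the one sketched below can flag drops in accuracy or gaps in outcomes across groups before they become compliance issues. It assumes you can log each decision's prediction, actual outcome and a group attribute; the metric shown (a demographic parity gap) and the alert thresholds are illustrative choices, not a complete fairness audit.

```python
# Illustrative monitoring sketch: assumes logged (prediction, outcome, group) records.
records = [
    (1, 1, "A"), (0, 0, "A"), (1, 0, "A"), (1, 1, "A"),
    (0, 1, "B"), (0, 0, "B"), (1, 1, "B"), (0, 0, "B"),
]

# Overall accuracy: share of predictions that matched the actual outcome.
accuracy = sum(pred == actual for pred, actual, _ in records) / len(records)

def positive_rate(group: str) -> float:
    """Share of records in a group that received a positive prediction."""
    preds = [pred for pred, _, g in records if g == group]
    return sum(preds) / len(preds)

# Demographic parity gap: difference in positive-prediction rates between groups.
parity_gap = abs(positive_rate("A") - positive_rate("B"))

print(f"Accuracy: {accuracy:.2f}, parity gap: {parity_gap:.2f}")
if accuracy < 0.80 or parity_gap > 0.20:   # example thresholds
    print("Alert: review model health and potential bias")
```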
Resources
Find interesting topics and news related to AI and AI Governance.