AI Governance

Artificial Intelligence systems process many different kinds of data, which can include personal information (PI). It is essential for organizations that use AI in any business process to assess and quantify the risks around that use. AI governance is necessary to mitigate the risks of incomplete, misrepresented, inappropriate, unfair, or discriminatory data.

At Meru, we believe in embracing new technology and its challenges. We understand that the aim for businesses is not to avoid AI or halt its deployment; rather, it is to maximize its benefits while mitigating its risks, and that's what we're here to help with.

AI Laws and Policies

  • EU AI Act

  • Utah AI Policy

  • Colorado Artificial Intelligence Act (SB205)

  • New York Local Law (NYC144)

  • The Artificial Intelligence and Data Act (Canada)

Certifications and Standards

  • ISO/IEC JTC 1/SC 42

  • IEEE p7000 series

  • RAI Institute’s Certification Program for AI Systems

AI Frameworks

  • NIST AI Risk Management Framework

  • OECD Framework for the Classification of AI Systems

We believe in taking a risk-based approach to designing and building an AI governance framework for your organization. This requires understanding the jurisdictions you operate in and your AI use cases in order to build your risk profile and tolerance. Once we understand these, we can help your business develop a robust, scalable AI governance framework that can be easily deployed across products and jurisdictions.

AI Audits and Assessments:

  • Conducting audits and assessments of your AI systems, data sources, data nature, and practices around the use of AI systems

  • Checking for biases and discriminatory data

Risk Management:

  • Assessing, understanding, and evaluating potential AI risks and their impacts

  • Building and implementing strategies to categorize, avoid, mitigate, and manage risks while keeping in mind business goals and applicable regulations 

  • Implementing frameworks such as the NIST AI Risk Management Framework to manage risks

Policy Building:

  • Designing, creating, and maintaining your AI policies

Ensuring Compliance with Required Regulations:

  • Adopting best practices to ensure compliance with regulations such as the EU AI Act, Utah AI Policy, etc.

  • Ensuring Automated Decision-Making (ADM) algorithms are compliant with legal requirements

Our Approach

Our first step is understanding the AI tools that exist within your organization, the data inputs and outputs of each AI system, and how data is used to train models. With this information, we gain a deep understanding of the kinds of risks associated with your AI usage and build a governance strategy accordingly.

We then score and categorize your AI risks, and implement mitigation controls and plans based on how the risks are prioritized. We also apply AI risk management frameworks such as the NIST AI Risk Management Framework where needed. Mitigation strategies are implemented with your business goals, applicable regulations, and your industry in mind.
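To make the scoring and categorization step concrete, here is a minimal sketch of a likelihood-by-impact risk matrix. The class names, scales, thresholds, and example risks are illustrative assumptions for this sketch, not part of any specific methodology or regulation.

```python
# Illustrative sketch only: a simple likelihood x impact risk matrix.
# Scales, thresholds, and example risks are hypothetical.
from dataclasses import dataclass


@dataclass
class AIRisk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        # Classic risk-matrix score: likelihood times impact.
        return self.likelihood * self.impact

    @property
    def category(self) -> str:
        # Illustrative cut-offs for prioritization tiers.
        if self.score >= 15:
            return "high"
        if self.score >= 8:
            return "medium"
        return "low"


def prioritize(risks: list[AIRisk]) -> list[AIRisk]:
    """Return risks sorted highest score first."""
    return sorted(risks, key=lambda r: r.score, reverse=True)


risks = [
    AIRisk("biased training data", likelihood=4, impact=5),
    AIRisk("model drift", likelihood=3, impact=3),
    AIRisk("PI leakage in outputs", likelihood=2, impact=5),
]
for r in prioritize(risks):
    print(f"{r.name}: score={r.score} ({r.category})")
```

In practice the scales, thresholds, and risk inventory would be tailored to your use cases, jurisdictions, and risk tolerance.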

Finally, we monitor model health, accuracy, bias, and overall quality, and use those findings to develop AI policies and to run company-wide training and upskilling programs.
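As one example of what such monitoring can look like, the sketch below compares a model's positive-prediction rate across groups (a simple demographic-parity gap). The group names, data, and alert threshold are hypothetical and purely illustrative.

```python
# Illustrative bias-monitoring sketch: demographic-parity gap.
# Group names, predictions, and the threshold are hypothetical.

def positive_rate(predictions: list[int]) -> float:
    """Fraction of predictions that are positive (1)."""
    return sum(predictions) / len(predictions)


def parity_gap(preds_by_group: dict[str, list[int]]) -> float:
    """Largest difference in positive rates between any two groups."""
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)


preds = {
    "group_a": [1, 1, 0, 1],  # 75% positive
    "group_b": [1, 0, 0, 0],  # 25% positive
}
gap = parity_gap(preds)
if gap > 0.2:  # illustrative alert threshold
    print(f"bias alert: parity gap {gap:.2f}")
```

A production setup would track metrics like this over time, alongside accuracy and drift checks, and feed alerts back into policy and retraining decisions.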

Resources

Find interesting topics and news related to AI and AI Governance.
