A Brief Summary of the Colorado AI Act
Colorado’s Artificial Intelligence Act (Senate Bill 24-205) was signed into law on May 17, 2024, by Colorado Governor Jared Polis, with its requirements taking effect on February 1, 2026. It is a comprehensive legislative framework that serves as a model for other states considering their own AI regulations, and it governs the development, deployment, and use of artificial intelligence technologies in Colorado.
Key Definitions
Artificial Intelligence- Defined as any technology capable of performing tasks or exhibiting behaviors that typically require human intelligence, such as problem-solving, decision-making, and learning.
Impact Assessment- A comprehensive analysis conducted to understand and evaluate the potential and actual effects of an AI system on individuals, communities, and society, including considerations of ethics, privacy, fairness, and security.
Audit- A systematic, independent examination of AI systems to determine whether they are operating in compliance with established standards, laws, and ethical guidelines.
Developers- Individuals or teams involved in the creation, coding, and testing of software applications or systems.
Deployers- Individuals or teams responsible for distribution, installation, configuration, and maintenance of software applications or systems in a production environment.
High-Risk Artificial Intelligence System- An AI system that makes, or is a substantial factor in making, a “consequential decision.”
Consequential Decision- A decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of, education or an education opportunity, employment or an employment opportunity, a financial or lending service, an essential government service, healthcare services, housing, insurance, or a legal service.
Algorithmic Discrimination- Unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, or other protected classifications.
Scope
The Act aims to ensure the fair use of AI systems by mandating measures to prevent and mitigate bias in AI algorithms, so that AI applications do not perpetuate discrimination or unfair practices. It applies to a wide range of AI systems and to businesses and organizations operating in Colorado, across sectors such as healthcare, finance, education, public services, and law enforcement.
Provisions
Accountability: Organizations deploying AI must adhere to standards for audits and impact assessments, ensuring that AI systems are being used responsibly and ethically. They are also held accountable for actions and decisions made by their AI systems.
Transparency: The Act requires entities using AI systems to disclose when and how AI is being used, especially in decision-making processes that significantly affect individuals. Additionally, organizations must disclose detailed information about their data sources and algorithms.
Fairness: The Act mandates measures to prevent and mitigate biases in AI algorithms, ensuring that AI applications do not perpetuate discrimination or unfair practices. Diverse data sets, fairness metrics, and collaborative development must be part of every AI process within the organization.
Obligations
The Colorado AI Act imposes specific obligations on developers and deployers to ensure the ethical, fair, and transparent development, deployment, and use of AI systems.
Developers must adhere to the following requirements:
1. Protect Consumers from Algorithmic Discrimination:
Exercise reasonable care to safeguard consumers from known or foreseeable risks of algorithmic discrimination. This includes ensuring that AI systems do not unlawfully discriminate based on factors such as age, color, disability, ethnicity, genetic information, language proficiency, national origin, race, religion, reproductive health, sex, veteran status, or other protected classifications.
2. Provide Comprehensive Documentation:
Supply deployers and other developers with thorough documentation about the AI system. This includes information on system uses, training data, data governance practices, known limitations, and measures taken to mitigate discrimination risks. The goal is to ensure that those who deploy or further develop the AI systems are fully aware of potential risks and how to address them.
3. Public Disclosure and Risk Notification:
Publicly disclose the types of high-risk AI systems developed and how the developer manages known or reasonably foreseeable risks of algorithmic discrimination. This includes making a clear and readily available statement on the developer's website or in a public use case inventory.
Deployers must adhere to the following requirements:
1. Implement a Risk Management Program:
Develop and maintain a risk management policy and program tailored to high-risk AI systems. This program should align with the National Institute of Standards and Technology (NIST) AI Risk Management Framework or another nationally or internationally recognized AI risk management framework, and should address the identification, assessment, and mitigation of risks associated with the deployment of AI systems; one loose illustration of how such a program might be organized appears after these obligations.
2. Conduct and Update Annual Impact Assessments:
Perform comprehensive impact assessments at least annually and within 90 days of any intentional and substantial modification to the AI system (one way to track these elements and deadlines is sketched after this list of obligations). Maintain records of these assessments for at least three years following the final deployment of the system; regular reviews help ensure that the AI system does not cause algorithmic discrimination. These assessments must cover:
The system's purpose and intended use cases.
Known or foreseeable risks and limitations.
Data inputs and outputs.
Performance metrics and transparency measures.
3. Notify Consumers and Authorities:
Inform consumers about the deployment of high-risk AI systems, especially when these systems make consequential decisions affecting them. Notifications should include:
The purpose of the high-risk AI system and the nature of the consequential decision.
Opt-out rights for consumers.
In case of an adverse decision, provide reasons for the decision, opportunities to correct incorrect personal data processed by the AI system, and the chance to appeal.
Disclose any discovered instances of algorithmic discrimination to the Attorney General within 90 days of discovery, along with details of the implemented risk management policy. This ensures that any potential issues are promptly communicated to the relevant authorities.
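The Act does not prescribe any particular format or tooling for the assessments and notifications above, but as an illustration only, a deployer could track the assessment elements and the recurring deadlines with a simple internal record. The field names and helpers below are hypothetical, a minimal sketch under those assumptions rather than a compliance tool.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Hypothetical record of the impact-assessment elements listed above, plus
# helpers for the annual cadence, the 90-day post-modification reassessment,
# the three-year record retention, and the 90-day Attorney General disclosure
# window. Field names are illustrative; the Act does not mandate any schema.
@dataclass
class ImpactAssessment:
    system_name: str
    purpose: str                            # the system's purpose and intended use cases
    intended_use_cases: list[str]
    known_risks_and_limitations: list[str]  # known or foreseeable risks and limitations
    data_inputs: list[str]                  # data inputs and outputs
    data_outputs: list[str]
    performance_metrics: dict[str, float]   # performance metrics and transparency measures
    transparency_measures: list[str]
    assessment_date: date = field(default_factory=date.today)

    def next_annual_review(self) -> date:
        """Assessments must be refreshed at least annually."""
        return self.assessment_date + timedelta(days=365)

    def reassessment_deadline(self, modified_on: date) -> date:
        """New assessment due within 90 days of an intentional, substantial modification."""
        return modified_on + timedelta(days=90)

    def retain_records_until(self, final_deployment: date) -> date:
        """Keep records for at least three years after final deployment (approximate window)."""
        return final_deployment + timedelta(days=3 * 365)


def ag_disclosure_deadline(discovered_on: date) -> date:
    """Latest date to notify the Attorney General after discovering algorithmic discrimination."""
    return discovered_on + timedelta(days=90)
```

A record like this could feed both an annual review calendar and the three-year retention schedule, while the standalone helper tracks the 90-day Attorney General disclosure window.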
By adhering to these obligations, deployers play a crucial role in managing the risks associated with AI systems, protecting consumer rights, and maintaining the ethical standards mandated by the Colorado AI Act.
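As a loose illustration of the risk management program described in the first deployer obligation (the Act only requires alignment with a recognized framework and prescribes no specific structure), such a program could be organized around the four NIST AI Risk Management Framework functions. The control names below are hypothetical.

```python
# Hypothetical mapping of internal controls to the four NIST AI Risk
# Management Framework functions (Govern, Map, Measure, Manage). The
# control names are illustrative, not prescribed by the Act or by NIST.
RISK_MANAGEMENT_PROGRAM: dict[str, list[str]] = {
    "Govern":  ["AI use policy", "roles and escalation paths", "record retention schedule"],
    "Map":     ["inventory of high-risk AI systems", "intended-use and context documentation"],
    "Measure": ["bias and performance testing", "annual impact assessments"],
    "Manage":  ["mitigation plans", "consumer notification and Attorney General reporting workflow"],
}

def uncovered_functions(program: dict[str, list[str]]) -> list[str]:
    """Return any framework function that has no controls assigned yet."""
    return [function for function, controls in program.items() if not controls]
```

Grouping controls this way makes it easy to spot framework functions with no assigned controls before an audit or impact assessment.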
Additional obligations under the Act:
Regular audits are mandated for entities using AI systems to ensure transparency, fairness, and ethical use. These audits must be submitted periodically to the relevant authorities. Additionally, comprehensive impact assessments evaluating the potential effects of AI systems on human rights, privacy, and civil liberties are required.
Non-compliance with the Colorado AI Act can result in penalties including fines, restrictions on AI usage, and other corrective measures; violations are treated as unfair trade practices enforceable by the Colorado Attorney General. Persistent or severe violations can lead to even more stringent penalties. The Act also outlines requirements for the collection, storage, and use of data in AI systems, emphasizing data anonymization, minimization, and protection to ensure privacy and security.
Organizations are held accountable for the actions and decisions made by their AI systems, which includes establishing mechanisms for redress and remediation if AI systems cause harm or violate individuals' rights.
The Act aims to enhance transparency, accountability, and consumer protection. By adhering to these obligations and regulations, developers and deployers play crucial roles in fostering responsible AI use, protecting consumer rights, and upholding ethical standards in a rapidly evolving AI landscape, in line with the Act's broader goals.
Business Impact
To comply with the Act, businesses would benefit greatly from investing in legal and compliance teams to interpret and implement its requirements. They can also establish a robust data governance framework to ensure that the data used by AI systems is collected, processed, stored, and shared in compliance with the Act; one illustrative data-handling control is sketched below.
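As one illustrative data-handling control (the Act does not specify any particular technique, and the field whitelist, identifier name, and hashing approach below are assumptions), a data pipeline could minimize each record to the fields an AI system actually needs and pseudonymize direct identifiers before use:

```python
import hashlib

# Illustrative data-minimization step: keep only the fields the AI system
# actually needs and replace the direct identifier with a salted hash.
# This is a sketch of one possible control, not a complete governance framework.
ALLOWED_FIELDS = {"age_band", "region", "credit_utilization"}  # hypothetical feature whitelist

def pseudonymize(identifier: str, salt: str) -> str:
    """Replace a direct identifier with a salted SHA-256 hash."""
    return hashlib.sha256((salt + identifier).encode("utf-8")).hexdigest()

def minimize_record(record: dict, salt: str) -> dict:
    """Drop fields outside the whitelist and pseudonymize the consumer ID."""
    minimized = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    if "consumer_id" in record:
        minimized["consumer_ref"] = pseudonymize(record["consumer_id"], salt)
    return minimized

if __name__ == "__main__":
    raw = {"consumer_id": "C-1042", "name": "Jane Doe", "age_band": "30-39",
           "region": "Denver", "credit_utilization": 0.42}
    print(minimize_record(raw, salt="example-salt"))
```

Salted hashing is only one pseudonymization option; a real governance framework would pair controls like this with documented retention, access, and sharing policies.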