
AI Laws and Policies to watch out for



Artificial Intelligence has become a buzzword in 2023. On a global level, we see AI being used in every sector, from healthcare to business to education. Businesses are incorporating AI into their operations as well as introducing AI-driven products and services into the market. AI is being seamlessly integrated into our daily lives, often without users' knowledge. The sudden and stark growth in the use of AI has left lawmakers racing to draft and introduce AI regulations. In 2023, we see some formidable AI regulations taking shape. Here are the ones to pay attention to as we build our frameworks around responsible AI:

The EU AI Act

The act, originally proposed by the European Commission in April 2021 and since taken up by the European Council and the European Parliament, will become the world's first comprehensive Artificial Intelligence regulation. While discussions on the act by the three institutions are still ongoing (the next round is scheduled for the 6th of December), here's what we know so far. The framework outlines four risk tiers, each with different obligations for providers and users to adhere to. These tiers are determined by the risk posed to people's health and fundamental rights; a rough sketch of how an organization might triage its systems against them follows the list.

  1. Unacceptable risk: These systems are banned with little to no exception, as they are considered a threat to humans. Examples include government social scoring, real-time remote biometric identification systems in public places, and cognitive behavioral manipulation of people.

  2. High risk: These AI systems negatively affect human safety and rights. While high-risk AI systems are not banned, their developers and users must follow strict rules. They fall into two categories: products covered by the EU's product safety legislation, and systems in eight other areas that will need to be registered in an EU database.

  3. Limited risk: The rules for these systems focus on transparency requirements that help users make properly informed decisions; it is then up to the user's discretion whether to continue using the application. Examples include applications that create or alter image, audio or video content.

  4. Minimal risk: Systems not falling into the above categories will not be subject to compliance obligations, as they pose little to no threat to human rights and safety.
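As a rough illustration, here is how an organization might triage its own AI use cases against the four tiers. The categories and the mapping are illustrative assumptions echoing the examples above, not the act's legal definitions:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "conformity assessment and EU database registration"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional obligations"

# Illustrative mapping only: the act defines the tiers in legal text,
# not keywords. The categories below echo the examples in the list above.
USE_CASE_TIERS = {
    "government_social_scoring": RiskTier.UNACCEPTABLE,
    "realtime_public_biometric_id": RiskTier.UNACCEPTABLE,
    "product_safety_component": RiskTier.HIGH,
    "employment_screening": RiskTier.HIGH,
    "synthetic_media_generation": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    # Defaulting to MINIMAL is a simplification; a real triage would
    # route unknown use cases to legal review instead.
    return USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)

print(triage("employment_screening").value)
# -> conformity assessment and EU database registration
```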

Generative AI regulations

The framework also provides obligations for providers of generative AI systems to adhere to (a small sketch of the disclosure obligation follows the list):

  1. Develop the AI systems with strong safeguards that prevent the generation of illegal content or content that otherwise breaches EU law

  2. Disclose content that has been generated by AI

  3. Document and publish summaries of the copyrighted data used for training

  4. Comply with strict transparency obligations
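The disclosure obligation, at least, is mechanical enough to sketch. A minimal, assumed shape for tagging generated output (the field names are hypothetical, not drawn from the act):

```python
from datetime import datetime, timezone

def label_generated_content(content: str, model_id: str) -> dict:
    """Wrap model output with an AI-generation disclosure.
    Field names are illustrative, not mandated by the act."""
    return {
        "content": content,
        "ai_generated": True,  # the disclosure itself
        "model": model_id,     # provenance for audit trails
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }

print(label_generated_content("A summary of the EU AI Act...", "demo-model-v1"))
```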

Depending on the risk level of a company's AI systems, it will have to adhere to different obligations, which could include:


  • Ensuring that the AI systems have been assessed by a recognized body within the EU and providing a declaration of conformity

  • Providing detailed technical documentation of the AI systems

  • Maintaining a post-market monitoring plan and providing incident reports to national supervisory bodies in case of infringements of fundamental rights

  • Conducting risk assessments regularly

High-risk systems have strict regulations to follow. These include conformity assessments for high-risk uses, meaning that before a system can be made available on the market, it must first demonstrate compliance with the law. Companies whose products fall under the product safety legislation can meet the framework's requirements through third-party assessments. High-risk products also carry technical and auditing requirements, which include:

  1. Maintaining strong lifecycle risk management systems

  2. Ensuring, through quantitative testing, that the system's outcomes align with its intended purpose

  3. Maintaining data governance frameworks to prevent bias and to ensure that the training and testing data is error-free

  4. Prioritizing transparency of system functioning and output

  5. Very importantly, incorporating human oversight
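To make those auditing requirements concrete, here is a minimal sketch of a pre-market "conformity gate". The thresholds and field names are placeholders of our own invention; the act prescribes no such code:

```python
from dataclasses import dataclass, field

@dataclass
class ConformityGate:
    # Placeholder thresholds; a real provider would set and justify its own.
    min_accuracy: float = 0.90    # 2. outcomes align with intended purpose
    max_error_rate: float = 0.01  # 3. training/testing data quality
    docs_published: bool = False  # 4. transparency of functioning and output
    human_signoff: bool = False   # 5. human oversight of the release
    findings: list = field(default_factory=list)

    def evaluate(self, accuracy: float, data_error_rate: float) -> bool:
        """Return True only if every gate passes; record findings otherwise.
        Running this on every release is one piece of requirement 1,
        lifecycle risk management."""
        if accuracy < self.min_accuracy:
            self.findings.append(f"accuracy {accuracy:.2%} below target")
        if data_error_rate > self.max_error_rate:
            self.findings.append(f"data error rate {data_error_rate:.2%} too high")
        if not self.docs_published:
            self.findings.append("technical documentation not published")
        if not self.human_signoff:
            self.findings.append("missing human sign-off")
        return not self.findings

gate = ConformityGate(docs_published=True, human_signoff=True)
if gate.evaluate(accuracy=0.94, data_error_rate=0.005):
    print("release may proceed")
else:
    print("blocked:", gate.findings)
```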

The European Parliament's obligation texts can be found here - EU AI Act Compliance Checker | The Artificial Intelligence Act

Interestingly, the Stanford HAI Center conducted a study on whether foundation model providers like OpenAI, DeepMind and Anthropic comply with the draft EU AI Act. The results can be found here. The study highlights the areas in which most organisations received poor scores, including non-disclosure of copyrighted training data, inadequate reporting of energy use, insufficient risk mitigation, and the absence of auditing and evaluation standards. It is important to note that the EU AI Act is still under negotiation, with certain areas, such as the regulation of foundation models, prohibitions, and law enforcement use, still being debated. Watch this space for updates.

UK's Artificial Intelligence White Paper

In early 2023, the UK government unveiled its take on regulating Artificial Intelligence: a white paper titled "A Pro-Innovation Approach to AI Regulation", published by the Department for Science, Innovation and Technology. The paper rests on five main principles:

  1. Safety, security and robustness: AI models should be developed and maintained to function in a secure and safe manner so that risks are managed appropriately

  2. Transparency and explainability: Communication about the system's functioning and how it is used, along with explanations of the system's decision-making process, should be provided

  3. Fairness: The AI systems should comply with the UK's existing laws, such as the Equality Act 2010 and the UK GDPR

  4. Accountability and governance: Appropriate oversight is needed to ensure that the systems are used for their intended purposes

  5. Contestability and redress: Mechanisms need to be put in place to ensure that harmful outcomes and AI-generated content can be disputed

Given the stark growth of the UK's AI industry, which contributed £3.7 billion to the economy in the last year, this guidance focuses on innovation and experimentation. The white paper differs from the EU's AI Act, which is more focused on strict measures to ensure that people's safety and fundamental human rights are protected. The white paper indicates that, at this stage at least, there are no plans to introduce legislation dealing specifically with AI; instead, the guidance centres on the five principles of AI governance. A public consultation forum has been set up by the UK Government to obtain feedback from stakeholders, with the intention of implementing a pro-innovation framework that is sustainable for the future.

AI Regulations within the U.S.

While there is no comprehensive AI regulation in the U.S., existing regulations do cover AI. For example, the CPRA introduced additional limitations on data retention, data sharing, and the use of personal information, all of which impact AI.

The California Privacy Protection Agency (CPPA), the agency established to implement and enforce the CPRA, is to issue regulations “governing access and opt-out rights with respect to businesses’ use of automated decision-making technology, including profiling and requiring businesses’ response to access requests to include meaningful information about the logic involved in such decision-making processes, as well as a description of the likely outcome of the process with respect to the consumer”. In other words, the CPPA is to regulate businesses that use AI without human intervention, requiring transparency around how decisions are made, the kind of data the AI systems rely on to make those decisions, and the impact of those decisions on consumers. In September 2023, the CPPA discussed draft regulations on AI and automated decision-making risk assessments, under which businesses would have to conduct regular risk assessments evaluating the potential negative impacts on consumers of the AI technology used.
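As an illustration of what "meaningful information about the logic" could look like in practice, here is a minimal sketch of a decision record a business might retain to answer such access requests. All field names are hypothetical; the CPPA has not prescribed any format:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AutomatedDecisionRecord:
    """Hypothetical record retained so a business can answer a consumer
    access request about an automated decision."""
    consumer_id: str
    decision: str        # e.g. "application_declined"
    data_relied_on: list # categories of personal information used
    logic_summary: str   # plain-language description of the logic involved
    likely_outcome: str  # likely impact of the process on the consumer
    timestamp: str = ""

    def to_disclosure(self) -> str:
        self.timestamp = self.timestamp or datetime.now(timezone.utc).isoformat()
        return json.dumps(asdict(self), indent=2)

record = AutomatedDecisionRecord(
    consumer_id="c-123",
    decision="application_declined",
    data_relied_on=["credit history", "income"],
    logic_summary="Score fell below threshold on an automated credit model",
    likely_outcome="Account cannot be opened without manual review",
)
print(record.to_disclosure())
```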

State privacy laws like the Colorado Privacy Act, the Connecticut Data Privacy Act and the Virginia Consumer Data Protection Act give consumers the right to opt out of profiling in furtherance of automated decisions. The Utah Consumer Privacy Act gives consumers the right to opt out of profiling using personal data.

To combat bias in recruitment and employment, New York City implemented the Automated Employment Decision Tools Law (AEDT Law) to regulate tools that use AI to make or substantially influence candidate screening or employment decisions. These tools must undergo bias audits, and information about their use must be provided to employees and job candidates.
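The statistic at the centre of such a bias audit is the impact ratio: each group's selection rate divided by the rate of the most-selected group. A minimal sketch of that computation, using hypothetical screening outcomes:

```python
from collections import defaultdict

def impact_ratios(outcomes):
    """outcomes: iterable of (group, selected) pairs, e.g. ("group_a", True).
    Returns each group's selection rate divided by the highest group's rate,
    the core statistic of an AEDT-style bias audit."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in outcomes:
        total[group] += 1
        selected[group] += int(was_selected)
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical screening results, for illustration only:
# group_a selected 40 of 100 times, group_b selected 25 of 100 times.
sample = [("group_a", True)] * 40 + [("group_a", False)] * 60 \
       + [("group_b", True)] * 25 + [("group_b", False)] * 75
print(impact_ratios(sample))  # {'group_a': 1.0, 'group_b': 0.625}
```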


In October 2023, President Biden signed the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The order focuses on setting high standards for AI safety and security, under which companies developing AI models will have to notify the federal government of models that pose a serious risk to public health, safety and security. Further, they are required to share the results of all safety tests before the models are made available to the public. The order invokes the Defense Production Act (1950) to target models that pose a risk to national security and public health.

The Singapore Model AI Governance Framework

The Singapore Model Artificial Intelligence Governance Framework (SG-MAF) was released by the country's data protection authority, the Personal Data Protection Commission (PDPC), and it establishes principles and guidelines that promote the responsible use of AI. It is currently in its second edition, released in January 2020. The framework covers four areas: internal AI governance structures, the level of human involvement in AI-augmented decision-making, operations management, and stakeholder interaction and communication. It follows two main guiding principles:

  1. Decisions made by AI should be explainable, transparent and fair

  2. AI systems should be human-centric

The framework includes a companion Implementation and Self-Assessment Guide to help organizations align their practices with the framework.

Complementing the framework is a Compendium of Use Cases in two volumes to demonstrate how responsible AI governance practices have been implemented and the benefits of their use. The volumes can be found here - Volume 1 and Volume 2

While there is a lack of comprehensive AI regulation in effect today, preparing to comply with the upcoming regulations is crucial. Waiting until enforcement begins is not recommended, as the cost of non-compliance can be quite high: under the EU AI Act, for example, fines can reach €40 million or up to 7% of total worldwide annual turnover for the previous financial year. Establishing a responsible AI governance program is a great way to start; conducting regular impact assessments and AI audits are among the best practices. A guide to building a responsible framework around AI can be found here - How to Build a Framework Around Responsible AI? (merudata.com)
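As a quick worked example of that ceiling, assuming the draft's "whichever is higher" formulation:

```python
def max_eu_ai_act_fine(annual_turnover_eur: float) -> float:
    """Ceiling under the draft's 40M-euro-or-7%-of-turnover formulation.
    Assumes 'whichever is higher', as in the Parliament's draft text."""
    return max(40_000_000, 0.07 * annual_turnover_eur)

# For a company with 2 billion euros in turnover, 7% (140M) exceeds the 40M floor.
print(f"€{max_eu_ai_act_fine(2_000_000_000):,.0f}")  # €140,000,000
```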


The National Institute of Standards and Technology (NIST), directed by Congress, released its Artificial Intelligence Risk Management Framework (AI RMF 1.0) at the end of January 2023 with the intention of helping developers, organizations and users combat AI risks. A summary of the framework can be found here - AI RMF – Summary and Key Takeaways (merudata.com)
