
How Privacy Professionals’ Roles Are Evolving with AI



Why a privacy professional is needed

Most companies today collect data from customers, users, visitors, employees, and prospective customers. This data can include individuals’ names, demographics, contact information, likes, dislikes, and more. It is highly valuable: it helps companies understand their customers, their target audience, and the effectiveness of their sales campaigns. A number of laws govern and regulate this data and how it is collected, processed, and shared. For companies to understand and maintain compliance with these laws while still obtaining value from data and meeting business requirements, a privacy professional is essential.


What privacy professionals do

A good privacy professional should meet two criteria. First, they should have broad knowledge of the legal landscape. This means knowing and understanding the different applicable regulations, the nuances of the laws, the subtle differences between them, and their legal requirements. Second, they should have the technical expertise to put their understanding of the legal requirements into practice. This means building a robust, sustainable, and comprehensive privacy program from the ground up.


In companies with a strong privacy program, privacy is embedded into the company’s systems and processes. To achieve this, the privacy professional has to train employees at every level to understand the importance of privacy and how to build it into their processes.


The privacy professional also works with every team and department of the business. Whether it’s marketing, IT, legal, audit, or risk management, the privacy professional needs to be able to understand each department’s needs and how it functions in order to build a program that integrates its processes.


Finally, one of the most important qualities of a good privacy professional is their ability to evolve and adapt effectively to the changing demands of technology and the legal landscape. In recent years, privacy professionals have had to adapt to the growing adoption of AI and the legal ramifications that come with AI-driven privacy violations.


How the role of a privacy professional is changing with AI

With the scope of AI expanding, and with its numerous benefits and use cases for companies, employing AI will undoubtedly become the norm. What this means for privacy professionals is that their role is also evolving to meet the demands of AI governance. In fact, according to a 2024 study by the International Association of Privacy Professionals (IAPP), 69% of Chief Privacy Officers (CPOs) have added AI governance duties to their roles.


Once AI is introduced, privacy risks multiply. Vetting AI systems and their data becomes very important. With AI, every possible scenario must be assessed for risk, especially when initially setting up the AI systems. Every use case, context, and new system must go through its own specific privacy and risk assessment, because each brings a unique set of challenges based on the data, business, and users.
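To make the idea of one assessment per use case concrete, here is a minimal sketch in Python of what such an assessment record might look like. The class, its fields, and the risk tiers are illustrative assumptions loosely inspired by risk-based frameworks, not a standard schema or a legal mapping.

from dataclasses import dataclass, field
from enum import Enum

# Hypothetical risk tiers, loosely modeled on risk-based categories
# like those in the EU AI Act; the names here are illustrative
# assumptions, not a legal mapping.
class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

@dataclass
class AIUseCaseAssessment:
    # One record per AI use case, context, and system.
    use_case: str                  # e.g. "website support chatbot"
    system: str                    # the AI system or vendor involved
    context: str                   # where and how it is deployed
    data_categories: list[str] = field(default_factory=list)
    identified_risks: list[str] = field(default_factory=list)
    risk_tier: RiskTier = RiskTier.MINIMAL

    def needs_deeper_review(self) -> bool:
        # Higher tiers trigger a fuller privacy and risk review.
        return self.risk_tier in (RiskTier.HIGH, RiskTier.UNACCEPTABLE)

# Each use case gets its own record, even when the same underlying
# model sits behind two different products.
chatbot = AIUseCaseAssessment(
    use_case="website support chatbot",
    system="third-party LLM API",
    context="public-facing, handles customer queries",
    data_categories=["name", "email", "chat transcripts"],
    identified_risks=["prompt injection", "transcript retention"],
    risk_tier=RiskTier.LIMITED,
)
print(chatbot.needs_deeper_review())  # False

Keeping one record per use case, rather than one per model or vendor, mirrors the point above: the same system can carry very different risks in different contexts.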


A privacy professional today also needs a deep understanding of how AI systems work and the risks they bring to privacy, as the two can be closely interconnected. Many privacy programs are structured around certain frameworks and regulations (CCPA, NIST, ISO, and the GDPR in the EU). Given that the European Union Artificial Intelligence Act (the EU AI Act) is one of the first comprehensive AI regulations, it will likely become the gold standard for AI regulation, so privacy professionals will have to have a strong understanding of the Act and use it as a starting point when building their AI governance framework.


Because privacy and AI governance are interconnected, a strong privacy program can help with AI governance. When using a risk-based approach like the EU AI Act’s, for example, a data map can support categorization: data inventory and mapping can help identify AI use cases and data sources, which in turn feed risk categorization. Privacy Impact Assessments, already used in privacy programs, can also collect the information needed for AI risk assessments.
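As an illustration, here is a minimal Python sketch of how an existing data inventory might be queried to surface AI use cases and prioritize them for assessment. The inventory fields and the sensitivity list are assumptions made for this example, not part of the EU AI Act or any particular framework.

# Simplified data inventory: each entry records a system, whether it
# uses AI, and the personal-data categories it touches.
data_inventory = [
    {"system": "CRM", "uses_ai": False, "data": ["name", "email"]},
    {"system": "resume screener", "uses_ai": True,
     "data": ["employment history", "education"]},
    {"system": "support chatbot", "uses_ai": True,
     "data": ["name", "chat transcripts"]},
]

# Hypothetical set of categories treated as sensitive, used only to
# order the assessment queue in this sketch.
SENSITIVE = {"employment history", "health data", "biometrics"}

def ai_systems_by_priority(inventory):
    # Surface AI use cases from the data map and rank those touching
    # sensitive categories first, so they are assessed earlier.
    ai_entries = [e for e in inventory if e["uses_ai"]]
    return sorted(
        ai_entries,
        key=lambda e: len(SENSITIVE & set(e["data"])),
        reverse=True,
    )

for entry in ai_systems_by_priority(data_inventory):
    flagged = sorted(SENSITIVE & set(entry["data"]))
    print(entry["system"], "-> sensitive categories:", flagged or "none")

The point is not the ranking rule itself but that the raw material (systems, data categories, AI usage) already lives in the privacy program’s data map.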


And it is because privacy and AI governance are so interconnected that a privacy officer is often best suited to oversee and manage AI governance. Existing privacy processes and solutions can be reused and adapted wherever needed to make room for AI compliance.


It should be noted, however, that being a privacy officer doesn’t automatically translate into being able to manage AI governance. Privacy officers should upskill to take on AI governance, as it is even more complex than privacy, which is complex in itself. AI governance is a blanket term and can mean different things to different organizations depending on how AI is employed. Some companies may be developing an LLM, some may just be using existing generative AI tools like ChatGPT for general tasks, some may be deploying chatbots on their websites, and some may be building a product on top of an LLM. The complexity of managing AI governance across these use cases differs accordingly. Moreover, the risks associated with certain AI use cases are still not well understood; with a newer concept such as agentic AI, for example, the scope of its uses and risks is still emerging.


And so, when considering these factors, AI governance officers will have to build on their existing capabilities. There are resources available for them to skill themselves, such as:

  1. The National Institute of Standards and Technology (NIST) maintains a list of AI risks that is continually updated - AI Risk Management Framework | NIST. This can be used to understand the risks around the use of AI; as new risks emerge, a centralized, dependable resource is helpful.

  2. The Organisation for Economic Co-operation and Development (OECD) offers resources for those developing AI systems, including principles to guide their efforts - AI Principles Overview - OECD.AI

  3. When governing AI, it is crucial to understand the AI lifecycle, which the OECD covers in Advancing accountability in AI - OECD.AI

  4. Microsoft provides a template for Responsible AI Impact Assessments - Microsoft-RAI-Impact-Assessment-Template.pdf

  5. To help ensure AI systems are fair, free of bias, transparent, and safe, the OECD has a list of metrics and tools for measuring and evaluating AI trustworthiness and AI risks.

  6. AI governance officers can also opt for formal AI governance training. The National Initiative for Cybersecurity Careers and Studies (NICCS), for example, lists a certification course - Certified AI Governance Professional (CAIGP) from Tonex, Inc. | NICCS


The Future of Privacy Professionals

If there’s one thing for privacy professionals to keep in mind as they integrate AI governance into their roles, it is the importance of continuous learning. Working in a landscape with so many moving and evolving pieces, privacy professionals need to stay current on AI advancements and regulatory changes. Learning about AI laws, standards, and best practices is vital to this profession. Studying cases of AI enforcement can also prove beneficial, as they provide real-life examples of the mistakes companies make, what enforcement bodies focus on, and which solutions realistically work. The use of AI is not temporary; it is becoming the norm, so integrating AI governance into your role as a privacy professional is the way forward.
