Simplify for Success - Conversation with Colleen M. Yushchak
Colleen M. Yushchak was recently on #SimplifyForSuccess, a podcast series presented by Meru Data and hosted by Priya Keshav.
The conversation between the two privacy experts centered on trends and important changes in the privacy world, including recent regulations, technology, acts, and rulings. More specifically, they discussed the growing importance of AI and the laws governing its use, the growing concern and attention around health data and related complaints, including Washington's My Health My Data Act, implementable records retention schedules, the increasing number of state privacy regulations, and more.
Thank you to Fesliyan Studios for the background music.
*Views and opinions expressed by guests do not necessarily reflect the view of Meru Data.*
Transcript
Priya Keshav:
Hello everyone. Welcome to our podcast around simplifying for success. Simplification requires discipline and clarity of thought. This is not often easy in today's rapid-paced work environment. We've invited a few colleagues in the data and information governance space to share their strategies and approaches for simplification. Today we have Colleen Yushchak on the show.
Hi, Colleen. Welcome to the show.
Colleen Yushchak:
Hi, Priya. Nice to hear from you again.
Priya Keshav:
It’s been 6 months since we talked, and a lot has happened in that time, so I thought it would be a great idea to get together and recap what we had discussed in January and also look at some of the new things that have happened over the last 6 months. In January, we met to talk about privacy in 2022 and to discuss some trends for 2023. And obviously, we had highlighted AI as one of the trends to watch. A lot has happened in the AI area, with ChatGPT obviously being one of the most talked-about recent changes. And, of course, there's a lot of other things happening as well.
So, what are your thoughts around AI, especially with what is happening with respect to regulations in this space?
Colleen Yushchak:
Oh, great question. And actually, I feel like the only thing that's not disputed around the topic of AI is that it's absolutely dominating the headlines. When I go to conferences, and I know that you were recently at IAPP, and all of these other conferences that I'm going to, the AI-related sessions are absolutely the most attended. They're standing room only; everyone's talking about it. In my daily news feed, there are so many articles about AI, and I do feel like there are conflicting ideas over how we should proceed with it.
It's also one of those topics that, unlike privacy, everyone is talking about. When I would visit my neighbors or my friends, or go to parties or dinners, nobody really talked about privacy in the way that you and I work in privacy. But everyone now is talking about AI.
So, this topic is bleeding into everything, not just what we do in the privacy space. And I think, similar to some of these other areas like privacy or even cyber, as usual, the regulations are falling behind, right? For the most part, there aren't really overarching AI regulations out there. But there are a lot of things in place. In Europe, for example, there are definitely a couple of regulations that touch on AI. There's the Digital Services Act, there's the Digital Markets Act, and there are other acts, like the proposed AI Liability Directive. And there's the EU AI Act, which categorizes AI into four risk levels: either it's an unacceptable risk, a high risk, a limited risk, or a minimal risk. That one is still proposed, and I think there's a lot of talk about it potentially going into effect later this year. And the UK, of course, because they separated with Brexit, has its own AI regulation white paper that they're working on. So everyone's scrambling to catch up. That's Europe.
In the US, there's not a federal law yet that covers AI. As you know, the proposed federal privacy law, the American Data Protection and Privacy Act, which hasn't passed, absolutely touches on it and sets out some rules for AI. But that's not in place yet. And now we're going to have the same situation that we have with privacy, where there are state laws and regulations around AI. I think there are around 5 states right now that actually have an active law, and, just like privacy, around 7 more that have proposed laws around AI.
And the one that, at least in my space, with the companies that I'm working with, seems to be causing the biggest stir is New York City Local Law 144. That's the one that requires, if you're using AI for employment decisions, that you do these yearly bias audits, which of course is going to be a huge effort for companies to comply with. And then Illinois has a law, and all these other states have laws. So, I do feel that it's similar to privacy; we're all scrambling to catch up from a regulatory standpoint.
But the good news is there’s definitely guidance out there, right? Even though we don't have the regulations in place, NIST, the FTC, and the FDA have all provided recent guidance on things like how to perform a risk assessment and how to manage AI. So, there's stuff out there to give people some guidelines. But I'm curious, Priya: what are you seeing the companies that you're working with either doing or not doing related to artificial intelligence?
Priya Keshav:
So obviously, you talked about the employment-related law that goes into effect in New York. That is one area of focus. And on January 10, 2023, the EEOC also issued a draft strategic enforcement plan that placed AI-related employment discrimination as a top priority for enforcement, so obviously there is some concern over the use of AI there. I was just talking about AI audits on a separate podcast, and one of the things we discussed was categorizing AI into what we would call high-risk, medium-risk, and lower-risk use cases. Employment probably falls under the high-risk use cases, and there are other high-risk use cases, like credit lending, where anybody using AI knows that they need to take the current and upcoming regulations seriously. So, there's some effort into looking at bias audits, using frameworks, things like that, to make sure there is some governance in place around the use of AI.
But when it comes to what I would call the medium- and low-risk use cases: obviously, for the low-risk use cases, there is no need for governance because they are low risk. Medium-risk use cases, I think, are a little bit of a puzzle for most people.
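*A minimal sketch of this kind of risk-tier triage, where the tier drives the governance workload; the use-case names, tier mapping, and control lists below are illustrative assumptions, not legal determinations:*

```typescript
type RiskTier = "high" | "medium" | "low";

// Illustrative mapping only; a real determination comes from legal review.
const HIGH_RISK_USE_CASES = new Set(["employment_decisions", "credit_lending"]);
const MEDIUM_RISK_USE_CASES = new Set(["chatbot", "spend_optimization", "resource_allocation"]);

function classifyUseCase(useCase: string): RiskTier {
  if (HIGH_RISK_USE_CASES.has(useCase)) return "high";
  if (MEDIUM_RISK_USE_CASES.has(useCase)) return "medium";
  return "low";
}

// Let the tier drive the governance workload.
function requiredControls(tier: RiskTier): string[] {
  switch (tier) {
    case "high":
      return ["annual bias audit", "impact assessment", "human review"];
    case "medium":
      return ["inventory entry", "periodic review"];
    case "low":
      return ["inventory entry"];
  }
}

console.log(requiredControls(classifyUseCase("employment_decisions")));
// -> ["annual bias audit", "impact assessment", "human review"]
```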
One of the things I also want to talk about: I was reading the McKinsey study on the state of AI in 2022, and one of the things it points out is that over the last 5 years, AI adoption has more than doubled; it's 2.5 times what it was in 2017. That's one of the reasons you see a lot of people interested in AI: they see AI in place or being used everywhere. Part of it could be related to privacy, but there are also all the concerns around what it means for jobs and things like that. So, it's a hot topic.
But if you look at that study, one of the things that is concerning is that while everybody is trying to adopt AI, the amount of time and effort spent on governance around AI has been flat since 2017. That means that even though significant effort is needed to manage AI, to look at it from a bias and quality standpoint, and to address other issues that can potentially happen with AI, I don't think there is enough focus on those topics within companies. You can see that with ChatGPT as well: some of the surveys suggest almost every CEO feels that ChatGPT needs to be part of their strategy, and they're looking to leverage ChatGPT for various reasons, but they don't have the skill sets to be able to appropriately use AI, which is a concern.
Colleen Yushchak:
Yeah, I completely agree. And I saw some stat, I researched it, from a global AI adoption index in '22 that said a quarter of all companies are using AI, and I bet you it's more than that. Because what I'm seeing is that a lot of the companies I work with aren't necessarily formally going out there and putting out policies and procedures around AI and giving their employees guidelines; they're starting to catch up with that now. So, I wonder how many companies have employees actively using AI for all sorts of things that the companies don't know they are doing. And I think that's where the concern lies. I know we're going to talk more about concerns and the potential drawbacks or risks associated with AI, but what I'm seeing is a lack of formal guidance being given to employees. And so, employees are like, OK, let's just dive in and see what we can use it for.
Priya Keshav:
No, I agree, and I think the definition of what AI is, is confusing, and people kind of don't know. Sometimes you do have data scientists developing AI internally, in which case at least there is visibility into the fact that there are some proprietary AI use cases that need to be looked at.
But a lot of times you're buying technology, and you just assume everything within the technology is taken for granted, right? The fact that there is AI built into it, or that the data is being used for AI training, is another use case, right? Most of the time there is very little awareness that the data can be in aggregate, anonymized, pseudonymized, or maybe not pseudonymized form, and there's very little understanding of how the AI models are being used or trained, what data is being fed to these models, and what happens to it. You've seen privacy more and more taking the lead in looking into some of these issues, and AI has become part of the privacy impact assessment process.
But obviously it's very hard to understand the risk, and that's where the medium-risk use cases come into the picture, right? When you have use cases for chatbots, or for optimizing spending, or for allocating resources, you don't realize it, because they're not high risk; they're not about making decisions for credit lending and things like that. So, nobody knows whether AI is being used, how it's being used, or how the data is being trained. There definitely needs to be more awareness and guidance.
Colleen Yushchak:
Yeah, I completely agree. It's similar, I think, to when we were creating data inventories and trying to dive deep into the concepts of profiling and automated decision-making. That's another area that trips people up, because the definitions are so specific and so legal under these regulations. Trying to tease that out in a data inventory, we often get a lot of people saying, oh, I'm doing profiling and automated decision-making, and we look into it and they're totally not. And then I see people say, nope, not doing that, and we look into it and we're like, uh, yeah, that's the definition. So, I think AI is another stage of that. What is AI? What's the definition? When do I need to alert somebody in the privacy office that it's happening?
Priya Keshav:
I don't want to jump to other topics, but at the same time, we have a short time. So, another area of focus: we talked a lot about how, with the Supreme Court decision, health data and enforcement around the management of health data would continue to be a focus area. When we talked about the FTC, we talked about how the FTC has come out and said they're going to focus on the use of health data, and you've seen that with some complaints. With the Easy Healthcare Corporation, the FTC alleged that Easy Healthcare failed to take reasonable measures to address privacy and security concerns. Similarly, with the enforcement against the digital health platform GoodRx, they alleged that GoodRx used or shared personal health information with third parties without properly disclosing it in their data practices or privacy policy, and without getting proper consent from consumers. But I don't think we expected to see Washington pass the My Health My Data Act, and that has been causing a lot of stir for various reasons.
So, Colleen, do you want to talk a little bit about what this My Health My Data Act is, and how it's different from other state privacy regulations?
Colleen Yushchak:
Yeah, absolutely. And I'll cover what I understand so far. It's new, so I, like many other people, have been anxiously reading through it, trying to figure out what it means. I'd obviously love to hear your take too, but my understanding is that the My Health My Data Act covers personal health-related data, including data that falls outside of HIPAA. So, companies that may not have had to worry about HIPAA in the past might need to focus on it if they handle anything that could be health related. We'll talk about that; it's a super broad definition. My understanding is it was created in direct response to the Dobbs v. Jackson Women's Health Organization decision, all around reproductive data, and the sharing and use of individuals' reproductive data, all related to Roe v. Wade. So, it was signed recently, since our last conversation, in April of this year, and it doesn't go into effect until about a year from now, March of '24. And what's interesting is, well, it has a private right of action. Once you see that, everyone is very interested in understanding the impact, because that always makes the risk a little higher.
But I think what makes it such a big deal is that, number one, the definitions are broader and the scope is broader. For consumers, the people covered aren't just the residents of Washington, but anyone whose health data is collected in the state. Now, it does not cover employee or B2B data, so we're just talking consumers here. Also, with regard to what data is covered, I mentioned before that it's consumer health data: personal information that is linked or can reasonably be linked to a consumer and that identifies the consumer's past, present, or future physical or mental health status. When you think about these definitions and the potential breadth of what data might be covered, it's pretty broad. And on top of that, there are no minimum thresholds for the number of data subjects or for revenue, unlike the other states. California has the $25 million revenue threshold; most of the other states don't have a revenue threshold, but almost all of them have the 100,000-person threshold: you have to have 100,000 people's data. This has no minimum. And what that means is small businesses are in scope if they're collecting health data in Washington state, which I think is going to really throw some people for a loop, because they've not really had to worry too much about that before.
Also, nonprofits have to comply. And there are exemptions, which is great: it lists HIPAA, GLBA, the Social Security Act, FCRA, and FERPA. But the more I read about it, the more confusing it seems, and I'd love your take on this too. It seems like there may be overlap, and so how will we know which law is the one we comply with? Probably the federal law, but that's my take.
I mentioned the very broad definition, and you asked what the differences are compared to other states. I talked a little bit about the broad scope, but some other things are different too. I'm pretty sure it's the first US regulation that introduces the concept of tracking legal basis for processing. Hello, GDPR: now we're going back into GDPR land and coming up with legal bases. So that seems different to me. Otherwise, similar to the other privacy regulations, the individuals' rights are very much the same: you have notice, and you have different rights, like accessing your data, deleting your data, etc.
But some nuances that I noticed that I think are different: for the notice, you have to provide people with a list of all third parties and affiliates that receive personal information, along with the contact information of those third parties, which seems new to me. There are also, interestingly, far fewer deletion exemptions. Under California there's a long list of things you can cite to say, well, I'm not going to delete your data because I've got this exemption. Here, it seems like there are hardly any exemptions, which means if you were able to avoid deleting data in the past under California's guidelines, there is going to be a lot more deletion going on. Interestingly, the thing that's also going to blow people's minds is that it absolutely covers archived data: data on backups, data in archives. The guidance in California is a little loose; it definitely seems like, hey, if it's going to be onerous to restore a backup and you're not using it all the time, you don't have to go there to delete the data unless you restore the backup, and then you need to delete the data there. But now it seems like everything is included: archives, backups, and algorithms. So, what does that mean, right? If there's personal, health-related data in algorithms, that has to be deleted.
So those are some of the changes I'm seeing. And the last one is that it restricts geofencing around healthcare-related entities, and I don't think any other regulation is that specific about that requirement. So, what do you think? What did I miss that stands out to you, that you're worried about for the clients you work with?
Priya Keshav:
You mentioned a lot of them. I think one of the bigger pieces is: yes, it goes into effect next year, but because of a drafting error, some provisions appear to go into effect within 90 days from the date of passing, which would mean sometime in July.
Colleen Yushchak:
An update on that: I heard that one of the law firms I work with actually reached out to the AG, and they confirmed that was a typo and it's not accurate that we have to comply within 90 days of the signing. That's the rumor I heard, so I don't know if it's true or not.
Priya Keshav:
Having said that, what you said about deletion is a big problem. Again, there are no exceptions for anything whatsoever, right? Which essentially means there is no common workflow that covers the My Health My Data Act. So now you need to think about how you are going to process a deletion request that comes your way; it has to be handled substantially differently than for other states, even though the rights are similar. And the definitions are much broader too. For example, anything that is likely to affect your bodily functions could be in scope, which essentially means you're now talking about not just health data, but things like purchasing toilet paper or purchasing nutrition products, right? So, how much is within the scope of health?
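*A minimal sketch of what routing deletion requests by governing law could look like; the law labels, branches, and helper functions are illustrative assumptions, not a statement of what any statute literally requires:*

```typescript
type Law = "CCPA" | "VCDPA" | "MHMD";

interface DeletionRequest {
  consumerId: string;
  law: Law; // which law governs this request
}

function handleDeletion(req: DeletionRequest): void {
  switch (req.law) {
    case "CCPA":
    case "VCDPA":
      // Common workflow: exemptions may allow keeping backup copies
      // until the backup is restored.
      deleteFromLiveSystems(req.consumerId);
      break;
    case "MHMD":
      // Separate route: almost no exemptions, and the Act reaches
      // archives, backups, and even data folded into algorithms.
      deleteFromLiveSystems(req.consumerId);
      deleteFromArchivesAndBackups(req.consumerId);
      deleteFromDerivedData(req.consumerId);
      break;
  }
}

// Stubs so the sketch is self-contained.
function deleteFromLiveSystems(id: string): void { console.log("live systems:", id); }
function deleteFromArchivesAndBackups(id: string): void { console.log("archives/backups:", id); }
function deleteFromDerivedData(id: string): void { console.log("derived data:", id); }

handleDeletion({ consumerId: "c-42", law: "MHMD" });
```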
And then, if you look at the biometric definition, it seems to include images and photographs that can be used to create an identifier of an individual. So, it's not just the biometrics themselves, but data that can be used to create your biometrics, which means you're now asking: is all audio in scope, all video, all photographs? That just makes the population of data you're talking about very broad. And then you look at this in conjunction with enforcement. One of the things that was very interesting to me about the GoodRx complaint was that it specifically called out software event naming conventions that revealed the health status of customers. From the standpoint of operationalizing these regulations, we've talked about privacy by design, but when event logs can contain health data because the event naming conventions are too revealing of a patient's health status, you really can't understand these kinds of impacts unless you really do privacy by design.
Which essentially comes back to this: if you operate by just setting up a process for deletion and doing privacy impact assessments at a high level, there's no way you would even understand the scope and the impact, right? So, for any client that even remotely has some type of health-related data, we need to take a deeper dive into software development practices and naming conventions, if they have not already done that.
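*A minimal sketch of the kind of naming-convention review this implies: screening analytics event names before they leave a company's systems. The term list and the sendToAnalytics() transport are hypothetical:*

```typescript
// Terms that may flag an event name as health-revealing (illustrative list).
const HEALTH_TERMS = ["hiv", "pregnan", "medication", "diagnosis", "therapy"];

function isHealthRevealing(eventName: string): boolean {
  const lower = eventName.toLowerCase();
  return HEALTH_TERMS.some((term) => lower.includes(term));
}

function trackEvent(name: string, props: Record<string, string>): void {
  if (isHealthRevealing(name)) {
    // Block the event, or route it for privacy review, instead of
    // letting it flow to third-party analytics.
    console.warn(`Event name "${name}" may reveal health status; not sent.`);
    return;
  }
  sendToAnalytics(name, props);
}

// Hypothetical transport stub so the sketch is self-contained.
function sendToAnalytics(name: string, props: Record<string, string>): void {
  console.log("sent:", name, props);
}

trackEvent("hiv_medication_coupon_viewed", { userId: "12345" }); // blocked
trackEvent("coupon_viewed", { couponCategory: "rx" });           // sent
```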
So, from a practical implementation standpoint, that's something that needs to be considered when you look at acts like the My Health My Data Act. And of course, the other big thing you brought up is the private right of action, which means there are already plenty of conversations about whether this could be another BIPA from a liability standpoint.
Colleen Yushchak:
And I'm already seeing some companies get predatory kinds of investigations, where they're being asked about certain cookies and about whether, when things are put into shopping carts, it's still tracking this or that. So, it's interesting. I feel like with all these regulations coming out, especially the ones that have a private right of action and the ones related to health data, there is a lot of interest out there, and people are testing companies' websites. So, it's something to be alert to: making sure that your website in particular has its cookies set up properly, so that you are not tracking something like what vitamins someone is putting in their cart.
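*A minimal sketch of consent-gated tracking along these lines, assuming a hypothetical hasConsent() check backed by a consent management platform; none of these function names come from a real CMP API:*

```typescript
interface CartItem {
  sku: string;
  category: string; // e.g., "vitamins" is potentially health-adjacent
}

// Hypothetical consent check; a real one would query your CMP's API.
function hasConsent(purpose: string): boolean {
  return document.cookie.includes(`${purpose}=granted`);
}

function onAddToCart(item: CartItem): void {
  // First-party, operational processing is always fine.
  updateCartUI(item);

  // Third-party tracking only fires when the consent state allows it.
  if (hasConsent("targeted_advertising")) {
    fireTrackingPixel({ event: "add_to_cart", category: item.category });
  }
}

// Stubs so the sketch is self-contained.
function updateCartUI(item: CartItem): void { console.log("cart updated:", item.sku); }
function fireTrackingPixel(payload: object): void { console.log("pixel:", payload); }

onAddToCart({ sku: "VIT-123", category: "vitamins" });
```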
Priya Keshav:
I know we could talk about just the My Health My Data Act for the entire episode, but moving on. We had talked about records retention and the amount of effort it would take to properly build a retention schedule based on the requirements, or what's expected, in these privacy regulations. Have you seen anything different over the last 6 months? And do you see more progress toward building more granular records retention schedules?
Colleen Yushchak:
Yeah, I remember when we were talking about this, my interpretation was that these notices should be more granular and should be at the category level; that's what it says in the regulation. But when we talked, we both were saying that what we're seeing is companies taking the safer approach, which is: let's stay pretty vague, let's hit a very loose definition of what we think they mean, because we don't have operationalized records retention schedules and policies in the background. Maybe we have a retention schedule and policy that's just not operationalized; maybe we don't have one; maybe we haven't updated it in the last 10 years. So, I think a lot of companies are taking this time to kick off more fulsome efforts to build and operationalize this. What I'm seeing is that the notices that went up in January haven't been touched yet, because it takes time to operationalize something like that and, like you said, to digest and understand, at the category level, how long a company is keeping things and how long it should keep them, considering both legal requirements and privacy requirements, like matching to how long you initially needed it for the original purpose. So, I'm not seeing any changes yet, but are you seeing anything in this space?
Priya Keshav:
I think there is progress being made. Compared to last December, I'm seeing more details. I wouldn't say it's tremendous progress, but I can see movement in the right direction, right?
But one of the things that has intrigued me is, for example, the Colorado regulation. When you read through it, there are so many places where they are very, very specific. For example, they say that if a consumer has not interacted with the controller in the prior 24 months, the controller must refresh consent in compliance with all the requirements, especially for those use cases that require consent, such as secondary use cases, right? So, they want you to think in terms of 24 months since somebody last interacted with you. I don't even know if we define our retention schedules that way. I've also seen provisions in the regulations that talk specifically about looking at your images. It comes back to the definition of biometrics being broader, or slowly becoming broader. When you have photographs, images, videos, or voice recordings, they want you to look at them at least once a year and decide if you really need that data and why you are keeping it.
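*A minimal sketch of that 24-month check, with assumed field names and an approximate month length; a real implementation would track interaction events per consumer and per purpose:*

```typescript
interface ConsentRecord {
  consumerId: string;
  lastInteraction: Date; // last time the consumer interacted with the controller
  consentGrantedAt: Date;
}

// Approximate 24 months in milliseconds (30.44 days per average month).
const TWENTY_FOUR_MONTHS_MS = 24 * 30.44 * 24 * 60 * 60 * 1000;

function needsConsentRefresh(record: ConsentRecord, now: Date = new Date()): boolean {
  return now.getTime() - record.lastInteraction.getTime() > TWENTY_FOUR_MONTHS_MS;
}

// Usage: flag records for a consent-refresh campaign.
const consentRecords: ConsentRecord[] = [
  { consumerId: "c1", lastInteraction: new Date("2021-01-15"), consentGrantedAt: new Date("2020-12-01") },
  { consumerId: "c2", lastInteraction: new Date("2023-05-01"), consentGrantedAt: new Date("2022-11-20") },
];
const stale = consentRecords.filter((r) => needsConsentRefresh(r));
console.log(stale.map((r) => r.consumerId)); // consumers whose consent needs refreshing
```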
So, there are very granular, specific recommendations around how often you review, how long you retain, and whether you should really keep that data. And I see that as a growing trend: very specific, granular requirements around deletion. Which comes back to whether people have spent time and effort on building their records retention schedule and operationalizing it. It's very easy to build one the traditional way, where you just put some numbers in. The hard part is making something that can actually be implemented, and then implementing it. That, I think, should be a focus area if it's not on someone's radar already.
Colleen Yushchak:
Yeah, agreed. And I feel like, in the past, records retention schedules were definitely built based on legal requirements and about keeping data as long as you can, so that you don't get in trouble for deleting it too early. Whereas now it's the complete opposite, because you've got to overlay the privacy component: if you don't have a legal reason to keep it, what is your privacy reason to keep it? In other words, what do you need it for? What are you using it for? What information do you need, and what do you not need? Only take what you need, and only keep it as long as you need it. That overlay on top of the initial records retention policy is such a change for people in the records department that it's taking forever to get people comfortable with the concept. And on top of that, taking all your legal requirements and overlaying a privacy lens has data inventory implications, so if you don't have a data inventory, it's going to be pretty hard to make some of those changes. So, completely agree with all the points you made.
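*One way to picture that overlay: each schedule entry carries both a legal floor and a purpose-based limit, and the effective period is the longer of the two, so a legal obligation can extend retention but, absent one, the purpose limit controls. A minimal sketch with illustrative categories and periods:*

```typescript
interface RetentionRule {
  recordCategory: string;
  legalMinimumMonths: number; // floor from statutes and citations
  purposeLimitMonths: number; // how long the original purpose actually needs it
}

// Keep data while either a legal obligation or an active purpose justifies it;
// once both lapse, it should be deleted.
function effectiveRetentionMonths(rule: RetentionRule): number {
  return Math.max(rule.legalMinimumMonths, rule.purposeLimitMonths);
}

const schedule: RetentionRule[] = [
  { recordCategory: "payroll records", legalMinimumMonths: 84, purposeLimitMonths: 12 },
  { recordCategory: "marketing leads", legalMinimumMonths: 0, purposeLimitMonths: 18 },
];

for (const rule of schedule) {
  console.log(rule.recordCategory, "->", effectiveRetentionMonths(rule), "months");
}
// payroll records -> 84 months (legal floor controls)
// marketing leads -> 18 months (purpose limit controls)
```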
Priya Keshav:
Yeah, agreed. And I often see that too, right? A records retention schedule built without a data inventory talks about things without context for what is actually happening in the real world. That's one of the biggest reasons why you may have citations and may be putting things together, but is it tied to what is being collected and how it's being used? Is it relevant from the perspective of how you can break the data out for the purposes of deletion? And if you can't do that, then there is no way to get to an implementable plan.
I know we're rushing through a lot of topics, but we're spending a lot of time just getting ready for Connecticut and Colorado, and Utah later this year. We've gone from 5 to 10 states, which is not surprising, because we were expecting more state regulations. And that's 10 without including, for example, the Washington My Health My Data Act, because it's technically not a comprehensive privacy regulation, so to speak. If you count some of those, and the one in Florida, we might be counting more than 10. What are companies doing to prepare for what's coming this year with Connecticut, Colorado, and Utah, keeping in mind that this number is going to double, or more than double, very soon?
Colleen Yushchak:
I'm exhausted. I'm sure you are too, Priya. And I'm sure many of the companies we support are just so exhausted, because it is a lot of work to keep up with these regulations. If they were all exactly the same, it would be so much easier. The good news is that they continue to have a lot of overlap, so that's great. It kind of depends on the company: if the company is already prepared for California or Virginia, in some cases there's a lot of overlap, and what you've already done can prepare you for the rest. The gaps I'm working on with companies include some of the requirements around opt-in consent when selling data: getting that consent up front, as opposed to the opt-out approach under California. You really do have to figure out: are you selling data under the definitions, and are you processing sensitive data, as defined, in such a way that would require that opt-in consent?
So, there's some scoping that needs to be done, and again, the data inventory comes into play. But then, if you are doing any of those activities that require opt-in consent, how do you manage it? Using technology, Priya, kind of like the technology you guys have to manage that consent. Also, Colorado is the one that introduces the universal opt-out mechanism. The good news is we have a long time before that goes into enforcement and becomes a requirement; I think at least about a year from now. But I believe that's the first regulation that actually has that written in, although we know California sort of pushes it too, with regard to what they've put in their FAQ and what they've done with enforcement. So that's another thing.
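*The best-known universal opt-out signal is Global Privacy Control (GPC): browsers that support it expose navigator.globalPrivacyControl and send a Sec-GPC: 1 request header. A minimal client-side sketch of honoring it; the disableTargetedAdvertising() hook is a hypothetical stand-in for a real tag manager or CMP integration:*

```typescript
function gpcOptedOut(): boolean {
  // navigator.globalPrivacyControl is not yet in every TS lib definition,
  // hence the cast.
  return (navigator as any).globalPrivacyControl === true;
}

if (gpcOptedOut()) {
  // Treat the signal as a valid opt-out of sale/share and targeted ads.
  disableTargetedAdvertising();
}

// Hypothetical hook into a tag manager or consent platform.
function disableTargetedAdvertising(): void {
  console.log("GPC detected: targeted advertising disabled for this session.");
}
```

*Server-side, the same decision can be made by checking whether the incoming request carries the Sec-GPC: 1 header before any sale/share processing occurs.*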
Colorado also applies to nonprofits. So, for Colorado, hospitals or other kinds of nonprofits in the state might have to look at it a little more closely. And then there are a lot of companies I've been working with that are worried about, let's say, just California, because California hasn't yet put out clarifying regulations on what's required for the PIA, the privacy impact assessment. Now Colorado and Connecticut are coming into effect, and they have slightly different, and I'd say more defined, requirements for their similar DPIA, or data protection impact assessment.
So, I think companies that were waiting to hear what California had to say about the PIA, and haven't done anything, are now creating something for the DPIA under Connecticut and Colorado. That's one of the big gaps I'm seeing them close so that they can be prepared. Then, once California comes out with its clarifying regulations, they can pivot, see if they need to make any tweaks, and that will get them closer to being ready.
One thing that I saw, and I'm wondering your take on it, is related to the opt-out link for targeted advertising. I think under Connecticut and Colorado it's a little clearer that it has to be at the bottom of the website; it cannot just be in a privacy notice. With a lot of the companies I work with, we've gone that route anyway; we started to adopt the California guidance around having an opt-out link at the bottom of the site. But there also seem to be conflicting requirements about what that link should say compared to what California says, so that's maybe an area where companies need to look into it and work with counsel to figure out their approach.
But otherwise, there are a lot of similarities. The other thing is that Colorado, I think, has a $20,000 penalty per occurrence, which can increase to $50,000 if the person is over 60 years old, which I thought was interesting. Those seem like the biggest fines we've seen, because I think California and some of these other states are more in the $2,500 to $5,000 per violation range. So, I think that's another interesting point. But what other things are you focused on with regard to what's happening July 1?
Priya Keshav:
I think part of this is that when we were implementing California and Virginia, the focus was on the nuances: making sure we adhered to all the little nuances, whether the company's strategy was to honor requests from all customers or only from the states that have a comprehensive privacy law. There was a lot of focus on getting all the little things done specifically for, let's say, California, Colorado, Virginia, etcetera. But as the number of states has gone from 5 to potentially 10, 11, 12, whatever the number is ultimately going to be, I think there is a growing realization that it's practically impossible to comply with each one individually. You have to look at maybe the greatest common denominator and comply with as much as you can. You were just talking about needing a link for opt-out in the footer of the website, but for the naming convention, whether you follow Colorado or California, you can only call it one thing. You can't call it 4 things, right?
Colleen Yushchak:
Put a slash. I mean, it's getting a little crazy. You're making such good points here, because there are going to be moments when companies have to decide which law they're going to comply with.
Priya Keshav:
Yes. At some point you're just going to have to say: OK, am I in substance complying with the laws? Am I doing what is required? And do I think I would get into trouble because I called it 'Your Privacy Choices' versus 'Do Not Sell' versus something else? It all means the same thing, right? That's where I think you see more pragmatic implementation. I've seen a real shift in the last 6 months. There's no way I can do this 12 times, because there isn't, unless I'm going to geofence and start collecting state-specific information, and I don't even know how to technically do that. So, there's a lot of pragmatic assessment of: how do I take as much as I can and comply with it, without worrying about every little nuance, knowing that if I comply for the most part, I should be able to explain that, practically speaking, I cannot make 12 versions of the same thing.
So, that's where I see everybody moving. Overall, I feel like the lift is getting easier. I see a significant difference: even in December, clients that had a data inventory and a good foundational process in place mostly took the December holidays off without having to sweat. Whereas if they didn't have a data inventory and some of the foundational pieces in place, it was a nightmare trying to stand things up in December. And I see the same kind of thing now.
But obviously, with more and more clients having those basic things in place, it's becoming a lot easier. It doesn't mean that because these things go into effect July 1st, the July 4th vacation is doomed. It seems much easier and much more of a routine process to stand up one more state. Not that I'm saying all the complexity is gone and it's easy, but at least it's easier.
Colleen Yushchak:
It's not December of '22.
Priya Keshav:
So, what other privacy developments should the companies be aware of?
Colleen Yushchak:
So, since we spoke, there's been a lot going on. You and I touched on it in our offline conversations; I'll explain a couple of things that stand out to me. The more these things happen, the more the whole privacy concept is going to become mainstream; I hinted at that earlier. Sometimes, when I talk about privacy and the things I'm doing, people who aren't in the space don't know much about it, but AI they do. I think that is going to change, especially as enforcement starts to pick up. We're just entering the era of enforcement, and I think that is what is going to slowly change people's concept, within the United States, of what privacy is and why it's important. So, within the last week, I want to say, Amazon was hit with a $25,000,000 COPPA, or Children's Online Privacy Protection Act, fine related to Alexa: the Alexa devices for children were indefinitely retaining voice recordings and transcripts for no apparent reason, or they just never deleted them, which of course is a huge issue, especially with minors' data. And then Ring is going to be paying $5.8 million as part of an FTC settlement, because they had very lax policies around access controls. They have a whole bunch of Ring videos; I use Ring, and I've got all kinds of videos of my mailman coming to my house. But Rings are being used all over people's houses, in all different places, and sometimes all of that information is private. I think Ring got in trouble for not restricting employees' access to watching those videos, and apparently there were some folks in Ukraine, a third-party vendor, that had full access to all the videos.
So, you're starting to see a lot of the enforcement come out. Obviously, Ireland's DPC, the Data Protection Commission, just fined Meta 1.2 billion dollars related to transfers of European people's data outside of Europe to the United States without having a lawful basis for that transfer, because Privacy Shield has been invalidated and is no longer in place. They were relying on standard contractual clauses, which is a very reasonable option for a transfer mechanism. But when the DPC reviewed those SCCs, the standard contractual clauses, they said: nope, these are lacking; they're not sufficient. And they actually thought that Meta should have been relying on derogations instead of that transfer mechanism, because of the systematic, bulk, and repetitive nature of what they were doing. So, I think that's the largest fine we've seen so far.
And then the other one that's impacting my clients directly is that Google has just put out notice that they're no longer a service provider to companies for certain types of processing, specifically processing related to the Customer Match tool they have, or anything that would constitute cross-context behavioral advertising. Companies, especially the retailers I work with, do use this Google Customer Match feature: they share customer information with Google, and Google will basically either re-identify those people or match data that they have, and then provide that back to retailers and companies so they can understand more about their customer base and do targeted advertising. Before, Google had this thing called RDP, restricted data processing, which allowed them to be in the role of a service provider; you just had to implement it when you shared data with them. Now they're outright saying: no, we're not a service provider anymore. If you share data with us, it will be a sale or share, so if people have opted out, just don't even send us their data. I'm curious to get your take on this, because I think Google is just the first; I bet other large data processing, data cloud, data aggregation type companies are going to start doing this. It takes the risk off of them. They're seeing too much heat, too much pressure, too much enforcement, too many fines. And I think they're now pushing back and saying: you know what, fine, we don't want to have anything to do with this. The companies will get the opt-outs, and the companies should then just not send us anything where somebody has opted out of sale. So, I'm curious, Priya, your take on it. I know we also talked about Snapchat doing something similar. Do you have thoughts on that?
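*A minimal sketch of the downstream obligation this creates: filter out anyone who has opted out of sale/share before building an audience upload. The data shape and the optedOutOfSale flag are assumptions for illustration:*

```typescript
interface Customer {
  email: string;
  optedOutOfSale: boolean; // populated from your opt-out / GPC records
}

// Build the audience list, excluding anyone who has opted out of sale/share.
// Real Customer Match uploads also hash identifiers (SHA-256) before sending.
function buildAudienceUpload(customers: Customer[]): string[] {
  return customers
    .filter((c) => !c.optedOutOfSale) // never send opted-out consumers
    .map((c) => c.email);
}

const customers: Customer[] = [
  { email: "a@example.com", optedOutOfSale: false },
  { email: "b@example.com", optedOutOfSale: true },
];
console.log(buildAudienceUpload(customers)); // ["a@example.com"]
```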
Priya Keshav:
No, I agree. I think it was pretty much understood that Google is not a service provider, or that it would be impossible to call Google a service provider, after the Sephora decision last year. So in some ways this is catching up to what was already understood. But I'm glad they did it, because it pushes others to also stop defining themselves as service providers. You see this as a marketing thing, right? People don't want to deal with the opt-outs, so they just want everybody to be a service provider. So, there's been confusion over who is really a service provider and who is not, or you have companies saying, I'm sometimes a service provider and sometimes not, and then you don't know how to interpret that. So, I feel like Google coming out and saying they are not is a good thing, because it reduces confusion. But I also thought it was significant that Microsoft will be paying a $20,000,000 fine to the FTC for illegally collecting and retaining children's data through its Xbox video game console. They were notifying parents, but notifying them after the collection of the data, and were not actually deleting the data if consent from the parents was not obtained. That was one of the reasons they agreed to pay the $20,000,000 fine, which is significant. But it's not surprising, because it's been clear that children's data, as well as health data, will be a focus area from an enforcement perspective, and of course the other part is the sharing of sensitive data with third parties for advertising purposes. So, as you see some of these enforcement decisions, they seem consistent with what looks high risk and with what we've been talking about for the last 6 months. Not much of a surprise, I feel. But it was interesting to see a 1.2-billion-dollar fine. We've talked about billion-dollar fines, but we didn't have one until now.
So, thank you for taking the time. It was a pleasure to talk to you, Colleen.
Colleen Yushchak:
Always a pleasure talking to you too, Priya. Talk to you soon. Thanks!