
Simplify for Success - Conversation with Shea Brown


Shea Brown was on #SimplifyForSuccess, a podcast series presented by Meru Data and hosted by Priya Keshav.



Mr. Brown spoke about AI ethics and AI governance and why companies should engage in these topics. He also discussed the NIST AI governance framework and the need for auditing the use of AI.

Thank you to Fesliyan Studios for the background music.


*Views and opinions expressed by guests do not necessarily reflect the view of Meru Data.*







Transcript:

Priya Keshav:

Hello everyone, welcome to our podcast around simplifying for success. Simplification requires discipline and clarity of thought. This is not often easy in today's fast-paced work environment.


We've invited a few colleagues in the data and information governance space to share their strategies and approaches for simplification. Artificial intelligence, machine learning, and advanced analytics are an integral part of businesses today. AI has the potential to help us solve many of our current challenges. Capgemini recently conducted a survey of over 800 organizations and 2,900 customers on how businesses adopt AI and the effects of these developments on customer relationships. Almost half of the customers said that they trusted their interactions with AI systems, but they do expect those systems to be able to explain any results to them clearly, and they also expect organizations to hold themselves accountable if AI algorithms go wrong. While AI is very important, it can also cause harm. How do we understand the ethical implications of using such technologies?


Our guest, Shea Brown, is here today to discuss this very important topic. Hi Shea, welcome to the show.


Shea Brown:

Hello, nice to meet you.


Priya Keshav:

Tell me a little bit about yourself and what you do.


Shea Brown:

Yeah, so my name is Shea Brown. I'm a professor of astrophysics at the University of Iowa. But probably more importantly for this, I'm also the CEO of Babl AI, and Babl AI is really an algorithmic auditing firm. So we audit algorithms for things like bias, ethical risk, and compliance risk, but we also do a lot of advisory work as well, not for the same client, on AI governance and responsible AI.

Priya Keshav:

Tell us a little bit more about AI ethics and governance. Why should corporations think about or invest in AI governance and ethics? Some of us in this space, we sort of live and read this and understand it, but it's still a very new topic for most of the others, so it will be helpful to know why they should do this.


Shea Brown:

So I mean, I think there's a lot of reasons why businesses should engage with these topics. So let me first sort of define a little bit about what I mean by AI governance and AI ethics. Because you might think, ethics is ethics and governance is governance, why would you have something different for AI? And I think it really stems from the fact that AI is a powerful tool, which companies, governments, everyone is now realizing can be used to do a lot of things that were not possible before. You can automate a lot of processes. You can do things that, as the name suggests, previously only a human could do, something that requires intelligence.


So detecting objects or making predictions, making inferences, and it's a powerful tool for businesses and they're making use of it. But what has become obvious is that this tool, which is powerful and can do a lot of good, also has the possibility or the potential for doing bad things. So biases are a very obvious one where, simply put, a piece of AI or machine learning may work differently depending on who you are as a person and who you are is an intersection of many things, but some of the things are protected categories.


Things like your race or your gender or your age are protected by law. These algorithms can be biased, and have been shown to be biased, across these categories, so there's a lot of potential for harm, and there have also been a lot of problems in terms of reputational harm for companies. And so I think the biggest motivator for businesses right now has been reputational harm: that someone's going to notice that their algorithms or AI are doing things that are inappropriate or discriminatory or unfair and that they're going to get called out on it. And that's severely damaging to their reputation.


Recently, there has been another motivator, which is compliance risk and liability risk, namely that there are new regulations on the horizon, some of which are already here, as well as new laws and old laws that have new context in terms of AI, that companies are going to have to comply with. So there's a lot of downside risk associated with the use of AI that has to be mitigated in some way.


Priya Keshav:

You talked about the compliance risk and you also talked about reputational harm. Maybe we can delve into compliance risk later, but let's just talk about this in the context of developing a program. Some of the more widely advertised use cases for ethics in AI are obviously hiring decisions, being discriminatory in hiring decisions, or credit risk. But AI is being used in, and is part of, pretty much everything. Do I just have to look at whether I have those use cases, and if I don't, should I even be concerned? Or should I take a deeper dive into all of the use cases that I have within the company? In other words, is it as simple as: if you have one or two of these checkbox issues, you need to think about AI, and if not, let's forget about it, it doesn't apply to us? Or is it much more than that?


Shea Brown:

Yeah, I think it's much more than that, because the big high-profile cases like hiring and credit scores or credit risk, or facial recognition, those things are just the tip of the iceberg. I think a good metric for trying to figure out how much you need to be concerned about this is to think about the decisions that get made by your use case or your AI. It doesn't take a huge amount of analysis to do this. You have to have an inventory of what you're using, that's clear, and I mean not everybody has that. But you first have to take an inventory of how you are using AI in your organization, and then you have to think about in detail what decisions are being made. Does it touch humans in some way, and if it does, is there a potential way for that to go wrong or bad? Could there be an adverse outcome associated with that AI? That's an exercise I think everyone should do. And what you'll find is that almost all use cases that touch humans in some way have the potential for risk.


Not all of them are the same level of risk, but all of them have some risk associated with them, and so it's a matter of saying which are the highest-risk use cases. And to actually get that ranking does take a little bit of knowledge. You have to understand what common failure modes are for these systems, and you have to understand, at more than a superficial level, how the socio-technical system works: how are people in your organization, people outside of your organization, or your clients using the output of the AI to make a decision and to take an action, and how do those actions potentially affect people in a negative way?


And then you have to also think about the scale. How many people could it affect if it went wrong? How fast would it happen? How quickly could it spread? And also asking the simple question: if I were to find out that something had gone wrong with this AI, how would that make me feel? How would that make the stakeholders who interact with it feel, and could that give rise to reputational risk? Because people can feel harmed even if the harm is only perceived; even a perceived sense of harm is a real potential reputational risk.
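To make that inventory-and-ranking exercise concrete, here is a minimal sketch in Python. The use cases, risk factors, and 1-to-5 scales are invented for illustration only; this is not Babl's methodology and is no substitute for a real socio-technical risk assessment.

```python
# Illustrative only: a toy AI use-case inventory with a rough risk ranking.
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    decision_made: str    # what decision the AI's output feeds into
    touches_humans: bool  # could the outcome adversely affect people?
    severity: int         # 1-5: how bad is a wrong or unfair outcome?
    scale: int            # 1-5: how many people could it affect?
    speed: int            # 1-5: how quickly could a failure spread?

    def risk_score(self) -> int:
        """Crude ranking heuristic: only human-facing uses accrue risk."""
        return self.severity * self.scale * self.speed if self.touches_humans else 0

inventory = [
    AIUseCase("Resume screening model", "who advances to interview", True, 5, 4, 3),
    AIUseCase("HR chatbot", "what answer an employee sees", True, 2, 3, 4),
    AIUseCase("Server log anomaly detector", "which alerts ops reviews", False, 1, 1, 2),
]

# Highest-risk use cases first: these are the ones to review most deeply.
for uc in sorted(inventory, key=lambda u: u.risk_score(), reverse=True):
    print(f"{uc.risk_score():3d}  {uc.name} -> {uc.decision_made}")
```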


Priya Keshav:

Coming back again to the definition of AI, I want to get this right. So, New York City, for example, passed a law to regulate automated employment decision tools, right? And I think we can use this to talk about compliance. But what is important is how they define these tools: the definition could include machine learning, statistical modeling, data analytics, artificial intelligence, anything that can generate a score, classification, or recommendation, right?


Do you want to talk about what AI really is? Because we all think about AI, but we don't know what AI means, what AI is, and what AI is not. So it would be good to kind of get that right for our audience.


Shea Brown:

Well, I don't think we'll get it right for everybody in the audience; that's part of the problem. The way I see it is that artificial intelligence is simply some sort of automated process that is going to do something that we normally attribute to human intelligence, or maybe animal intelligence, like recognizing objects; animals can do that very well.


So that's a really broad category. Old-fashioned artificial intelligence, let's call it that, often used rule-based systems where there was a lot of logic around “if this happens, then you do this”, “if you see this, then maybe you have some probability of it being this kind of object”. It was very structured and rule-based. So my definition of AI is anything that performs an automated task, normally done by a computer, something like recognizing objects or reading text. I think what most people nowadays consider when they say artificial intelligence, they're really thinking about machine learning, and that's a statistical mechanism where you can learn directly from data, and that's where a lot of the biases come from, or where a lot of the current risks associated with these systems are.


And so in the case of the New York law, it's not narrowly defined as being machine learning, but they do sort of highlight that and focus on that aspect, but I think that there are a lot of different tools.


So artificial intelligence means a lot of different things to a lot of different people, and I think for the most part when people say artificial intelligence, they're thinking about machine learning. And machine learning is a subset of artificial intelligence which learns from data to make inferences. So it uses data, let's say historical data, to try to make better inferences about anything really. But artificial intelligence, broadly speaking, is just any automated process doing something we would normally attribute to human intelligence. And I think for the case of the New York City law, it's broader than machine learning. They really are talking about an automated employment decision tool, and that tool can use machine learning, but it also may not. It could be just ordinary statistics, and I think in some interpretations it could even be just a simple equation, if that equation is informed in some sense by historical data. I would think that also counts.
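As a purely illustrative sketch of that distinction, with invented examples and numbers that are not taken from the New York City law itself: an old-fashioned rule-based screener and a simple scoring equation fitted to historical data can both generate a score or recommendation, which is why both could plausibly fall under such a broad definition.

```python
# Illustrative only: two hypothetical "automated employment decision tools".
import numpy as np

def rule_based_screen(years_experience: float, has_degree: bool) -> str:
    """Old-fashioned AI: hand-written 'if this, then that' rules."""
    if years_experience >= 5 or (has_degree and years_experience >= 2):
        return "advance"
    return "reject"

# Minimal "machine learning": fit a one-variable scoring equation to
# historical hiring outcomes (made-up numbers, purely illustrative).
past_experience = np.array([1, 2, 3, 5, 7, 10], dtype=float)
past_outcome = np.array([0, 0, 1, 1, 1, 1], dtype=float)  # 1 = was hired
slope, intercept = np.polyfit(past_experience, past_outcome, deg=1)

def learned_score(years_experience: float) -> float:
    """A simple equation informed by historical data, also arguably in scope."""
    return slope * years_experience + intercept

print(rule_based_screen(3, True))    # "advance"
print(round(learned_score(4.0), 2))  # a rough score between 0 and 1
```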


Priya Keshav:

So let's talk a little bit about AI ethics and privacy. From a privacy standpoint, obviously, we talked a little bit about New York City regulating the screening of candidates using automated employment decision tools. But privacy regulations have also started including AI and are expecting companies to start disclosing where they are using automated decision tools and for what, and to look at it from a privacy standpoint. What are your thoughts around privacy and AI?


Shea Brown:

So I mean, I think that privacy is one of those. Well, let me step back and say that I approach these problems broadly from a sort of ethical risk standpoint. We tend not to focus on bias or any particular type of risk, and privacy, I think, is a subset of ethical risk there. Privacy is a right; it's something that is fairly fundamental. And given the current context of how AI is being used in hiring, or anywhere really, I think it is important that people understand when AI is being used on their data or to make decisions about them. My personal opinion is that this is fairly fundamental, and given the risks involved, people should have the ability to know that and to potentially opt out of it in some circumstances. So I'm all for that. I think that knowing whether AI is being used is something that should be promoted by regulations, and I think there are further transparency requirements that should be in place, so that you can not only understand that AI is being used, but also, at some level of detail that's not too deep, how it's being used.


Priya Keshav:

We talked a lot about why it's needed and what the definition of AI should look like. But moving on, if I'm going to have a program or something in place to sort of manage ethics and governance in AI, how do I define the program and what are some of the fundamental elements that need to be in these programs? At a high level, I mean; I know there's a lot involved.


Shea Brown:

Yeah, at a high level, I think oversight is the first key. There has to be some level of accountability. Accountability and oversight are two different things, but I think there has to be an accountable person, or a group of people, who have the duty of providing oversight. And that's sort of step #1. So you need to ask the question: who in your organization is responsible for the various risks associated with the use of AI, either by your organization or, if you're developing the AI, potentially by your client? And if that person doesn't exist, if there's not a clear function for that within your organization, that's step #1. You need to define who is responsible for these issues.


And then I think step #2 is really about assessing risk, broadly speaking. What that means is, okay, we know that there is a group that's responsible for this. That group now needs to look at your organization and how it's currently using AI, and make some determination as to the level of risk for all of the various use cases. Now, this is easier said than done. There are a lot of ways to do it, and I could get into details, but we wouldn't have time for all of them. That's step #2, I think. Having those two things is a big leap forward for a lot of companies.


And then step #3, I think, is having a review process, or at least policies and procedures associated with looking back at that level of risk. Okay, so you've identified that there are certain risky use cases. You need to say, okay, how are we going to revisit the risk level in those use cases? We need policies and procedures in place to do that. And further, we need policies and procedures in place to mitigate the risks that we've identified: some sort of controls that are in place, somebody checking, some documentation that people have to produce. Those three things are really the starting point. There's a lot that can go into them, and you can build out a very robust AI governance framework just with that oversight and accountability, risk identification, and then policies and procedures associated with mitigating that risk and also revisiting or reviewing the risk.
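A minimal sketch of how those three elements might be recorded, assuming invented names, fields, and dates; a real AI governance framework would be far more detailed than this toy risk register.

```python
# Illustrative only: a lightweight register tying each AI use case to an
# accountable owner (step 1), an assessed risk level (step 2), and the
# mitigations plus review cadence that policies and procedures require (step 3).
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskRegisterEntry:
    use_case: str
    accountable_owner: str               # step 1: who is responsible
    risk_level: str                      # step 2: e.g. "low", "medium", "high"
    mitigations: list = field(default_factory=list)  # step 3: controls in place
    next_review: date = date.today()                 # step 3: scheduled re-review

register = [
    RiskRegisterEntry(
        use_case="Resume screening model",
        accountable_owner="VP, People Analytics",
        risk_level="high",
        mitigations=["quarterly bias testing", "human review of rejections"],
        next_review=date(2023, 1, 15),
    ),
]

def overdue_reviews(entries, today=None):
    """Flag entries whose scheduled risk review has lapsed."""
    today = today or date.today()
    return [e for e in entries if e.next_review <= today]

for entry in overdue_reviews(register):
    print(f"Review overdue: {entry.use_case} (owner: {entry.accountable_owner})")
```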


Priya Keshav:

Talking about frameworks, NIST has recently been working on an AI governance framework, and I think you are working on this as well, right? I'm wondering if you have any thoughts on NIST and their framework, as well as other frameworks that are available that could be used by companies.


Shea Brown:

So, I'm not working with NIST. I've submitted comments through an organization called For Humanity that I'm a fellow of, but I'm not directly involved with the NIST work. However, I know a lot of the people involved, and I know that they're very good and very smart, and the process that NIST is going through is quite robust. So the current framework that NIST has is actually quite good, and a big difference between that framework and some of the frameworks that came before is that they have a focus, at least, on the socio-technical system and the socio-technical sources of bias and risk. This is something that at Babl, my company, we focus on a lot, and we've noticed when we do risk assessments that a lot of the risk does come from that connection between the technology itself and the people who use it and how much they understand about it, that sort of thing. So NIST has taken care to include those sorts of things in their framework.


Now, there are other frameworks, and in fact there are so many frameworks out there that are thinking about AI governance, but a lot of them are very high level, at the principle level, where you just think about what principles we are trying to further in this process. There are more and more that have details about how to implement this, but I think there's no consensus currently. For Humanity, for instance, the organization I mentioned previously that I'm a fellow of, has a very robust risk management framework that is built upon ISO standards for risk assessment, model risk management in financial services, and also just experience with the current levels of risk associated with machine learning in particular.


So there's a lot of work being done, but there still has not been enough work to figure out what elements of these frameworks actually work at mitigating risk. That's a hard task, and we've actually gotten a grant to do that research. We're currently in the process of interviewing people in various industries, trying to understand what they're doing, in particular, to govern AI and whether those mitigations actually work. So that's something that's forthcoming. But NIST is a very strong example of a very good start.


Priya Keshav:

So coming back to what you said: at Babl, one of the things that you do is audit the use of AI. Not that I want you to talk specifically about the methodology you use, but maybe at a high level. And then, also kind of an obvious question in my mind, but still: should companies get an external party to audit their use of AI, and what do you think they'll get from doing that? Obviously some regulations, like the New York City law, sort of expect that there is a bias audit, but even if there is no expectation from a regulatory standpoint, should people consider getting an audit?


Shea Brown:

Yeah, so that's a really good question. For me, the answer is yes, but I say that not just because it would further my self-interest; I think that it is the right thing to do. There are differences between audits and assessments. Assessments are something that is done, not necessarily in confidence, but in partnership with the organization. We've done many of those, where the goal is to assess your system and the potential risks of your system, including bias and other technical things, and then to come up with mitigation strategies for those. That's kind of a no-brainer intervention that, assuming they can afford it, a lot of companies should be doing, because having an outside perspective from a group whose job it is to just think about these downside risks is important. It's like the idea of red teaming internal to companies: you need to have somebody who's really trying to poke holes in the system and understand exactly what could go wrong.


Now, let's say you're an organization that has done a lot of that already, has a lot of internal capacity to do that sort of testing and those sorts of risk analyses, and has controls in place; an audit, even if it's not required, still makes a lot of sense. So let me talk about the methodology here. There are two separate methodologies. There's a methodology for doing the assessments, where we're the ones who are assessing the level of risk and the level of bias, and for that we have a whole framework; we've in fact published papers on that framework, and that's something that we provide freely. That's really looking at the stakeholders who could potentially be affected, thinking about the sources of harm, what part of the socio-technical system could give rise to that harm, then connecting those parts of the system to the harm and trying to figure out what mitigation stops it.


That's a really broad overview, but an audit is something a little bit different. For an audit, what we do is take a lot of our learnings from working on these assessments and knowing what's important, and we draft them into criteria. This is something that we do internally at Babl, but we also partner with For Humanity, the nonprofit organization that thinks about these issues in terms of audit, and you have certain criteria which need to be met in order to satisfy the audit. So it's a criteria-based audit. It might say: have you done testing for imbalances in protected categories in the datasets that you used to train your algorithms? That might be one criterion, let's say, and there are standards, sort of sub-criteria, associated with that. So we have these criteria, which are binary: either you've done it to satisfaction or you have not. What a company would get from this, basically, is essentially third-party verification that they have done everything in that list of criteria, which is publicly available; everyone can see it. So when you get a certification, let's say from Babl or some other company that might do this, what that says is: we have taken care to do all of these things that we know are important for mitigating risk. And you can go look at what those things are, so you can see what that certification means, and it's a way of building trust, basically. If it's required, you've just got to do it; but if it's not required, then the benefit is that it's a way of building trust with your clients. And we think that trust is going to be the critical currency in a world where everything is dominated by algorithms and AI. The people who win are the ones who are going to gain the trust of their clients and of other people.
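As an illustration of what one such binary criterion could look like in code, here is a simplified sketch; the 80% threshold and the sample data are invented, not Babl's or For Humanity's actual audit criteria.

```python
# Illustrative only: a crude check for imbalance in a protected category
# within a training dataset, expressed as a binary pass/fail criterion.
from collections import Counter

def representation_ratios(labels):
    """Share of each group relative to the largest group (1.0 = same size)."""
    counts = Counter(labels)
    largest = max(counts.values())
    return {group: count / largest for group, count in counts.items()}

def passes_balance_criterion(labels, threshold=0.8):
    """Binary criterion: every group is at least `threshold` of the largest group."""
    return all(ratio >= threshold for ratio in representation_ratios(labels).values())

# Made-up protected-category labels for the records in a training set.
training_gender = ["F", "M", "M", "M", "F", "M", "M", "F", "M", "M"]

print(representation_ratios(training_gender))    # approx. {'F': 0.43, 'M': 1.0}
print(passes_balance_criterion(training_gender)) # False -> criterion not satisfied
```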


Priya Keshav:

Any other closing thoughts around ethics that you'd like to discuss?


Shea Brown:

Yeah, I think probably my only closing thought would be this: there are going to be a lot of people who listen to this, and others who are listening to the debate about AI ethics, and they might think, who cares? That's one thing, but I don't want to talk to them necessarily, because they're going to learn that it is important. But they might also think, where do I start? What can I do? And I think that you don't need to hire an expert right away to start working on this. I literally think that if you haven't done it, you should start today. That means go call a meeting and say, we need to get together and discuss this. Try to figure out who in your organization you need to invite; cross-functional and more diverse is better. Get them together and start talking about the ways in which your organization is using AI, and start thinking about the potential risks associated with that. That's something that should just happen now. If you have not done that, you should stop listening to this and go do that. That's the best way to start. Now, it may come to pass that you need an expert, and likely you will, but you can start this process now just by reflecting on the fact that there are risks associated with AI, and it's not just bias; there's transparency, there are all sorts of risks associated with AI. You can start now thinking about those risks and how you might mitigate them. So that would be my final, broad thought: you don't need to wait, you should start now.


Priya Keshav:

So I know I asked the final type of question, but your final thoughts sort of prompted me to ask something, right? One of the biggest issues, I think, is not so much "who cares" as "maybe I don't have these scenarios," and I know I mentioned this in the beginning. I think part of it is a lack of understanding, because it's very easy to understand my use case when I'm thinking in terms of, I need to do the hiring, right? But for a lot of scenarios, most of the time, I think people don't know that there is AI involved. So yes, AI is mentioned in every tool you purchase, because that was part of the marketing process, but at the same time nobody has put a lot of thought into what AI means, and I think that requires maybe this cross-functional group to go educate themselves on what AI really means. And the second point is that there are so many use cases of AI where, again, ethics and governance are important and there can be harms, but it might not be something that is visible or something that you can understand; we're not thinking about it deeply enough. So part of the question is: most organizations are probably in that state, right? I don't know where my AI is, and I know you mentioned multiple times you need to have an inventory, but the other part is educating ourselves; we need to understand all the harms that are possible through AI. So what are some of the places they can go to to build that expertise in-house? Obviously they can hire an expert to help them through it, but if somebody wants to educate themselves, where can they?


Shea Brown:

Yeah, that's a really good question. I think there are now a lot of places where you can get educated on this. For instance, at Babl AI we have courses that we offer; the primary function of most of those courses is to train auditors, but the same skills apply, so if you want to try to understand AI risk, that works. Organizations like For Humanity have courses, and there are a number of other nonprofit and for-profit organizations that offer courses in this. There are also good books; Reid Blackman just published a really good book about ethical AI in business.


Priya Keshav:

Thank you so much for your time. I appreciate you joining us, and I think it was a great conversation.


Shea Brown:

OK. Thank you very much.
