
Simplify for Success - Conversation with Ryan Carrier


Ryan Carrier was on #SimplifyForSuccess, a podcast series presented by Meru Data and hosted by Priya Keshav.

The insightful conversation between the two privacy experts focused primarily on AI auditing, how it should be conducted, the need for responsible AI systems, how to build ethical and bias-free AI, and more. They also touched upon the need for diverse inputs and the various sources of feedback needed while examining and assessing AI systems.








Listen to it here:

Thank you to Fesliyan Studios for the background music.

*Views and opinions expressed by guests do not necessarily reflect the view of Meru Data.*


Transcript:


Priya Keshav:

Hello everyone, welcome to our podcast around simplifying for success. Simplification requires discipline and clarity of thought. This is not often easy in today's fast-paced work environment. We've invited a few colleagues in the data and information governance space to share their strategies and approaches for simplification.


Artificial Intelligence and Machine Learning are not just buzzwords anymore; they are fundamentally transforming how we do business today. According to IDC, investments in AI are expected to hit $500 billion by 2024. AI will revolutionize transportation, healthcare, and the way we communicate, and will be part of all the products we use every day. So how do we ensure AI is being ethically and responsibly developed?


Responsible AI is an approach to developing, assessing, and deploying AI systems in a safe, trustworthy, and ethical manner. Our guest today is Ryan Carrier from ForHumanity. Their mission is to examine and analyse the downside risks associated with the ubiquitous advance of AI and automation, to engage in risk mitigation, and to ensure an optimal outcome. So, we are here to talk about AI audits, how to conduct them, and why they are required.


Hi Ryan, welcome to the show.


Ryan Carrier:

Thank you, Priya. It's a pleasure to be here.


Priya Keshav:

ForHumanity spends a lot of time looking at AI and AI-related threats. Could you maybe introduce ForHumanity and your role within ForHumanity for our audience today?


Ryan Carrier:


I'd be glad to. So, Priya, I had a 25-year finance career, and the last eight years of that were spent running a hedge fund. And that hedge fund is something I survived but didn't thrive at. The reason I tell you and the audience that is that when you survive a hedge fund, it means you have to close it. So, in 2016, I had to close this hedge fund. It's not a full-time job, but it's still a duty that I have. I had built AI to manage money.


But Priya, the most important part of this story, and where ForHumanity came from, is that I have two boys, and in 2016 they were four and six years old. And with this time on my hands and this knowledge of artificial intelligence, I kind of extrapolated out into the future where we were headed on the path and the trajectory that we were on. And I don't mind sharing that I got scared. So scared that I started a nonprofit public charity. With no money. And no plan. It was really just the mission statement, and the mission statement is this - to examine and analyse downside risk associated with AI, algorithmic, and autonomous systems, and to engage in risk mitigations. We take technology where it is. We are not anti-technology in any way, shape, or form. We take technology where it is, and we say - can we mitigate these risks? And if we can mitigate those risks, we believe we've achieved some success because now we get the maximum benefit from these tools for humanity. And that's where this overly ambitious name of the nonprofit public charity comes from.


So, we've been doing this for the better part of eight years - sorry, seven years. I started it on my own. I'm the founder and the executive director now. And as of March of 2020, it was still just me, talking about a bunch of different ideas, and we'll talk about some of those. But in March of 2020, we started to actually engage in the process of establishing independent audit of AI systems. And as a result of doing that work, we have now grown to 1,500 volunteers from 86 countries around the world, we're growing at 40 to 60 volunteers per month, and we're engaging with governments and regulators to help them establish risk mitigations for these tools. Does that make sense?


Priya Keshav:

Yes, absolutely. So, let's talk about why. Why should we invest in AI ethics and governance? Why is it important?


Ryan Carrier:

Now I think the easiest answer to that is that these tools are what we call sociotechnical tools. Unlike a calculator or even a desktop computer, where we input into that computer and it gives us something back, with a sociotechnical tool the human is part of the equation - through our personal data and the interpretation of that data to reach inferences and conclusions, and then those outputs are put upon us as well. And what we've come to realize over the last decade, decade and a half or so, is that these tools can create or cause negative outcomes, negative impacts, or harms to humans. And so, just like in every industry over the last 200 years, what is changing is that we've moved from the Wild West of technology, with no governance, no oversight, no regulations, to now beginning to bring to bear rules and regulations, processes, and, maybe most importantly, liability on these entities, and as a result of that, there's now an interest in mitigating bias.


So, through our data and our use of data, we're actually exacerbating existing biases in our society. We're acting in unethical ways. We're not building our tools to be privacy-preserving. We're not building our tools in a way that we can control them, or where we have adequate transparency and explainability. And finally, these tools have introduced a whole new set of cybersecurity attack vectors, and we have to manage those risks as well. Otherwise, people who are interested in committing fraud and malfeasance will get their hands on some very important data about the individuals who are involved with these tools. So, the risks are much, much higher. And then, when you couple that with the huge advances in artificial intelligence, algorithmic systems, and autonomous systems, the need is great. Because now the harms and the negative impacts are great.



Priya Keshav:

So, I was reading McKinsey's report on the state of AI in 2022, and one of the things that they point out is that AI adoption has more than doubled. It has actually grown by about 2.5x since 2017, and it has somewhat plateaued in some areas in the last few years. But if you read that report, one of the things that is also highlighted is that while AI use has increased, there has been no substantial increase in the reported mitigation of AI-related risks since 2019.

Which means not much attention has been paid to mitigating AI-related risk. I'm not sure why that's the case. Maybe it's because people don't understand the risk. Maybe there isn't enough regulation yet. But what are your thoughts around this particular study?



Ryan Carrier:

So, in any industry, as it advances, what eventually happens is people start to recognize that there are harms and negative impacts, and they begin to say we need regulations or we need governance. There are three mechanisms that we have in a capitalist society to change the behaviour of corporations, who are not incentivized in the short term to mitigate these risks. And again, that's proven over long, long periods of capitalist society.

So, the first of those three ways to change the behaviour of corporations is when governments provide laws and enforcement, and they regulate a particular industry, requiring minimum standards - usually some very basic things to ensure safety, good operations, and good controls. Now, AI and algorithmic systems are highly complex, and so we see five different vectors of risk: ethics, bias, privacy, trust, and cybersecurity.


So, when we talk about the three ways that we can change corporate behaviour, we have governments, who can enforce risk controls over those five vectors. Or we have lawsuits - and this is a specialty of the United States - where we actually begin to have plaintiff's-side attorneys who are bringing cases of harm, and there are harms here, let's be very clear about that. They bring cases of harm, their clients, the plaintiffs, win, and they win judgments. And the result is we get a shift in the economics: the costs of bad or insufficient behaviour start to rise. And so we start to change that equation, and now responsible and safe AI becomes profitable, and irresponsible and dangerous AI becomes costly. And that can be true because of enforcement through laws, through lawsuits and raising the stakes of liability, and then the third mechanism, which is key, and which we do not have today: an informed consumer base.


So, when consumers begin to demand safe and responsible tools, and they begin to move away from and stop buying dangerous and irresponsible artificial intelligence or algorithmic systems, or whatever it might be, that's when we begin to change the supply-demand function. Those who are producing and taking the time to build responsible and safe AI into their process begin to be rewarded, even if their costs are slightly higher, because now the consumer market demands it - and the consumer market can be moms and pops, or it can actually be the companies and corporations who are taking these tools and embedding them in other solutions. When those downstream corporations demand more indemnifications and representations and warranties, when they demand liability coverage when things go wrong, that changes that equation as well.


And so those are the three key mechanisms that we have, societally, to change this behaviour: from an unregulated market where it's simply speed to market - how fast can I put it out, don't worry about the risks - to one where we'd better have robust risk management in place, otherwise it's going to get expensive pretty quickly.


Priya Keshav:

So, you mentioned five vectors, or five areas, in which you can have risk, and some of this has also been published. I mean, there has been a lot of work that people have introduced on how to manage or govern AI, and most of it is principle-based. But the problem with the principle-based approach is that it's a high-level, what I would call, goal as opposed to a way to actually govern or audit. And from what I know, you provide a mechanism that takes those principle-based guidelines and converts them into more of an audit criterion, where you provide and train people on how to audit AI-based systems.


So, maybe you can talk a little bit about what you do and how you go about doing audits or training? How was the audit methodology developed?


Ryan Carrier:

It would be my pleasure. It's our primary work. So, I appreciate being able to talk about it and why we focus on it.

So, before I get into the details of independent audit, the thing that I want to explain is that it's built on a very simple premise about our human nature. Most people want to do good things. Most people are not out to harm people, and so they try to put in place good controls, they try to do governance, they try to do the right thing with data management - lots of different ways where risks might come up. They try to do these things. Until they get busy. Until they get distracted, until their boss says this product has to go out because we have to win market share, and now they're under pressure. It is simple human nature that when we do things ourselves as a self-assessment process, it falls short. It has been proven time and time and time again.


Independent audit of AI systems changes that equation, and we can equate this to the way that we teach. When we teach our students, we don't turn them loose and say good luck with your learning - you know, if you read these books, you'll turn out OK. We have teachers who say, "You have homework assignments. Did you do your homework assignment?" Or, with exams, "Did you pass or fail that exam?"


In other words, we have a third party who's coming in to check. And what that does is change the very nature in which we operate. Students complete their homework because they don't want to get a failing grade. They take their tests and study and prepare because they want to pass the test, and they know that the teacher is going to check them. That is the role of an independent auditor. An independent auditor actually conducts their audits over companies, not for or with companies. And that's because an independent auditor conducts their audit on behalf of society, not on behalf of the company itself, because they're essentially saying society has said - here are a set of rules that you need to comply with to be safe, to be responsible, whatever the terminology might be, but here is the set of rules. The independent auditor says, on behalf of society as a proxy, we will go in and check. And when we attach that word - independence - which is codified in law by Sarbanes-Oxley in 2002, now we have objective, trained specialists who are held to a high standard of conduct and who cannot have other incentives to pass a client.


In other words, the only incentive an independent auditor can have is the money for their fees. They can't provide advice, guidance, strategy, software platforms, hardware - nothing. Because we don't want that auditor saying, gosh, if I tell them they're not compliant, I might lose my other revenue streams, they might be mad at me. We want them to be completely objective. And we hold these independent auditors to the risk of false assurance of compliance. Meaning, if they cannot prove that the auditee is compliant, then they can be held liable for false assurance of compliance.


So, an independent auditor uses third-party independent rules, they are trained practitioners held to a high standard by a code of ethics and professional conduct, and now they're looking over these companies to basically say - have you met the requirements?


Well, now we can create an infrastructure of trust, and we've done this for 50 years in finance, and it works magnificently well. Not perfectly - there are no perfect systems - but this works magnificently well, where people's trust is well placed in independently audited financial accounts and reporting. People are wise, from a time and management perspective, to trust those numbers.


And so what we advocate, what we've been building for years now, is implementable, auditable criteria, where auditable means that a third-party independent auditor can say compliant or not compliant. There is no grey area, unlike a lot of the frameworks you were talking about. So, we've taken all of those frameworks, we've taken the laws like GDPR, the EU AI Act, the Digital Services Act, CCPA in California, Age-Appropriate Design, the New York City bias audit - I can go on and on. We take all of these laws, and we craft them into auditable criteria.


And then, because we have no authority and we seek no authority, we take those audit criteria back to governments and regulators, and we seek their approval for those rules. When they approve those rules, we then train everybody in the marketplace to be auditors or to provide pre-audit services or compliance, so that they can all conduct the business, and all ForHumanity is trying to do is help that ecosystem be established and make sure that it's a level and uniform playing field, where people are all held to the same standard. The reason we do all of this is because of that mission statement I mentioned before. We know that independent audit is the best way to assure this infrastructure of trust, to assure compliance with all of these rules. Because when we mitigate risk with all of those rules, that's when we get the best result for humans. And that's why ForHumanity does what it does. It's a very long answer, but I hope it was helpful, Priya.


Priya Keshav:

Absolutely. But you mentioned the AI Act. We should talk a little bit about that because this is the first-ever AI-specific regulation, and it's the first major regulation of its kind in Europe, right? So, what are your thoughts about the AI Act - the good, the bad? What does it cover? Maybe where does it fall short? And maybe you can also say whether the references to AI in GDPR, CCPA, and things like the AI Act are adequate, or do we need to go further? Is it helping with mitigating risk?


Ryan Carrier:

No, I think I could go for a couple of days on this, Priya. We just finished teaching a course on the EU AI Act for certified auditors - in other words, we're training people to be certified auditors on the EU AI Act - and that course is 42 days long, a half hour per day. So, it's 21 hours of study, and I could talk around the subject, but I will be brief, and I will stay top-level.


The EU AI Act is the first major, sort of, global governance of what are known as high-risk AI systems. But I would tell you that it is coupled with privacy laws and data protection laws like GDPR. In other words, when personal data is present, it's actually GDPR that's more important than the Artificial Intelligence Act. The EU AI Act looks like it will be passed before the summertime. It will probably have about a two-year implementation phase, and it defines prohibited systems - things like scoring social behaviour, detrimental psychological nudging, and real-time biometric analysis that isn't done properly, especially when it has a legal impact on persons. So, they've identified some prohibited systems, and ForHumanity always supports countries or regions that are interested in prohibiting things - if they want to do that, we support it.


The next thing that it does is identify nine categories of high-risk AI - it might be effectively ten, depending on how they delineate - and I'll just cover a few of them: critical infrastructure; anything work-related; financial, like credit scoring and the issuance of insurance; biometric identification - these are just some examples. Policing is an important one, and border control and immigration is another. So, they identify these high-risk AIs and then they build in requirements for those tools, mainly that each one of these tools has to go through a conformity assessment. The most recent addition to this list of high risk is what are known as foundation models. And of course, all the rage, ChatGPT is an example of a foundation model - generative AI used for large language models, LLMs, and now GPT-4, which is known as an LMM, a large multi-modal model. It's not easy to say.

But these foundation models are now going to be covered by the Act. We actually haven't seen the text yet - we've heard press releases on the details, and we are waiting on that text. And so, what they've established is that you have to have good data governance. You have to have risk management operations in place, and monitoring. You have to have human oversight of these systems.


And then finally, it all has to come together in a quality management system. But also importantly, they've introduced liability frameworks between providers and users of these tools. So, the EU AI Act does a very nice job of covering all those things. There are a few words and lines in there that are a little disconcerting, though they seem to be making their way out of the language. One of the ones that really caught people out originally was that the data had to be complete and free from errors, which of course is an impossibility. So that freaked people out, but the intent is this - when you're using data to train these models - and by the way, if you take GPT or GPT-4, it's trained on literally the entirety of the Western-language text that's online - the data that you have coming in is a mishmash of high quality and low quality. It could be wrong, it could be misinterpreted; there's a whole bunch of problems that can come through with that data. And what the Act is saying is, if you're training, you know the old adage - garbage in, garbage out. So, what they're saying is make sure that your data is high quality. Make sure you've established procedures around drawing it in: what its source is, what its authenticity is, what its authority is, the data's age. There are all kinds of elements of that that actually just make the model robust and good.


And there are going to be some people who look at that and say, wow, that's really hard, that's really onerous, it might be more expensive to do it this way. My answer to that is: tough. Seriously, tough. Because this is just basic good practice that you'd learn in any sort of data science course on how to actually manage your data and make sure that it's good quality. And then, you know, you ask me, should you not monitor your system or your tool when it goes out? I mean, these are crazy objections to really basic oversight that's being required by the EU AI Act.
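
To make that kind of data-governance gate concrete, here is a minimal sketch of checking a training record's source, authenticity, and age before it is used. This is only an illustration, not ForHumanity's criteria or anything prescribed by the Act; the field names and the freshness threshold are assumptions.

```python
# Illustrative sketch: gate training data on basic provenance checks
# (source, authenticity, age). Field names and thresholds are assumptions.

from datetime import date, timedelta

MAX_AGE = timedelta(days=5 * 365)   # assumed freshness threshold


def passes_provenance_checks(record: dict, today: date) -> bool:
    """True only if the record documents its source, has been
    authenticated, and is not older than the assumed threshold."""
    has_source = bool(record.get("source"))
    is_authenticated = record.get("authenticated") is True
    collected = record.get("collected_on")
    is_fresh = collected is not None and (today - collected) <= MAX_AGE
    return has_source and is_authenticated and is_fresh


# Example: only documented, authenticated, recent records reach training.
records = [
    {"source": "licensed news archive", "authenticated": True,
     "collected_on": date(2022, 6, 1), "text": "..."},
    {"source": None, "authenticated": False,
     "collected_on": date(2011, 1, 1), "text": "..."},
]
training_set = [r for r in records if passes_provenance_checks(r, date(2023, 5, 1))]
print(f"{len(training_set)} of {len(records)} records admitted")
```

The point is simply that each record carries documented provenance and is screened before it influences the model, which is the "garbage in, garbage out" discipline Ryan describes.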


So, I don't find too many things amiss with the AI Act. I would tell you that I think they've set themselves up for - failure is not the right word, but they've set themselves up for extra work that they don't need to do. At ForHumanity, we've created an entire audit criterion that matches up to the law already, so we already have a set of audit criteria in place, but we handle risk slightly differently. We say prohibited - out. Good, and we agree with the Act that way. We then say let's identify low-risk AIs - things that simply do not have harms or negative impacts to humans. And the reason we do that is we want to get those off the table. We want small and medium enterprises and simple usages of these tools to not need to be governed by this. Everything that's left, we say, should go through an audit. So, if it's not prohibited and it's not low risk, everything left should go through an audit, because it should be a risk-based audit. If it's still not a very risky system, the audit should be fairly simple, and if it's a very risky system, there should be a lot more to do with it.


That's how we approach risk. Currently, the Act says prohibited is out and these things are high risk, and it really leaves a whole window of things that actually do have risks to humans uncovered. So that's one place we disagree, but we also think that eventually the EU will catch up to us. We understand that these are journeys.


The last thing I'll say is that these tools also use personal data, so GDPR is very present, and any other privacy and data protection law that's in place will impact these systems as well, and they actually have to be done together. And then the nice thing that we see growing is also looking out for children online, looking out for disability, accessibility, and inclusion online. These tools can be quite exclusive or excluding. And so, what we like and what we see is that there are increasing requirements to consider how to be more inclusive, how to make sure that we have diverse input and multi-stakeholder feedback when we assess risks. So again, a very long answer, Priya. But that's sort of the EU AI Act and how we've begun to interact and work with the laws that are being established out there.


Priya Keshav:

No, you brought up a lot of very good and interesting points. So, you mentioned data and the fact that it's garbage in, garbage out, right? But I was just thinking, while it's onerous, if the data is biased, if the data is skewed... I remember five years ago, there were a few AI projects. And, you know, everybody wanted to do an AI project. But many did not have successful implementations because they started off with really bad data. They did not understand what their data was doing. They did not have enough data points, and obviously, with garbage in, they got models that were not a reflection of reality.

So, it wasn't working well at all. That's because they started off with bad data. So, while it sounds onerous, it probably saves a lot of time and money for companies if they thought about this ahead of time. So, I agree with you on that.

And then, you know, you also brought up another very interesting point. You have these systems that are, let's say, prohibited or maybe very high risk, that need to be properly monitored and implemented the right way. And maybe there are very easy use cases, but then there are systems in the middle where there are real risks, but nobody is really thinking about them. So, let's say I'm the chief technology officer of a company and I'm trying to do the right thing, like you said. How do I even bucket them? I obviously can easily understand the very high-risk ones because, you know, it's easy to identify them. But how do I bucket the medium-risk and the low-risk ones, and what can I do in terms of tools? Or are there some easy ways to be able to say, OK, this bucket maybe is low risk versus this one I need to pay attention to?


Ryan Carrier:

What you're highlighting is the need for a robust risk management process. Now, NIST has recently completed what I would call a more high-level framework. ISO, the standards body, is also working on artificial intelligence risk. And then the EU AI Act demands an AI risk management framework. Most of these are a little bit high level. And so, what ForHumanity has done is create auditable criteria that allow you to fully operationalize a risk management process.


Now, a big part of that process is just sitting down and identifying risks. When you identify those risks, then it's almost like a voting system, where we all bring our inputs to identify the severity and likelihood of each individual risk input. And you might have something that's a risk of life and death, but it's very unlikely to happen. Well, you have to measure that risk, right - that combination - versus a risk that's kind of a petty annoyance, but if it literally happens to everybody, well, then that becomes a more important risk, and you begin to balance and identify these.
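
A minimal sketch of that severity-and-likelihood balancing might look like the following. It is only an illustration, not ForHumanity's methodology; the 1-5 scales, the averaged votes, the multiplication, and the example entries are all assumptions.

```python
# Illustrative sketch: combine assessors' severity and likelihood votes
# into a single score per risk, then rank risks for mitigation.
from statistics import mean

# Each assessor scores severity and likelihood on an assumed 1-5 scale.
risk_register = {
    "model produces biased credit decisions": {
        "severity": [5, 4, 5],      # votes from three assessors
        "likelihood": [3, 3, 2],
    },
    "chatbot gives a mildly irrelevant answer": {
        "severity": [1, 2, 1],
        "likelihood": [5, 5, 4],
    },
}


def risk_score(entry: dict) -> float:
    """Average the votes, then combine severity and likelihood."""
    return mean(entry["severity"]) * mean(entry["likelihood"])


# Rank risks so mitigation effort goes to the highest combined scores first.
for name, entry in sorted(risk_register.items(),
                          key=lambda kv: risk_score(kv[1]), reverse=True):
    print(f"{risk_score(entry):5.1f}  {name}")
```

In practice, the scales, the weighting, and who gets a vote would come from the diverse input and multi-stakeholder feedback Ryan describes next.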


The process needs to include diverse input and multi-stakeholder feedback. One of the big problems that we see in risk assessment today is that it's being done by designers and developers who not only suffer from their own biases, like sunk-cost bias and confirmation bias, but are also not trained in risk management. They're trained in design and development - that's what they do. And so, what we need are people with expertise who can make good judgments on what these risks look like and identify risk controls and risk mitigations along the way, but who also make sure that we have a full spectrum of stakeholders present. Not just, you know, 30 white dudes who are middle-aged or even young, right? We need to have the people who are impacted. We need to have people who will use these tools. We need to have those in the global North and the global South. We need to have males and females. We need to have multiple races. This is what we call diverse input and multi-stakeholder feedback. Because diversity is key to identifying risks that others may not see, and that diversity comes from different protected categories.


But, oh, by the way, we could have old and young, male and female, black, white, brown, yellow - but if we all attended MIT, we might all think in a similar way. So, diversity means more than just protected categories. It also means diversity of thought and diversity of lived experience. Someone who grew up in a fishing village in Scandinavia is going to have different perspectives than a person who grew up in a favela in Brazil or a person who grew up in farm country in the middle of the United States. Those different perspectives perceive risk differently; they might face unique impacts from these tools because of their situations.


And so, we need to make sure that we have this - we call it DI and MSF, diverse input and multi-stakeholder feedback - with assessors who are examining when something is lower risk and when something is higher risk. And by the way, we can take the tool and assess its risk with no mitigations or controls in place, but we may be able to mitigate a lot of these risks along the way. And when we can, we lower the whole risk of the tool, and that's actually when we maximize the benefit. That's the exact nature of what ForHumanity is trying to do with independent audit of AI systems - let's mitigate as much risk as possible.


Priya Keshav:

Yep, you mentioned ChatGPT, and there's no way we can talk about AI without talking about ChatGPT - it's everywhere, you know, for all its benefits. I mean, obviously, I was reading a report that almost every company is thinking about doing something with ChatGPT in the next few years. But most of them also believe that they don't have the skills and the expertise to be able to figure out how to implement it correctly, which is a concern.


And then you see existing problems already, right? It's one of those tools where you've already seen millions of users - hundreds of millions of users, rather - and also breaches, being banned and then once again allowed in Italy, and, you know, it's been fast-tracked pretty much.


We talked about the need for governance around foundation models. Beyond that, as I look to implement ChatGPT-based solutions, how should I approach it from a governance perspective?


Ryan Carrier:

I have a couple of examples for you. When new drugs are developed to treat disease, do we just throw them out on the marketplace and let everybody try them out? Absolutely not. We have strict regulations. The government requires clinical trials, which are strictly governed and tracked so that we can identify side effects in advance. Because the assumption is that the drug is good and useful, right? That's why it's created, and that's why they're going to go to the expense of trialling it.


And so, we go through these very controlled trials, and then at the end, when we offer the drug, we basically say, here's why you're taking this drug. And, by the way, here are the risks and side effects, right? And so, we have informed users and takers.


When it comes to airlines, we don't build an airplane and say, oh, you know, give it to America and they can go fly it right now. Those things get tested in every single way, in every single environment - thousands and thousands of hours of testing before that plane is allowed to fly. The same with our cars. We have billions and billions of miles of testing with autonomous vehicles, OK? And now we get a cute little prompt-engineering tool, an LLM called ChatGPT, and it gets turned loose on society, and they say - but we needed real-world testing. Why is this any different?


We've already seen harms. There's a gentleman in Belgium who killed himself after talking to a GPT-based chatbot - basically, he got so deep in, and he got so depressed, that he couldn't live anymore. We see corporate damage. Samsung has banned the use of ChatGPT. Why? Because their engineers were putting sensitive company data into the tool, and they basically leaked critical information out into the marketplace. OpenAI already had privacy failures where other people's searches were popping up and being made available to others. There are a lot - a lot - of harms and risks associated with this tool. Why it's being allowed to just be tested on the real world in this manner is beyond me and makes absolutely no sense, other than that it's coming from a world that has no governance and no regulations, unlike health care or transportation or financial models, which get tested over and over and over again before they're allowed to be used.


So, I see this as a last remnant - probably the last real-world trial like this - because the reaction has been so robust, so angry, and there's now sufficient concern in the marketplace. I mean, look, there was a petition calling for a six-month moratorium on development of these tools. Now, ForHumanity didn't support that moratorium. We didn't feel it was an effective way forward, we felt it was picking winners and losers, and it wasn't the right way to govern these systems and tools.


But it became a mainstream conversation about the risks associated with these tools. Gary Marcus and Geoffrey Hinton, who are sort of two of the four so-called godfathers of AI development, are both railing about the concerns and risks. Geoffrey Hinton just left his role with Google so that he could feel free to highlight these risks. We would be unwise not to listen to their advice and essentially require governance, oversight, and accountability of these tools - and we have none currently. That is the main mission of what ForHumanity does; we aim to do it in a practical way that has been proven over a 50-year track record through financial audit.


And so, we aim to replicate that track record and that process, and we believe it brings enormous governance, oversight, and accountability to these tools, and that's why all of us spend all of our time working towards this mission.


Priya Keshav:

Any other closing thoughts?


Ryan Carrier:

Again, I could go for days. We talked about those three key mechanisms for changing behaviour, and we need to raise consumer, retail, mom-and-pop awareness of these risks, not just that it's cool and interesting. We as a society - humanity - have this obsession with new technology. It has really built in an automation bias. Every single person on this planet suffers from automation bias, and we need to wake up. We need to have a higher awareness that these tools are imperfect, that these tools can lead to harms. We need to raise our awareness of when we can trust a tool or when we should be skeptical of a tool, and this is going to become much, much more important.


These tools are massively, exponentially increasing information. Information is actually relatively useless, because if you have too much information, you don't even know how to pick out the critical pieces. You don't know what's fake from real. And so, what we really should be seeking is knowledge and wisdom, and that's when you combine information in ways to achieve knowledge or, even better, to achieve wisdom - when you consider all the impacts from the decisions that you make. And so, the goal over time will be to help people, to empower people, to increase their understanding of what is good information versus bad information, what is fake versus what is real, what is crafted and created and how to enjoy that, versus where we need to develop our tools to actually reach conclusions that resolve problems and improve our overall well-being.


So, it's really this idea that we need to overcome this automation bias in all of us and try to make sure that we are approaching these tools with a wisdom that is thinking about when I can trust the tool - when I can rely on my calculator to give me an answer. But what process did I go through to learn that I could trust that calculator?


We need to have a more robust process with our sources of information, with our artificial intelligence tools, which are increasingly becoming tools that you and I can have in our hands to achieve things. And so, we're going to have to raise the bar substantially. That's something that ForHumanity is working on as well. But ForHumanity is also a place that anyone can come to have their voice heard, to change the direction and path that we're on, to improve the well-being that we receive from all of these tools by engaging in these processes, in this governance, oversight, and accountability.


So, I hope people will come check us out, get involved, and have their voice heard. And if I may, they can find us on the web; ForHumanity.centre is the main website. We also teach. We teach foundations of independent audit of AI systems. We teach compliance - how to comply with laws and rules like GDPR, the EU AI Act, the Children's Code, and so on.


And so, we offer all of these services, but it's an all-volunteer organization. So, we hope you'll come and either find your voice, or find a way to help, or find a way to get involved, so we can all get a better result and increase all of our well-being through the use of these tools.


Priya Keshav:

It was a pleasure to talk to you. Thank you for joining us today, Ryan.


Ryan Carrier:

My pleasure, Priya. Thank you for having me on. And thank you for the time.
