
Ethics in AI: Are we there yet?

As the tech industry grows increasingly fascinated by AI and its capabilities, many are turning a blind eye to its darker side. With great power comes great responsibility, and this holds especially true in the world of AI and machine learning (ML).

While AI-based models automate and expedite tasks, they have been found to exhibit bias on several occasions. AI systems can also be hacked: as we rely more and more on automated decision-making, bad actors can and will employ techniques like adversarial machine learning and data poisoning to subvert our AI systems.
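To make data poisoning concrete, here is a minimal sketch in pure Python. The classifier (a toy nearest-centroid model) and the loan-style scores are entirely hypothetical, chosen only to show the mechanics: an attacker who can inject mislabeled training points shifts the model's learned boundary, flipping a decision the clean model would have made.

```python
def centroid_classifier(train):
    """Toy nearest-centroid classifier: predict the class whose mean is closest."""
    sums, counts = {}, {}
    for x, label in train:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    centroids = {label: sums[label] / counts[label] for label in sums}

    def predict(x):
        return min(centroids, key=lambda label: abs(x - centroids[label]))
    return predict

# Clean training data: "approve" scores cluster near 8, "deny" near 2.
clean = [(8, "approve"), (9, "approve"), (7, "approve"),
         (2, "deny"), (1, "deny"), (3, "deny")]

# Poisoning: the attacker injects low scores mislabeled as "approve",
# dragging the "approve" centroid toward the "deny" region.
poisoned = clean + [(3, "approve"), (2.5, "approve"), (3.5, "approve")]

print(centroid_classifier(clean)(4))     # -> deny
print(centroid_classifier(poisoned)(4))  # -> approve
```

The same borderline applicant (score 4) is denied by the clean model but approved by the poisoned one, even though only three training points changed.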

Last month, Twitter announced a bounty challenge for detecting bias in its image cropping algorithm. The announcement comes months after its algorithm was found to be occasionally discriminating based on race and gender.

However, this is not an isolated case. In the last few years, several companies have discovered underlying bias and discrimination in their AI systems. For example, in 2018, Amazon had to scrap its AI recruiting tool after it was found to be unintentionally favoring men for open technical roles. Such biases can have dangerous repercussions when these systems inform high-stakes decisions in areas such as credit, criminal justice, and healthcare.

The biggest problem is that bias in these AI models is difficult to find, and by the time it is detected, the damage may already be done. When AI-based systems learn from data to derive outcomes, they end up amplifying the biases embedded in that data. Even datasets with billions of data points may be biased and insufficiently diverse.

Contrary to popular belief, simply omitting parameters like age, gender, and race from your AI model does not mean you have eliminated bias. It is crucial to audit datasets for diversity and to understand how the system is making decisions. But this requires more governance and rigor around AI.
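One simple, widely used form of the audit described above is to compare a model's selection rates across groups. The sketch below is an illustrative assumption, not a complete fairness methodology: it computes per-group rates from hypothetical screening outcomes and flags a disparate impact ratio below the "four-fifths" rule of thumb used in US hiring guidelines.

```python
def selection_rates(decisions):
    """decisions: list of (group, selected: bool) -> selection rate per group."""
    totals, selected = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + (1 if ok else 0)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(decisions):
    """Ratio of the lowest group selection rate to the highest (1.0 = parity)."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical outcomes from a screening model: group_a selected 60/100,
# group_b selected 30/100.
outcomes = ([("group_a", True)] * 60 + [("group_a", False)] * 40 +
            [("group_b", True)] * 30 + [("group_b", False)] * 70)

ratio = disparate_impact(outcomes)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule of thumb
    print("potential adverse impact -- investigate the model and its data")
```

Note that nothing here requires the model to use group membership as an input; a proxy variable (such as zip code) can produce the same disparity, which is why outcome audits matter even after sensitive fields are removed.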

The debate around ethical AI has been at the forefront of AI development for some time now. How we incorporate ethics and governance into AI is a challenge that must be addressed, and doing so is imperative if we aim to make AI more accurate and useful.

“Many organizations have come around to seeing the business imperative of an AI ethical risk program. Countless news reports — from faulty and discriminatory facial recognition to privacy violations to black-box algorithms with life-altering consequences — have put it on the agendas of boards, CEOs, and Chief Data and Analytics Officers. What most leaders don’t understand, however, is that addressing these risks requires raising awareness of them across their entire organization. Those that do understand this often don’t know how to proceed.” (from “Everyone in Your Organization Needs to Understand AI Ethics”, Harvard Business Review)

What goes into it?

Building a program around AI ethics requires cross-functional collaboration, trust, and transparency amongst all stakeholders. We need transparency around the type of data being used to train the AI system, how it is being trained, and what recommendations are being fed into its algorithm.

The more we learn about the underlying datasets, decision-making processes, and so on, the easier it becomes to eliminate biased decisions and recommendations.

We also need a better understanding of the security threats AI poses, as these attacks are not easily detectable through traditional methods. AI is an emerging technology, which means both the technology and its usage are rapidly evolving. The threats posed by these technologies are likely to evolve and shift as well. Continuous education, cross-functional collaboration, and an adaptable mindset will be crucial to ensure the organization does not fall behind in managing these risks.
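A brief sketch of why these attacks evade traditional checks: in an evasion attack, the adversary nudges an input by an amount too small to trip ordinary input validation, yet large enough to flip the model's decision. The toy linear model and its weights below are made up for illustration; the perturbation follows the sign of each weight, in the spirit of gradient-sign attacks.

```python
# Hypothetical trained linear model: score >= 0 means "flagged".
weights = [2.0, -1.0]
bias = -0.5

def score(x):
    return sum(w * xi for w, xi in zip(weights, x)) + bias

def classify(x):
    return "flagged" if score(x) >= 0 else "clear"

x = [0.4, 0.1]          # genuinely suspicious input: score = 0.2 -> "flagged"

# Evasion: shift each feature by a small eps against the weight's sign,
# the direction that most quickly lowers the score.
eps = 0.2
x_adv = [xi - eps * (1 if w > 0 else -1) for xi, w in zip(x, weights)]

print(classify(x))      # -> flagged
print(classify(x_adv))  # -> clear
```

Every perturbed feature still lies in a plausible range, so a range or format check sees nothing wrong; defenses have to reason about the model's behavior, not just the input's surface validity.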

Building a program to understand risk

It’s easy to dismiss AI and the threats it poses as futuristic. But AI is not futuristic: it is distributed, ubiquitous, and an integral part of our everyday lives. It is already used in everyday applications, as it is easily available and can be deployed with modest resources.

However, very little awareness or tracking of AI exists, making it almost impossible to understand its impact or govern it effectively. AI applications have enormous potential to fundamentally transform our lives, and AI is already changing the entire fabric of our society. At the same time, it poses real issues around ethics and security that can cause great harm.

As they embark on new AI projects, organizations need to ask some serious questions about AI and how the risks around it should be managed.

1) Who holds overall accountability for any risks these technologies pose?

2) Do we understand the unintended consequences of AI?

3) How do we protect AI from hackers, and what are the new threats posed by these systems?

4) What is our framework for the ethical, transparent, and accountable use of AI?

5) How can we bring together a cross-functional group of people to understand and effectively manage a program around responsible AI?
