
Simplify for Success - Conversation with Nancy Nemes


We invited Nancy Nemes on Simplify for Success, a podcast series presented by Meru Data and hosted by Priya Keshav.


With expertise in AI, IoT, mobility, cybersecurity, and embedded systems, Nancy discussed the challenges in humanizing AI and the intersection of privacy and security with AI.


She also spoke about the need to regulate AI without restricting its development and ways for companies to build consumer trust in AI.

Thank you to Fesliyan Studios for the background music.

*Views and opinions expressed by guests do not necessarily reflect the view of Meru Data.*






Transcript:


Priya Keshav:

Hello everyone, welcome to our podcast around simplifying for success. Simplification requires discipline and clarity of thought. This is not often easy in today's rapid-paced work environment. We've invited a few colleagues in the data and information governance space to share their strategies and approaches for simplification.


Today, we will be speaking with Nancy Nemes. Nancy is a managing director at the digital startup Nemes Ventures. She focuses on AI, IoT, mobility, cybersecurity, and embedded systems. She is a keynote speaker, author, and blogger, and advises many boards. Prior to Nemes Ventures, she was a leader at Google and Microsoft. She was named one of the most inspiring businesswomen in European technology by Inspiring Fifty.


Hi Nancy, welcome to the show.


Nancy Nemes:

Hi Priya, thank you so much for having me.


Priya Keshav:

So, you've been working towards humanizing AI, and we want to focus on the intersection of privacy, security, and AI. How do you think AI impacts privacy and security?


Nancy Nemes:

Yeah, I am indeed working on putting the human touch on technology in general, and AI is part of it. We love using this word these days, artificial intelligence. The reality is, it's nothing new, and really that intersection you mentioned, Priya, of privacy, security, and AI is becoming more and more important. The first concern with AI is that it can potentially replicate, reinforce, or even amplify harmful biases. We know that we, as humans, are biased, so if we want to ensure that we put the human at the center, then the main question at that intersection is: how do we avoid biases? That is a human issue, and these biases can proliferate depending on the nature of the data collection that we perform, so you can have issues like the spillover effect.


If you look at one of the most sensitive areas, it is facial recognition, and those systems have now been deployed for years in cities and airports around the world. But look at, for example, how China is using facial recognition as a tool for authoritarian control, for example in Shenzhen, which we see examples of these days; there and elsewhere it is opening up a large opposition movement to this expansion. So many privacy issues are emerging, and we can see that various organizations have called for a ban on the use of facial recognition. And of course, machine recognition of faces has progressed very rapidly. We have all these examples of classifying fuzzy images of cats or dogs, which is part of the first steps toward facial recognition, but even for individual humans it is still imperfect, though it is advancing very quickly. And so you can see many, many cities around the world, also in the United States, San Francisco for example, Berkeley in California, but also Cambridge and Northampton in Massachusetts, even cities in Oregon, have adopted bans on the technology. So one very important challenge for regulators is to pass privacy legislation that protects individuals against the adverse effects that may arise from the use of personal information in AI, but at the same time without restricting AI development. So that intersection you are talking about is a highly sensitive balance: allowing AI development to continue in a positive, non-harmful way, while at the same time having the right legislation that allows us to manage that data in a private and secure way. And of course the challenges are big. So yeah, in a nutshell, it's really about the balance of how you regulate without restricting development.


Priya Keshav:

So, you talked about facial recognition, and obviously Facebook, for example, said that they're going to abandon their project, right? So, what are some of the biggest challenges with privacy and AI? We can probably focus on facial recognition or some other aspects of AI as well. And what are some of your tips to overcome these challenges?


Nancy Nemes:

That's a great question, and actually I'm glad you bring up the Facebook example, because of the role of the psychographic profiling of Facebook users and the scandal that happened with Cambridge Analytica, with all its implications for privacy within the US presidential elections. That added a lot to the erosion of confidence that we see today, and of course to the threats to democracy that we can see fueled by AI manipulating some of the levers of democracy. Also, you may know about the other U.S. company, Clearview AI, which violated Canadian privacy laws by collecting photographs of Canadian adults and even children for mass surveillance and facial recognition, without their consent and for commercial sale. That, of course, reduces trust and confidence in the AI business, and I think that's the number one major challenge: that reduction of trust and confidence, not only in the businesses that do this, but in an entire country's ability to appropriately direct all the matters related to privacy and AI.


A similar example is Microsoft, which removed a database of 10 million facial photographs that was being used by large organizations like Alibaba, IBM, and military researchers. Most of the people whose faces were in the dataset were not aware that their image had been included, and the companies are not doing this on purpose; it just happens that we wind up in very complex areas as we work on this.


We are not ready to address all these issues, so I would say there are three major issues that lead to that lack of trust, or to reduced trust. Number one: is the data accurate? Data accuracy is really important, and it's important to produce clean outputs. That means, of course, you have algorithms trained on huge datasets, and under-representation or over-representation of certain groups in those datasets can result in inaccurate outcomes. That can lead to harmful decisions. And that algorithmic bias, most of the time, is introduced unintentionally.


Nobody wants to do it on purpose, but you may remember the famous example of the voice recognition technology that fails to understand minority voices or female voices or children's voices, because the algorithms are built from databases that contain primarily male voices, right? So yeah, data accuracy is one of the big challenges.
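[Editor's note: to make the under-representation point concrete, here is a minimal sketch, not from the conversation itself, of how a team might check group balance in a training set. Python with pandas is assumed, and the `voice_samples.csv` file and its `speaker_group` column are hypothetical names.]

```python
# A minimal sketch of checking group representation in a training dataset.
# "voice_samples.csv" and its "speaker_group" column are illustrative names.
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Compare each group's share of the data against an equal-share baseline."""
    counts = df[group_col].value_counts()
    shares = counts / counts.sum()
    equal_share = 1.0 / len(counts)  # naive baseline: every group equally present
    return pd.DataFrame({
        "count": counts,
        "share": shares.round(3),
        # Values below 1.0 indicate under-representation vs. the baseline.
        "vs_equal_share": (shares / equal_share).round(2),
    }).sort_values("share")

if __name__ == "__main__":
    samples = pd.read_csv("voice_samples.csv")
    print(representation_report(samples, "speaker_group"))
```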


The other one is, of course: how do I protect my data? As we produce these huge datasets, these large amounts of data, you run into a much higher privacy risk, and it's very hard to ensure data protection in that space. Data leaks, for example, when privacy considerations are not taken into account, are a huge challenge. And then, when AI starts to see and define patterns, to draw conclusions and make decisions, you have an issue around data control. Look at AI that is used to score credit risk, for example; I've been working on a project about that in the United States just recently. AI is being used by financial institutions to score credit risk, but it is unintentionally cutting the credit lines of individuals who fit specific profiles. You have to understand that those decisions can happen without a customer's knowledge or consent, especially if the data driving the decisions is collected without your knowledge. How do you deal with that? That can lead, of course, to a lot of other details being collected that people may not be aware of, and when those details start being broadcast online, data control becomes a huge issue.


And so I think these are some of the key challenges. In terms of your question about how to address them: today we see new roles coming up in companies, let's say a Chief Privacy Officer or even a Chief Ethics Officer, to work on privacy in that context of AI. So how to create the right organizational structure is one aspect of addressing these challenges, but also, how do you look at the role of IT versus the broader business professionals who work in companies? That's a very important aspect. It's really the role of both the IT governance professionals and the privacy professionals to ensure that these AI-based privacy challenges receive proper oversight.


And on the other side, outside of the business world, there are key questions on how we address algorithmic discrimination, for example, as part of privacy legislation. To what extent can or should legislation address issues of algorithmic bias? I've seen this in projects we're working on recently. Discrimination, for example, is not necessarily a privacy issue, because it represents broad social issues that persist even without the collection and use of personal data. So who should address that? Is it the civil rights laws? Is it the CTO of New York City? Who should actually work on this? As soon as you start working on legislation, you open a huge Pandora's box, because of course it's a political issue that you start touching, right? Using personal information about specific attributes, and making automated decisions against or for the interests of specific individuals, brings up huge privacy interests in controlling how information is used. So on the one side you have the major regulations, the privacy legislation, which the public sector should look into, and then you have the private sector, the businesses, which have a huge role to play and a very big responsibility.


And so there are many measures around accountability, which we can talk about if you want, but those are some of the key challenges, I would say. As for how to overcome them, I think it's really all about collaborating. For example, in the United States I was working on a project where I was putting together specialists from the public sector, the private sector, startup accelerators, but also research think tanks, and bringing all these people together to try to address very, very specific issues around algorithmic bias. So I think it's only with broad collaboration, creating these bridges and involving the humanities, not just IT and business, but really having an ethicist sit at the table. Have a philosopher or ethicist who can help address or formulate specific questions; even having an anthropologist is important. So I think it's really important to broaden our thinking by collaborating across these segments.


Priya Keshav:

Yeah, so you talked about collaboration outside the company, across the board, across various stakeholders. But even within the company, you mentioned a little bit about IT and privacy, for example, getting together, and maybe IT and risk management getting together to form a coalition to look at and address these problems. So when you have these coalitions and you are trying to address some of these challenges, how can you use, if I can use the word, "metrics"? How do you know where the privacy implications are? At the end of the day, you need to be able to understand the issue at a very high level, but also be able to spot it, to know how to identify it and flag it for all the stakeholders, right? Whether it's bringing IT, risk, or privacy to the table to have a conversation about it and flag some of these things, so the issues are dealt with in a more proactive manner. What kind of metrics or measures can you put in place, and how have they helped improve your program? So, more examples of things that you've seen people do and what has worked well so far.


Nancy Nemes:

Yeah, that's really a great question, and I just want to bring up this example. I work with Microsoft in the United States right now, and we formed a coalition exactly like you said. We actually formed a coalition called the National Council for Artificial Intelligence, and we started about a year and a half ago. I have to say, not surprisingly, it was something very new, so people are just starting to talk about this. First, we had to drive awareness of what the issue is broadly, also in the media, and even at very sophisticated companies such as Microsoft and their partners. We started this national coalition for the financial services sector, and now we are doing this for the healthcare and insurance sectors. It's really amazing what we were able to do, because we formed this as a collaborative, collective effort where we said: let's take three or four or five of the major financial institutions and invite them to join this coalition, together with the technology company, with research think tanks such as the Brookings Institution in the United States, but also with academia, major universities from Stanford all the way to SUNY in New York and others. So you have the academic representation all the way to startup accelerators from Silicon Valley, and you bring in the people who are in charge of AI, in the broadest sense, to identify some of the key issues that need to be addressed. That's how one of the things we came up with was around fraud detection and the standards we need for it. Building that coalition in a diligent way, then really working on a framework for what is important for that particular industry and what issue we want to address, and then putting our brains together, sparked a really important dialogue which didn't exist before, because nobody had thought to put these people all together, to identify issues that are common to everyone, and then to really start to work on them.


And so, to your point and your question about what kind of metrics you would use: at the end of the day, we really have to go very deep and very specific. If you identify, let's say, algorithmic bias as an issue, then some of the key metrics around that are, for example, transparency, right? How do you use algorithmic decision making? What are the specific privacy issues that are most important to most consumers? We're talking about banks and financial institutions here. But then, how do we involve the regulators? What is the benchmark that allows us to examine data handling and keep the companies and all the other actors accountable? And of course, in the context of your question: what privacy disclosures require a description of what data is being collected, how it is being used, and how it is being protected, to enhance that benchmarking? So transparency is one of the key metrics.


And then of course, very famously, explainability. Transparency enables you to understand well in advance what the algorithm is doing, but explainability gives you more retroactive information about how algorithms were used in specific decisions. In that respect, the EU is using GDPR, and we know that it requires that any automated decision with specific legal effects, for example on credit lines, insurance coverage, or even employment, consider the person who is affected at the end of the day. There is a human in the loop at some point; we know that all the automation has to have an element of the human in the loop in specific processes. That's also an important metric: where is the point where you put the human in the loop for decisions that really affect people's lives, and how do you combine the power of the machine with human judgment and, of course, with empathy? Another very important metric is how you assess risk. Risk assessment is a very important way of trying to understand potential biases and design the data in a way that potentially reduces bias, so risk assessment is important to look at. And at the end of the day, you look at audits that evaluate your privacy practices end-to-end, so that you look at algorithmic decision making with both foresight and hindsight.
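[Editor's note: as a rough illustration of the human-in-the-loop point, not Nancy's implementation, an automated credit pipeline can auto-decide only confident, favorable cases and escalate everything else to a person. The threshold, labels, and names below are assumptions.]

```python
# A minimal human-in-the-loop routing sketch. The score source, the 0.8
# threshold, and the outcome labels are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class CreditDecision:
    applicant_id: str
    score: float   # model's estimated probability that credit should be approved
    outcome: str   # "approved" or "human_review"

def route(applicant_id: str, score: float, approve_at: float = 0.8) -> CreditDecision:
    """Automate only confident, favorable outcomes; escalate everything else."""
    if score >= approve_at:
        return CreditDecision(applicant_id, score, "approved")
    # Adverse or uncertain decisions affect people's lives (credit, insurance,
    # employment), so a person reviews them instead of the model auto-denying.
    return CreditDecision(applicant_id, score, "human_review")

print(route("A-1001", 0.92))  # outcome='approved'
print(route("A-1002", 0.41))  # outcome='human_review'
```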


One of the clear lessons from the debates we had in our coalitions, and we are actually publishing these in white papers we're working on right now, and in panel discussions where we have these experts talking, is the importance of being very pointed in our findings: not general aspects, but really very specific ones. We know that it's very important to ask key questions like: what happens if the algorithm is designed in a way that was not intended by the human being, and how do we deal with the people who were not treated equally? Even just having that type of conversation was new; it's pretty revolutionary. I can see that in the United States, and I think the United States is absolutely a pioneer and a leader in this area, so seeing that, we can advance this. I wish we had much more of these very pointed discussions here in Europe as well, where I feel like we have more of the political conversation, but not enough of the coalition type of conversation we see in the United States.


Priya Keshav:

So, you mentioned a few things, but I wanted to focus a little bit on the AI component, right? Most people have probably heard about bias in AI, and you looked at a couple of examples, right? Scenarios like making sure that when you're looking at people's credit risk there should be no bias, and the other big area of focus is obviously bias in hiring. These are obvious examples; they come up all the time, and I think there is some level of understanding around ethics and the need to have some controls around them. But how do you build awareness among technology professionals who are not looking at ethics and privacy on a regular basis? They probably know it, but I often wonder how much they recognize it, right? AI is everywhere, and decision making by AI is also pretty much everywhere. Maybe the impact of the decision varies, but, for example, facial recognition: how can it make a difference when you start incorporating it into all kinds of things? Similarly, chatbots, and customer ratings, feedback, and reviews: for example, our behavior when we take a ride with Uber or Lyft or any other car service, or when we stay in hotels or an Airbnb. So how do you build that awareness that AI is everywhere, and that there are decisions being made by AI that can have privacy implications, security implications, as well as biases?


Nancy Nemes:

Yeah, yeah, and that's a really good question, because we can still see all the hype around AI, I think, and we spoke about it a lot, right? It's due to that term; now we finally have a term, AI, that allows humans to be very creative in what they understand by it. But I would say the most important thing about building awareness is simply education. Just educate, and it is absolutely the job of everyone in a company, not just the technical people but the business, all the way from the CEO to everyone in the company. Why is this so important? Because having the right education and raising the right awareness will allow us to remove some of the hype and show people the opportunity, and of course the challenges as well, but most importantly the opportunity of creating strong communities of interest. What we can see today is that we really do not have the skilled people that we need. In driving that awareness around what AI is, the media has a huge role, of course: what can be done? What should we do? How can you create something that I'd like to call digital hygiene, in reference to what users can do? It's not just about the companies; it's about you and me, who are using our social media, for example.


And so, how can I ensure the security of my digital identity? This is really also one of the reasons why we have academia as part of the coalitions that we create: to ensure that we understand the right skills and the right types of education we need now and in the future, what type of knowledge we need to drive, and what kind of digital literacy we need to put into our education systems. I think that should start at a very, very early age, not just at the university stage. That digital literacy needs to be improved for people of all ages, all genders, all cultural backgrounds, throughout the entire world, so that everybody can learn about their rights online and about their opportunities with these tools that we already use everywhere. And creating those communities I was mentioning, that's the reason why I founded a community called humanAIze, and also a community called Ms. AI, just to raise awareness that we need more gender representation, more diversity, more inclusion at all levels. I really think you build that awareness through communities that are working on education, showing the opportunities but also the dangers, and on the opportunity to level up education in schools. Even though we have a lot of hype, and people speak about artificial general intelligence and conscious machines and all that stuff, which I think will take ages until it comes anywhere near what people or science fiction dream about, the impact of AI is still underestimated. That's where different organizations and associations should visit universities and schools, for example, and organize events to raise that awareness, like you said: what are the issues? How can they be addressed? And also show the benefits, like, what is your benefit if you really start looking into this? Of course, the awareness wouldn't exist without communities and lobbying, and governments are engaging now in specific lobbying opportunities; I think GDPR is one of those, so that we protect people in that cyberspace. We need to stay very careful and very attentive. Of course, harmful AI systems can affect people all over the world, and it can come to really ugly scenarios, but again, we can avoid those if we have the human in the loop, if we always ensure that technology is only an enabler for humans, and if we really create that multi-stakeholder system and keep being engaged, because we have the opportunity, just like in your podcast here, to really have a great collaboration. I think what you do, Priya, is great, because you are raising awareness with your podcasts on very specific, very deep tech topics, and the pandemic shows that we are being pushed into this cyberspace. It's essential that we know our rights and are clear on what our privacy should mean. We need to be attentive. I think the bottom line here is: we need to be careful and attentive about how this technology is being used, stand together for an equal and democratic digital space, and make it safe. And you make it safe by educating and raising awareness.


Priya Keshav:

As you build your diligence, accountability and audit framework for AI, how do you simplify to make sure that it's successful?


Nancy Nemes:

Yeah, I love it. OK, that's a really good question: how do you simplify in such a complex area, right? Well, that's both an art and a science, I would say, at the same time, considering that you have a complex landscape. Let's say, in this particular example of privacy and security, you have the ethics of technology trying to solve specific privacy challenges. Simplification here really means finding the balance between technological innovation, privacy considerations, and the development of socially responsible AI, and that balance should create public value in the long term. So how do you simplify that? I think that's a big question that I don't really have an answer to. I think, bottom line, it's about being clear that you have specific frameworks to use, and really realizing that the technological benefits are there, but they can also harm individual privacy. So, in a way, use good data hygiene and really focus on good datasets. We keep talking about good data; it's not just about volume but about the quality of data. So one way is to not focus on volume but really focus on quality of data; that way a developer can build simpler AI using accurate and fair datasets, and, where possible, AI algorithms can audit and ensure the quality of other algorithms. So even there, to simplify, you can use technology. I also think simplification means users have more control: when you allow users to know when their data is being used and how AI is being used to make decisions about them, transparency is a way of simplifying these things, and they should also be given a choice to consent to such data use. So, a few big rules that every society and every company should have can lead to that simplification, but I think it's still a long way to get there, because simplifying is an art and it's the most difficult thing to do.
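[Editor's note: one way to read the point about algorithms auditing other algorithms is a simple fairness audit over a model's logged decisions. Below is a minimal sketch; the column names and the choice of a demographic parity metric are this editor's illustrative assumptions, not part of the conversation.]

```python
# A minimal sketch of one algorithm auditing another's outputs: measuring
# the demographic parity gap in a log of past decisions. Names are assumed.
import pandas as pd

def demographic_parity_gap(log: pd.DataFrame,
                           group_col: str = "group",
                           outcome_col: str = "approved") -> float:
    """Largest difference in approval rate between any two groups (0 = parity)."""
    rates = log.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

decision_log = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   0,   1,   0],
})
gap = demographic_parity_gap(decision_log)
print(f"Demographic parity gap: {gap:.2f}")  # flag when above an agreed threshold
```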


Priya Keshav:

Any other closing thoughts?


Nancy Nemes:

So, Priya, I would say for this particular topic, which you chose for this podcast today, what's really important is for us as humans to stand together to make the digital space as safe, democratic, and inclusive as possible. It's important to really stay vigilant and to raise awareness, but also, importantly, not only to talk about protecting yourself but also about protecting others when it comes to your personal space. So, to conclude, I would say: if we don't stand up as individuals, and if we don't form communities of common interest, it will be hard to control this space, and it's really important to look at this from a democratic point of view. How do we keep our democracies functional? How do we avoid manipulation, and how do we really update our education systems to avoid polarization or radicalization, or even these echo chambers? That really happens through dialogues like this, I believe, and through continuing them in many different ways.


Priya Keshav:

Thank you so much for your time, Nancy. These were great, interesting thoughts, and I appreciate you taking the time to join us.


Nancy Nemes:

Thank you so much Priya. It's been a pleasure and I look forward to seeing what the audience will have to say about our conversation.

