
Simplify for Success - Conversation with Dheeraj Thimmaiah


We invited Dheeraj Thimmaiah to our #SimplifyForSuccess podcast series, hosted by Priya Keshav, to discuss IG programs.


Dheeraj joined Anheuser-Busch InBev’s Global Ethics & Compliance team in 2019, where he serves as a Global Director with a focus on Compliance Analytics and Technology.


Dheeraj discussed the use of analytics in compliance, maintaining data quality, and finding the right balance between people and automation. He shared the example of ABI’s Brew Right tool and how the team measures its ROI and effectiveness.









Listen to it here:


Transcript:


Priya Keshav:

Hello everyone, welcome to our podcast around simplifying for success. Simplification requires discipline and clarity of thought. This is not often easy in today's rapid-paced work environment. We've invited a few colleagues in the data and information governance space to share their strategies and approaches for simplification.


Today, we will be talking with Dheeraj Thimmaiah. Dheeraj joined Anheuser-Busch InBev's Global Ethics and Compliance team in 2019, where he serves as a Global Director with a focus on compliance analytics and technology. Before joining ABI, Dheeraj was a senior manager in EY’s forensic practice in New York, serving clients by designing, developing, and operationalizing enterprise compliance products focused on fraud, corruption, monitoring, controls, and policies. He's recognized as a leader in the application of data analytics to demystify compliance problems and streamline compliance products and processes enabled by technology.


Our topic for discussion today is the use of analytics in compliance. According to the 2016 Compliance Trends Survey that Deloitte carried out jointly with Compliance Week, fewer than one-third of chief compliance officers were confident or extremely confident in their organization’s ability to fulfill its ethics and compliance responsibilities and reporting requirements. Asked whether their organizations were leveraging technology to analyze big data across the enterprise to identify emerging risks, 56% said no. Obviously, a lot has changed since 2016, and more and more organizations are embracing data analytics. Data analytics holds the key to simplifying and improving the efficiency of an organization’s compliance process. Through data and continuous monitoring, you can identify new and emerging risks and ensure that you have adequate controls around them.

Dheeraj, you're one of the leaders in this space, and you're doing some amazing work around analytics at Anheuser-Busch InBev. So very excited to have you with us today. Welcome to the podcast.

Dheeraj Thimmaiah:

Thank you, Priya, for the great introduction and opening statement. Delighted to be here.

Priya Keshav:

At ABI, how has the approach to compliance shifted from reactive to proactive by leveraging the Brew Right product? Can you tell us about your experience in making that shift?

Dheeraj Thimmaiah:

Sure. I think it's important to rewind a little in our story and touch on the mindset. In your opening statement, you made a point about simplification and shared some stats on it, and for us, it was very much about that simplification, especially given our size as a company. We're one of the biggest brewers in the world, and we manufacture almost one in three or four beers consumed worldwide, so our portfolios are massive. When ABI bought SAB Miller in 2016, one of the biggest questions we asked was very much about process: how do we simplify compliance? Because when you run compliance and think about everything that can go wrong, it gets very complicated. How do we drive insights in the most concise manner? So a lot of questions were asked, starting with my boss, Matt Galvin, and his bosses, John Blood and Katie Barrett. The first question for them was how do we move ahead? And this is 2016, right? As you said from those stats, a lot of people in 2016 were not thinking about data analytics, but the journey was clear for us: if we needed to move forward, it was with analytics. We knew it would be baby steps to where we are today in 2021.

So for us, the journey from reactive to proactive was very clear. It's not that reactive is going to go away. Reactive is very much part of compliance, because you will have a hotline and other intakes for whistleblowers to provide complaints. Reactive is very much here, but the question is how you shift from being more of a reactive organization to more of a proactive one. For us, that came down to the benefits. First, there were going to be a lot of cost benefits for the company. Traditionally in a reactive investigation, you're flying forensic accountants and lawyers to different countries to write these reports, and if you look at the future return on investment, there are no learnings going back, because the same thing can occur in a different geography and we end up spending that cost again. The second benefit we were looking for is allocation of resources, and the third is consistency.

Now I'll pivot to how analytics helps. When you have analytics, you gather your insights from different data. It's not that proactive becomes the new way of identifying risk; yes, it's a new way, but it's not a complete substitute for reactive. Proactive complements reactive in a much better way, so people in the field can identify risk and stay ahead of it. At the beginning of the year, when we plan what our year looks like, we know where we need to focus, where the risks are, and where resources need to be further allocated, especially in the AB InBev world. Even though we are a global company, our businesses are very local. We have local vendors we buy supplies from, and those go into the beer and the products we make, so our supply chains are very local, as global as we are as a company. And the final thing is consistency. We're operating in 60 different countries. If each compliance team were allowed to run its own processes, we would have no consistency, and when you have no consistency, it becomes very dysfunctional.
So analytics and the data insights across all of these different components allow us to drive the consistency that we need. But a lot of this doesn't happen with data, insights, or an analytics platform alone; it starts with the people. People and mindset are the most important things, and the way we make it work in ABI is by taking our local people and enabling them with the toolkit, which is our data analytics product called ”Brew Right”, having them use Brew Right, and improving Brew Right with those insights to create a better product in the future. So just to summarize, I think reactive will continue to be here, proactive strongly complements reactive, and when it comes to the analytical product, it's first the people and then enabling the people with the product. And this definitely requires good support from leadership, both the legal leadership and the technology leadership.


Priya Keshav:

You shared some great thoughts and insights here. You operate in 60-plus countries and, as you mentioned, compliance is about people and changing their behavior. So how do you find the right balance between focusing on people and keeping it local versus focusing on data and automating as much as you can?

Dheeraj Thimmaiah:

Yes. From a business perspective we are a global company, but as I mentioned earlier, our businesses and supply chains are very local to the countries we operate in. Each of our different companies has its own databases and local systems, and we have different people in different zones who use them. What we've really done to create a global, enterprise-wide compliance product is build a structure called a unified data model. Even though every country has its own specifications and local processes, from a global perspective we create a template that harmonizes and normalizes this data to feed a rules engine. Not to get too technical here, but we utilize a concept called a federated model, which looks at risk locally but aggregates it in a more combined, global form. So we take these data assets, normalize them, look at risk locally, aggregate it globally, and then generate the outputs.

When it comes to automation, that is something I always like to refer to as a journey. On day one, when people are just starting to build an analytics platform, it's hard to think about optimization, because at that point the discussions are about how do we get the data, how am I going to aggregate it, and what kind of results is it going to give me. Automation is really about first understanding the process end to end and deciding where you want to start. A lot of the time that is the ETL, which is extract, transform, and load: extracting the data from the ERP system, transforming it into the template you need, and then ingesting it into your database to run your insights. Within those processes you can automate further, all the way up to the visualization layer.

The second part of automation, which really sits with our operations team, is looking into quality reports. This is where folks in other companies can mature as well in any analytics journey. As you mature, you start talking about quality reporting and data problems, you start taking a user-centric mindset, you look into your product, and you make sure your users are comfortable. That brings a different level of automation, which we're still working toward. For example, one of the things I always tell my team is: how do we identify data issues or problems with one of the criteria we use and give our users a heads-up, rather than wait for our users to come to us? That's a customer-service mentality, because we want to manage the credibility of the tool we have out in the field rather than have users come to us and say there's a problem. So automation is a large topic, and it's very much a priority in the product strategy and journey.
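
To make the ETL flow Dheeraj describes more concrete, here is a minimal sketch of extracting transactions from a local ERP export, transforming them onto a unified data model, and loading them into a global store. The table names, column map, unified schema, and currency handling are hypothetical illustrations, not the actual Brew Right pipeline.

```python
# Hypothetical ETL sketch: local ERP export -> unified data model -> global store.
import sqlite3
import pandas as pd

# Illustrative unified schema shared by every country.
UNIFIED_COLUMNS = ["country", "vendor_id", "gl_account", "amount_usd", "posting_date"]

def extract(erp_db_path: str) -> pd.DataFrame:
    """Pull raw transactions from a country's local ERP export (here, a SQLite file)."""
    with sqlite3.connect(erp_db_path) as conn:
        return pd.read_sql("SELECT * FROM transactions", conn)

def transform(raw: pd.DataFrame, column_map: dict, fx_rate_to_usd: float) -> pd.DataFrame:
    """Rename a country's local column names onto the unified model and
    normalize amounts to a single currency so data can be compared globally."""
    unified = raw.rename(columns=column_map)
    unified["amount_usd"] = unified["amount_local"] * fx_rate_to_usd
    return unified[UNIFIED_COLUMNS]

def load(unified: pd.DataFrame, warehouse_db_path: str) -> None:
    """Append the normalized rows to the global table the dashboards read from."""
    with sqlite3.connect(warehouse_db_path) as conn:
        unified.to_sql("unified_transactions", conn, if_exists="append", index=False)

if __name__ == "__main__":
    # Example: one country's ERP uses local column names (made up for illustration).
    local_map = {"pais": "country", "fornecedor": "vendor_id", "conta": "gl_account",
                 "valor": "amount_local", "data": "posting_date"}
    raw = extract("local_erp.db")
    load(transform(raw, local_map, fx_rate_to_usd=0.19), "global_warehouse.db")
```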


Priya Keshav:

So you mentioned a lot of things around data, right? You talked about the federated model, where you keep the data model localized but have a model that works at a global level, and you talked a little bit about data quality. You also talked about ETL. Structured data is a lot easier to handle from an analytics perspective, but you come across, and rely on, both structured and unstructured or unmanaged data, and sometimes you may even look at external sources for information that helps your modeling. You talked about spending a lot of time normalizing, cleaning, and being proactive around data quality, bringing up issues ahead of time as opposed to waiting for users to flag them. So how do you handle normalizing data, and how do you ensure data quality in your work? Or how do you even define it?


Dheeraj Thimmaiah:

Yes, let me start with data quality. Data quality is very important, because you could have a great, very sophisticated product, but if you don't have good data, it's as simple as garbage in, garbage out. How do we monitor it? Today we've reached a level of sophistication where it's one of the metrics we track, and I'll speak more about those metrics and how they inform us as we move forward. We have a metric called the data quality index. The data quality index monitors the movement of data from its source, which is the ERP system, to its destination, which is the visualization platform. For each of the data flows and steps within our ETL processes, we monitor what goes from step to step and how it impacts transformation and normalization, and based on which criteria trigger, we take action to correct it. Some actions are temporary, where you just put a band-aid on the issue. Some are larger fixes where we need to inform the users that the dashboard will be down for two or three days. And there are situations where the fix takes longer than that and you can't hold the dashboard down, which means we remove the impacted piece from the dashboard and republish it so users can still use it for everything else.

Now let's talk about normalization. We've built normalization into our automation process, and a lot of it works through the unified data model. How do normalization and the data quality index complement each other? Normalization, at the end of the day, is taking different data value points from different geographies and grouping them consistently. So if a new value is introduced tomorrow, for example a new GL account, how does it get categorized into a GL category if it's not part of your rules? We've kept these as triggers in our data quality index dashboard. Within that dashboard, if new values are introduced, or if existing ones don't map into the category we expect or that the rule was written for, it triggers, and our data engineers and scientists can look into it on a look-back review and make the necessary changes. Even within quality, we've prioritized the issues: some can break the ETL process, in which case the data engineers get a text message and have to attend to it right away, whereas others just go into reporting and get fixed in the next iteration.
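
A hedged sketch of the two mechanisms just described might look like the following: a rules table mapping GL accounts to categories, a trigger that surfaces values no rule covers, and a simple data quality index comparing row counts from source to destination. The rule values, thresholds, and alerting behavior are illustrative assumptions, not ABI's actual logic.

```python
# Illustrative data quality checks: unmapped GL accounts and a row-count-based
# data quality index. Rules, thresholds, and column names are assumptions.
import pandas as pd

GL_CATEGORY_RULES = {          # hypothetical normalization rules
    "6100": "Travel & Entertainment",
    "6200": "Gifts & Hospitality",
    "7300": "Third-Party Fees",
}

def unmapped_gl_accounts(transactions: pd.DataFrame) -> pd.DataFrame:
    """Trigger: return rows whose GL account no rule maps to a category,
    so data engineers can review them on a look-back and extend the rules."""
    return transactions[~transactions["gl_account"].isin(GL_CATEGORY_RULES.keys())]

def data_quality_index(source_rows: int, destination_rows: int) -> float:
    """Share of rows that survive from the ERP source to the visualization
    layer; 1.0 means nothing was lost in transformation and loading."""
    return destination_rows / source_rows if source_rows else 0.0

if __name__ == "__main__":
    txns = pd.DataFrame({"gl_account": ["6100", "9999", "7300"],
                         "amount_usd": [120.0, 85.5, 310.0]})
    print(unmapped_gl_accounts(txns))              # the 9999 row needs a new rule
    dqi = data_quality_index(120_000, 118_700)
    if dqi < 0.99:                                 # illustrative severity threshold
        print(f"DQI {dqi:.3f} below threshold - page the data engineers")
    else:
        print(f"DQI {dqi:.3f} - log it and fix in the next iteration")
```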


Priya Keshav:

You referred to the ERP and ETL as your main sources, but you probably pull data from a number of different systems and sources, and they are constantly changing. You talked a little bit about leadership, both within legal and within the technology or IT department, who are your partners in this exercise because they manage these systems, and you need a good partnership to be able to continue to automate the process of providing good data, right? So how do you keep up with these changes to systems, and how do you foster a partnership between your team and IT for it to be successful?


Dheeraj Thimmaiah:

Partnerships, I think, are essential to moving any technology product forward, and it all comes down to how companies and groups work together. I'm in a very unique role in this company. If you look at the ethics and compliance team, a lot of people are legal folks with backgrounds as lawyers. I'm a data analyst by skill, I sit in the ethics and compliance team, and I have very good relationships with my counterparts in the solutions team. The leadership from the solutions team in ABI, Ricardo Farias and his team, including Eduardo and Rupa, are great contacts and people we work with. So how does it all work together? When we work on “Brew Right”, whether you are ABI or external, which is a vendor, or whether you are ethics and compliance or solutions, which is a technical group, we do not allow those different naming conventions to stand between us as boundary lines. Our full focus is on the product, which is ”Brew Right”. We put aside our differences of which group or company we represent, and we organize ourselves into pods, which are different groups focusing on different dashboards within Brew Right, with a leadership strategy review of Brew Right priorities whose agenda is tailored to the pods. The people who represent Brew Right come from ethics and compliance and from our solutions group, but when we sit in Brew Right, we don't bring the boundaries of which functional group or company we represent; it's about representing the product we deliver. And that's exactly why the partnership is so successful: we are very focused on our product and, most importantly, we are always asking how the product is benefiting the user and how the users are using it. Just take a day like mine; my days always start with talking to the users. That's the priority we give to our users. Another thing we've started doing is, every time we implement something, we ask ourselves how the users will use it and what impact they will get from the new user story or feature we're going to release. That's how we remove any boundary lines between ethics and compliance and solutions, and push the partnership toward helping the user, the compliance folks in the field, with the product, which is Brew Right.


Priya Keshav:

You mentioned KPIs and specifically talked a little bit about data quality as a KPI. Can you tell us how you measure the ROI and effectiveness of the tool?

Dheeraj Thimmaiah:

Yes, this is something we introduced in 2020, so let me provide a little more context on why we implemented metrics. We moved into unsupervised machine learning in 2019, graduated to supervised in 2020, and are refining the supervised models with training data in 2021. One of the key challenges we found in 2019 and 2020 was that our technology team, which is the Brew Right product team, was saying we're doing a lot of work to improve the product, while our zone users, the customers of the product, were seeing the opposite: the results they were getting were very ineffective because there were too many false positives. That was contradicting information, and we needed to bridge the gap, because at the end of the day, whatever we do, if it's not benefiting the end user, we lose the value. So we developed a few metrics focused on a few different perspectives to help us in this journey, because numbers are just numbers until you assign a measurement to each of them for each of these criteria.

We started with the first one, which is the risk score. The risk score is basically our North Star; it's the compass in each dashboard that tells you which transaction or which entity is riskier than another. Then we looked at how to measure the accuracy of this risk score if we are going to rely on it completely, and we focused on three metrics. From the perspective of the machine learning model's outcome, we looked at the F1 score, which balances precision and recall. Then we created our own formulas for false positive and false negative rates, which compare the predicted data against the labeled data. The false positive, which is the easier one to describe, is when we predict something as high risk and it isn't. The false negative is the opposite: the zones identified through another investigation or review that something was high risk, and Brew Right either flagged it as low risk or didn't flag it at all. Both are very important to us, because we need to rely on the risk score. So that took care of the machine learning outcome and measuring it against the labeled data.
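
As a worked illustration of the three model-outcome metrics Dheeraj lists, the sketch below computes an F1 score plus simple false positive and false negative rates by comparing predicted risk flags against labels from zone reviews. The sample data and the exact rate formulas are assumptions; ABI's own formulas may differ.

```python
# Illustrative model-outcome metrics: F1 plus false positive / false negative
# rates from predicted vs. reviewed (labeled) outcomes. Sample data is made up.
from sklearn.metrics import f1_score

predicted = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = flagged as high risk by the model
reviewed  = [1, 0, 0, 1, 1, 0, 1, 0]   # 1 = zone review confirmed high risk

f1 = f1_score(reviewed, predicted)      # balance of precision and recall

false_positives = sum(p == 1 and r == 0 for p, r in zip(predicted, reviewed))
false_negatives = sum(p == 0 and r == 1 for p, r in zip(predicted, reviewed))

false_positive_rate = false_positives / reviewed.count(0)  # flagged, but benign
false_negative_rate = false_negatives / reviewed.count(1)  # real risk, missed

print(f"F1={f1:.2f}  FPR={false_positive_rate:.2f}  FNR={false_negative_rate:.2f}")
```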


Then a larger part is our users. We need to understand how comfortable they are and whether they're satisfied with the product. So we took a page from our commercial teams and asked how we measure our customers, and we started with the Net Promoter Score, which is used in different ways; I think it's commonly used in surveys and promotional events to understand how many people are promoters of a product, how many are passive, and how many are detractors. We really needed to see where the product stands and whether Brew Right users genuinely like it, and we use that as a gauge of where our product is from a user-sentiment perspective. We also use another metric, which is the number of users. This is mainly to monitor who our active users are, who uses the product maybe once a month or once every two months, and who the more dominant users are. And finally comes the data quality index we spoke about in detail, which mainly tells us the quality and health of the data within each of our dashboards, based on the end-to-end processes and steps we follow. These metrics don't roll up into one overall grade; they each focus on a different perspective, so we can organize our internal teams accordingly and give them individual measurements to focus on and move forward with. So how does it all come into play? At the beginning of the year, we look at the trend from last year, decide where we want to go this year and the year after, and create quarterly expectations for where we need to be. We then readjust our priorities and strategies accordingly, so the strategies we make are informed by the metrics, their outcomes are measured by the metrics, and every change the product goes through has to impact these metrics positively.
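
The Net Promoter Score calculation Dheeraj borrows from the commercial side is simple enough to show directly: on a 0-10 survey scale, promoters (9-10) minus detractors (0-6) as a share of all responses, with passives (7-8) ignored. The response data below is made up.

```python
# Net Promoter Score from hypothetical 0-10 survey responses of product users.
responses = [10, 9, 9, 8, 7, 10, 6, 4, 9, 8]

promoters  = sum(score >= 9 for score in responses)    # 9-10: promoters
detractors = sum(score <= 6 for score in responses)    # 0-6: detractors
nps = 100 * (promoters - detractors) / len(responses)  # passives (7-8) drop out

print(f"NPS = {nps:.0f}")  # 5 promoters, 2 detractors, 10 responses -> NPS 30
```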


Priya Keshav:

That's amazing. So what is the future of compliance analytics?

Dheeraj Thimmaiah:

One thing that became clear to us within Brew Right as we matured and moved to supervised machine learning is that training data is one of the biggest challenges. On the commercial side of a business, training data is in abundance because you have a variety of users, plenty of users generating training data. But for a product like Brew Right, within the walls of AB InBev, the amount of training data we can get for the four labels we've categorized internally is limited. Our labels are illicit or unethical payments, policy deviations, insufficient data, and verified and of no concern. When we look at a year and how much data we received and labeled, it isn't consistent enough, and to train machine learning models and improve them, you need consistent data, for one. The second piece is that the model will only learn from the data and insights present within ABI, so the limitation of the model is the limitation of the information it sees as feedback. So with the help of Matt Galvin, who architected this plan for how we need to move forward, we launched a new concept called C2C Right. What C2C Right does is deploy Brew Right into different companies, within their own firewalls, but the outcome or label data communicates with a supermodel, a collective model, where the features from individual companies are fed into the larger model. So what does that do? When there are five different companies providing features, it increases the quantity of training data, not the data itself, but the metadata of the labels, which really translates to features in the machine learning world. And second, we've opened the door for companies to learn about new types of risk in the bribery and corruption world. This is industry-agnostic, because bribery and corruption are not limited to one industry or another, right? They can happen in any industry, and they can be a violation of the FCPA or the UK Bribery Act. So for us, I think the future of compliance really is how different companies can come together, without sharing underlying data, to share metadata or features and improve each other's compliance algorithms.
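
The C2C Right idea, as described, is to keep each company's data behind its own firewall and share only model-level information with a collective model. A generic federated-averaging sketch along those lines is shown below; it is an illustration of the concept, not the actual C2C Right implementation, and the local model, data shapes, and number of rounds are assumptions.

```python
# Generic federated-averaging illustration: each company trains locally on
# private data and shares only model parameters, which a collective model averages.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=50):
    """One company's local logistic-regression training on its private data."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (preds - y) / len(y)
    return w  # only these parameters leave the firewall, never X or y

def federated_average(company_weights):
    """Collective model: average the parameters each company contributes."""
    return np.mean(company_weights, axis=0)

rng = np.random.default_rng(0)
global_w = np.zeros(3)
# Two hypothetical companies, each holding private (features, label) data.
companies = [(rng.normal(size=(40, 3)), rng.integers(0, 2, 40)) for _ in range(2)]

for _ in range(5):  # a few rounds of federation
    local_weights = [local_update(global_w, X, y) for X, y in companies]
    global_w = federated_average(local_weights)

print("collective model weights:", global_w)
```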


Priya Keshav:

Yeah, so you're talking about data sharing in a way that collectively helps not just one industry but many industries as a whole, so compliance as a whole, and sort of raising the bar across all companies, right? Which actually makes a lot of sense. Any other closing thoughts?


Dheeraj Thimmaiah:

Yes, Priya, we're doing data sharing, but not sharing the actual data, just the metadata.

Priya Keshav:

Yeah, yeah, right.

Dheeraj Thimmaiah:

Otherwise a whole different set of data privacy issues can come up.

Priya Keshav:

Yes, obviously the metadata piece of it, not the data piece of it. Yes, I agree. Any other closing thoughts?

Dheeraj Thimmaiah:

Yes, the advice I always give people when I speak on different podcasts or panels is this: every time people hear the ABI story, they say, wow, you had this great merger and acquisition in 2016 that really helped propel you to where you are, and it's very hard to do something like that in-house because of budgets or other constraints. But I always tell people, it's a journey; nobody ever reached a stage like this overnight. And I tell people it's so important to focus on how you use data, because I'm a strong believer that data is the strongest renewable resource of the future. What organizations do with their data, and how they harness it and transform it into insights that deliver judgments or information, which can then be fine-tuned through repeated iterations, is going to differentiate each company from the others in the future. Because if Covid has taught us one thing, it's that things are changing, and digital transformation is only going to be amplified further and further. So just to summarize everything: use data, build analytics, and drive insights out of your data, because that's the most formidable information people can start to leverage and rely on.


Priya Keshav:

I couldn't have said it better, right? So use data not just to drive insights but also to simplify your processes and improve your ROI. A fascinating story and great examples of how to use data to improve the process itself as well as reduce its cost and complexity, so I'm glad you were able to share it with our audience. Thank you for your time, Dheeraj.

Dheeraj Thimmaiah:

Thank you, Priya, for the opportunity. It was great chatting with your audience.


Priya Keshav:

Same here, thank you.

Dheeraj Thimmaiah:

Thank you.

*Views and opinions expressed by guests do not necessarily reflect the view of Meru Data.*
