
Simplify for Success - Conversation with James Sherer

James Sherer, who co-chairs the Emerging Tech Team and leads the Information Governance and Artificial Intelligence practices at BakerHostetler, joined the #SimplifyforSuccess podcast series to discuss IG programs.

James spoke on the importance of focusing on viable client solutions in the current state of regulations surrounding AI.

He discussed how IG professionals can guide clients to a more defensible practice through their understanding of the aims of regulators and investigations, ethical frameworks, and related controls and responsibilities.

Listen to the podcast here:

*Views and opinions expressed by guests do not necessarily reflect the view of Meru Data.*


Priya Keshav:

Hello everyone, welcome to our podcast around simplifying for success. Simplification requires discipline and clarity of thought. This is not often easy in today's fast-paced work environment. We've invited a few colleagues in the data and information governance space to share their strategies and approaches for simplification. Today we will be talking with James Sherer.

James is a co-leader of the Emerging Technology Team in BakerHostetler's Digital Assets and Data Management Group and directs the firm's Artificial Intelligence and Information Governance engagements. His work focuses on litigation, discovery management process, enterprise risk management, records and information governance, data privacy, security and bank secrecy, artificial intelligence and algorithmic transparency, technology integration issues, and related merger and acquisition diligence. Prior to joining BakerHostetler, James worked as an in-house litigator with a Fortune 500 company and previously practiced litigation in New York. He holds an MBA; the CIPP/US, CIPP/E, CIPM, FIP, and PLS data privacy professional credentials; the CIP and IGP information governance designations; the UCLA Extension Global Cyber Security Institute cybersecurity certification; and the CEDS and CM Law EPC e-discovery specialist credentials. James is also a member of Sedona Conference Working Groups 1, 6, and 11 and writes and presents on e-discovery, information governance, privacy, investigation, and merger and acquisition issues. Hi James, welcome to the show.

James Sherer:

Thanks so much for having me.

Priya Keshav:

So, you help and advise clients on privacy, security, information governance, and now AI as well, and they are all interconnected. So, what role does cross-collaboration play in supporting these projects and initiatives? What are your thoughts on that?

James Sherer:

Well, I don't think you can truly be an expert in all things, so it's finding the right level of expertise that's warranted for the right issue. So, for example, you mentioned, you know, kind of that combination of privacy, security, governance, and AI. I can tell you off the bat that I'm probably least strong on the security side. I've done work with kind of security audits, the application of principles, different frameworks, you know, primary and compensating controls. But if I want to dig into that issue further with a client, or we really want to establish a good policy background for security procedures, or if something goes off the rails and there's a security incident with the client, then I have colleagues that I'm going to bring to the table who can speak to both learned knowledge and experience benchmarked across clients. So, I think that plays directly into collaboration: knowing how to issue spot at a high enough level and then knowing the correct resources to bring to bear, so that you can bring them up to speed and provide, I think, the most comprehensive and appropriate picture for a client.

Priya Keshav:

I totally agree that these are different issues, and they require a level of expertise because, you know, they are complex issues in a way.

But what are some of the barriers that you feel there are, especially from your clients' perspective? As you advise them on, let's say, maturing their program, what do you think some of the barriers are in collaborating between some of these functions?

James Sherer:

I don't know if I have the magic wand to eliminate a lot of the barriers. Pragmatically, they just start at the level of: when you're asking people to collaborate, you're asking them to do more work. That's a challenge, and it's rare to find organizations in which people are looking for more work to do. There are not a whole lot of people who are sitting around with a lot of disposable time or feel as though they're being underutilized. That's normally not the case, especially when you move into pictures where your collaboration is founded on this idea of bringing expertise to the table. As it turns out, experts stay fairly busy, so when you're asking them to collaborate on a given project, especially if it's not within the normal remit of whatever their job description is, then you're asking them to do more work. That's a challenge.

Now, to confront that particular challenge, I channel one of my early partner managers. When I was coming up in the profession, I was maybe five or six years out. I wasn't brand new, but I still had plenty to learn, as I do today, and at that point in time, I certainly did. I remember being frustrated because we were working with another party who just didn't seem to get the message, who wasn't helping us. And I had this expectation that they had a part to play, as did we. In that case, my boss really sat me down and said, what's the ultimate goal here? The ultimate goal is the correct resolution for the client, and in this case to get the client paid. Well, if you've got a barrier to that, is there more you can do? Can you frame it in such a way that you're creating the least amount of work for them, so that they can move things forward? Lower those barriers, don't make it all incumbent on them. Meet them more than halfway and you'll solve that issue. Because, of course, in the case that we're talking about here, you're relying on somebody's expertise. You need that as part of the process because you don't have it. But what do you have? What can you provide? If you smooth that road, if you remove the roadblocks, I think that helps you. So you go into it not just with a collaborative nature and an expectation of collaboration; really, you're asking someone for help, and you want to make it as easy as possible for them to help you.

Now, that carries with it another friction here that I think clients struggle with when it comes to collaboration. They want it done. They want it done, sometimes surgically, in a way that doesn't have a lot of complexity to it, and they might have an expectation that, for whatever reason, within an organization like a services organization with whom they're working, once one person knows something, everybody knows it, and I don't know exactly how that would work through osmosis. But certainly, I've worked with clients before who wouldn't pay for an internal meeting within a law firm or within a service provider. It's just this idea that that's you doing your job, and for us it's really about the deliverables or the time you spent here. So again, that's a friction where bringing more people to the table means more time spent bringing people up to speed, and I think that can be a constructive measure, especially if you're being respectful of the experts and you want to make it easy for them to help you.

But of course, a client may look at that and say, well, that's added friction here. When we've got a service provider or someone we're paying by the project or hourly, that may be an additional cost. Sometimes they can block it out; sometimes you have to find creative ways to do it. Good summaries, good documentation, bringing people up to speed quickly, being very cognizant of the time invested, and then being very, very goal oriented, especially if you've got concerns on the client side about that issue.

Priya Keshav:

So, you brought up a really good point. One is communication; the other is communicating in a very, very simple way. You know, you're always advising clients on very complex legal issues surrounding privacy and information.

Can you share some examples of how you've simplified? I mean, whether it's in the, you know, implementation of privacy or information governance or any other scenario, how have you simplified to succeed?

James Sherer:

I probably overuse metaphor. I like to find instances where clients are familiar with one concept and then translate that over to something new if I can. And oftentimes that's, you know, taking a client who may be new to a specific idea.

Let's say an application of cloud storage, you know, and a master services agreement in support of that. That is going to change the manner in which the organization manages its information. Suddenly they've got an extension out of that practice into another platform. They're going to have to deal with those issues differently. And I say, well, look at this instance and compare it to outsourcing certain functions of your family life, as if you had to bring in somebody to help with this issue or that one. You can see how you don't really lose responsibility.

So here I'll give you a bad metaphor for exactly this one, because this is usually how it happens. If you're outsourcing to a cloud service provider and you're sending them the information, or they're acting perhaps as a sort of middle ground for the collection of information that you're going to utilize in support of your business processes, well, some of your responsibility goes away, but in some ways it's heightened, in the same way that you might bring a nanny into your home to help with your children. You can have every reasonable expectation that the nanny is going to do a good job. You would certainly hope so. You can contract around it. You can do your due diligence and background checks, but you know what? You never really know, and you're still responsible for your kids. And not only do you remain responsible for your children, you're now also responsible for the nanny who's responsible for your kids. So you have in some ways increased your risk profile, but that's of necessity, because there are other ways in which you need to spend your time. You have to have an income; you have to do other things. So it's all about choices, understanding choices, and then, I think, trying to distill it down to something that's easily graspable by the client. Or at least we can start there, and when they start to challenge my metaphor, I can draw it back to, you know, the specific professional system at hand, and then we can address it as we go.

Priya Keshav:

So how do you leverage technology to solve some of your problems?

James Sherer:

Sometimes you need to. And I think those instances where technology is necessary are when things scale. When things scale, you need tech, because we do not scale. We are very much analog in a digital world. I think I cut my teeth in a lot of governance principles, as many of us did, within the e-discovery realm, and that was the introduction to this deluge of information just coming in. You cannot read everything. You can't print everything out; that's just not an approach that's viable. You have to figure out ways to scale. And now you can scale it by using autonomous systems to some degree, and certainly technology assisted review. In that instance, you can also scale it by bringing in other individuals who are subject matter experts, who might be dealing with things a little more efficiently.

But of course, I think the best plans are usually, you know, an appropriate marriage of technology with the subject matter experts: people who are competent, who are going to set up a system that works, that's logical, that's explicable to a client, especially those who are ultimately footing the bill, and then using those technologies to scale. And I think then it's finding systems that work, that you have experience with, that you've benchmarked across clients in some ways, that you've done quality checks on, and then finding other analogous instances where you can start to apply them.

So of course, information governance has tried to pull some of the learnings from other places, like e-discovery. They have done document indexing at scale, doing certain searches across the documents, you know, coming up with or surfacing things of interest, and then trying to translate that to bigger governance projects that maybe didn't have the same time and attention in the past. So let's take the tech, let's find new instances where we can apply it; we've now got an economically feasible model to tackle problems that before were just too daunting and could not have been handled at an individual level.

Priya Keshav:

I mean, you mentioned a lot of things, but I'm taking a slightly different turn. You mentioned technology assisted review. We started talking about technology assisted review in the e-discovery world, you know, probably more than 10 years ago. Since then, AI has become more and more ubiquitous, and so have conversations around ethical use of AI and governance of AI models. You spend a lot of time advising clients on how to solve problems with both AI as well as ethics around AI. Any thoughts that you'd like to share on how you approach it?

James Sherer:

Well, hopefully holistically. And I will say that, given the state of regulation surrounding AI, it is easier to come in trying to focus on client solutions that are viable. I'm there as an advocate for the client and to help the client do the business that the client is trying to do; you know, as a mainstay of the organization, they're trying to either make a profit or fulfill a mission or execute on certain tasks, whatever it is that they're working on. So, we're trying to make things work. We're not trying to be a stumbling block to the application of these technologies.

However, the regulatory landscape is changing, and it's changing quickly, and you're absolutely correct. Over time there have been a number of ethical frameworks that have been proposed, literally hundreds at this point, literally hundreds of ethical frameworks out there in the world. But we've been getting better and truer guidance as to what regulatory enforcement on the back of the utilization of AI tools is going to look like. Even a couple weeks ago, both the FTC and the European Union gave guidance about AI ethical frameworks, their application, and responsible use, you know, within the same week. If you go to our blog at BakerHostetler, you can see we've started to write on this; we're continuing to do scholarship, and we'll have something up soon with a little bit more detail. So, two weeks ago we got another jolt of guidance, which is extraordinarily helpful to me, because I can start to knit together kind of a proposed framework, and I spoke with a client just today about this.

Here are some things to think about. Here is essentially a life cycle of considerations, starting with the data acquisition you're going to build a model off of, as well as the application of a model in its use, all the way through to audit and disclosures that we can think about. It's not to say all of this is going to be determinative. But in having an intelligent conversation with the client, the client is certainly going to come away, unless I haven't done my job, with more information, a much more educated viewpoint, and then we can help guide them to a more defensible practice, given what we understand about the aims of regulators and investigations, as well as ethical frameworks and kind of related controls and responsibilities. It's an exciting time. Those discussions, as we have them, are pretty active.

Now, to your point, there's a lot of maturity within something like e-discovery and technology assisted review, but don't forget, there you're looking at the application of a tool and system within a process that's almost governed by the expectations of the parties. There's kind of an objective model that you're working toward, but ultimately you're negotiating with another party, whether it's the government or plaintiff's counsel or another party on whatever side of the v. You're coming up with the parameters that you're going to execute on, and the parties are agreeing to those. What we're looking at now is more prospective.

We're dealing with artificial intelligence not quite in a vacuum, but we're putting it out there in the world. There are going to be certain expectations associated with the users and the consumers, and then how are we going to disclose those things in a way that the regulators are really going to be happy with, or at least satisfied? So, it's a new world. We can take our learnings about systems and our benchmarked work, so we know things that will technically work. Now we have to look at how they work with and operate within more general concerns and thoughts about enforcement, where we have a little bit less guidance.

Priya Keshav:

I mean, this topic in itself, we could talk about it for hours, but you mentioned a few things that I want to explore a little bit. Which is, you know, having this conversation with a client and looking at how you would govern AI. And again, the definition of AI can be all kinds of things, right? So, pretty much today in the market, there are no products that don't have AI; some of it could be just hype and some of it could be reality. And so, you're using it for what I would call very, very small automations all the way to very complex decision making.

So as a client, if I am looking at both disclosing from a privacy perspective as well as having some level of understanding of the type of AI that is being used, my guess would be to kind of prioritize the ones that are critical to govern versus every piece of AI that is implemented in every technology that is being put to use in the company.

So, what are your thoughts around how I would, you know, prioritize? And even if I do prioritize and look at all these ethical frameworks, it's not that easy to come up with a way to manage this, because to some extent, even with technology assisted review, we spent a lot of time debating as an industry about, you know, how to measure, what to measure, what are the questions to ask, what are the inputs, et cetera, et cetera. I'm not sure every product that has AI has gone through that level of scrutiny, and to a large extent they're black boxes. So, I just wanted your perspectives on some of those things, some of the challenges that revolve around governing or understanding ethics around AI.

James Sherer:

Well, much like data privacy, it is a risk-based framework, such that there's not a one-size-fits-all solution for everything, and you're absolutely correct. In our instances, clients need to think about the dangers inherent within systems and whether they're going to matter to the consumers. And frankly, if you do that analysis, you're probably going to get pretty close to the areas in which you have to worry about enforcement as well, because it's two sides of the same coin. If you're looking at a system and you're like, this seems a little bit iffy, or it could have unintended consequences, let's say in an area of disparate impact, you know, protected class work in the United States. If you're building a model where you're like, oh well, is this going to turn out OK? It's like with e-discovery, where the question is, should we have a legal hold in place? Sometimes the question answers itself. Or, as I said, it's like the question, should we call off the wedding? The question answers itself. If you have to ask the question, then you probably want to think more carefully about it.

Now, that's already happening. I mean, AI, weak AI, has proliferated through a ton of different systems, but there are some ways in which you can say, OK, but what are we using it for? And all of the proposed frameworks, especially on the regulatory side, that are going to have some teeth, again speaking specifically to the FTC and the EU, understand and appreciate there's a spectrum here. If you're trying to sell something, or if you're just trying to get someone's attention for a second, then all right, that's unlikely to be a big problem; the cost to a person is several seconds of their time. However, in the US, if it's about the rate of your loan, if you're trying to buy a house, and that's predicated on certain factors that are really indicative of your race or ethnicity, or your gender, or your age, or any of those protected class sides, guess what? That's a potential problem.

Now if you're in the lending business, you already know this because there are a number of laws that have been in place for a long time that were there before AI came into play. So, the application of AI was just seen as another tool in which they could scale, let's say loan origination in a way they couldn't before, but they appreciated all the way through that the same concerns about the manner in which they were making those loans were going to be there.

Same thing for selling homes, or, you know, the concerns about redlining, and they knew that within the data too. The FTC has given guidance about this, and certainly the EU has: think about the way in which you gather the data and compile it. If you know that there are certain problems that are just going to be rife within the data set, then you have to acknowledge that and either work it out on the back end, or you've got to fix it within the data set. Figure out ways in which you can do it. And, not to get overly technical, but we've seen things like dimensionality reduction algorithms where, instead of parsing out too many factors that end up mattering, you can kind of dumb down the data model so that it's not taking specific factors into account.

Now, if you're sophisticated enough, you'll recognize that you're losing some flexibility within the model. And sometimes you may actually be costing yourself money, but you're costing yourself money because you're not doing things that are prohibited by law, and that takes a level of sophistication. I know you mentioned the black box, but you can tweak these things on the front end, and even if you're not awesome at the math and you're not so sure of what's happening within the black box, you can audit on the back end. And there's an expectation for that too, within a number of the ethical frameworks, and certainly within some of the governing bodies, or those we expect to be really, really active in enforcement.
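The front-end and back-end controls James describes can be sketched in miniature. The toy below is not a real lending model, and it drops sensitive factors outright rather than using a true dimensionality reduction algorithm; all field names, thresholds, and records are hypothetical, including zip code as an assumed proxy and the rough four-fifths benchmark used in US disparate-impact analysis.

```python
# Two levers: (1) front end, strip sensitive factors (and obvious
# proxies) before the model sees them; (2) back end, audit outcomes
# per group. Everything here is illustrative, not a real model.

SENSITIVE = {"race", "gender", "age", "zip_code"}  # zip_code as a proxy example

def model_inputs(record: dict) -> dict:
    """Front-end control: drop sensitive fields before scoring."""
    return {k: v for k, v in record.items() if k not in SENSITIVE}

def score(record: dict) -> bool:
    """Toy approval rule over the remaining, permissible features."""
    features = model_inputs(record)
    return features["income"] >= 50_000 and features["debt_ratio"] <= 0.4

def disparate_impact(records: list[dict], group_key: str) -> dict:
    """Back-end audit: approval rate per group. Ratios between groups
    below roughly 0.8 (the four-fifths rule of thumb) warrant a closer look."""
    tallies: dict = {}
    for r in records:
        approved, total = tallies.get(r[group_key], (0, 0))
        tallies[r[group_key]] = (approved + int(score(r)), total + 1)
    return {g: approved / total for g, (approved, total) in tallies.items()}

applicants = [
    {"income": 80_000, "debt_ratio": 0.2, "race": "A", "gender": "F", "age": 40, "zip_code": "10001"},
    {"income": 45_000, "debt_ratio": 0.3, "race": "B", "gender": "M", "age": 30, "zip_code": "10002"},
    {"income": 90_000, "debt_ratio": 0.5, "race": "B", "gender": "F", "age": 55, "zip_code": "10001"},
]

assert "race" not in model_inputs(applicants[0])  # sensitive field never reaches the model
print(disparate_impact(applicants, "race"))       # {'A': 1.0, 'B': 0.0}
```

Note that the audit runs even though the model never sees the sensitive fields, which is the point: excluding a factor on the front end does not guarantee the outcomes are even, so you still check on the back end.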

Priya Keshav:

Information governance, AI, privacy: what are some of the biggest challenges that you see in implementing some of these programs at the corporate level, and do you have some suggestions on how to overcome these challenges as one looks to have a program in place?

James Sherer:

Information has grown much, much quicker than the headcount that's responsible for it. That's the initial challenge. The problem just kept building bigger and bigger, and it's as though you're looking around the apartment thinking, yeah, I keep buying a lot of stuff, I really have to manage this, or the apartment is getting dirty. And then you just kept getting free rooms in the apartment, because storage is super cheap and the information just multiplies. It essentially costs less to make copies of things and to stash them somewhere.

And the way in which we work as humans, it's kind of out of sight, out of mind. So, the bodies are not there to handle it. I keep trying to find hooks in dealing with clients, because it obviously represents a number of different potential challenges, hooks with which we can get clients to focus on it. Do we do it because there's risk associated with it? Is there a risk of e-discovery? Because if you've got it, then you may have to turn it over. You certainly have to pay to review it, and that's expensive. Is there cyber risk? If you have it, it's potentially vulnerable, and if it ends up getting compromised and you don't know if it's been exfiltrated, then you've got to treat it as such, or you may have to treat it as such.

So, is it that risk point? Is it that it costs money to store? Is it that if you've got too much of it, people can't find the things they need to do their job? Is it then the flip side: can you get value out of your analysis of it? And while we're analyzing that data, it also makes sense to clean it up, because the cleaner the data sets and the better the metadata, the better the inputs to these systems and the better the outputs you get, kind of a corollary to what we were talking about before.

So, I think I work through all those issues, but the first challenge is having enough people; that's, you know, time, money, attention. The other part, I think, is that it's just too big a task. Often, again, it is growing nearly exponentially, and it's tough to grab onto that and say, OK, cool, I'm going to master this thing that's growing faster than we really understand. We're not great with those geometric growth models. So, you know, it's trying to walk clients back and saying we're not going to boil the ocean, you know, the famous information governance saying. We're going to find ways in which we can address near-term issues. So let's find doable projects, you know, the debt snowball approach; we'll work from what we've got in front of us, and let's get policies in place. Let's get people to understand policies. At the very least, then we can proselytize to them, and they can go out and work through these things. That gives us a more defensible posture if the regulators ever come knocking or you're in front of a factfinder and they're asking you questions.

And then, once we've got kind of a strategy in place, we can be tactical with it, and we can find places where there's no value to the information and we can start to remediate them. Or we can recognize prospectively we don't need to capture certain types of information.

So, I think those are challenges. I think sometimes, too, clients dip into this program for different reasons, and sometimes it's tough to keep them going. You know, I've had programs with fits and starts with clients where I won't hear from them for a while, or we'll try to do as much as we can with them. But if they're using us as a resource, we're not embedded there. We're not internal consultants. We are a cost, and sometimes they just have other things that draw away their attention, and we have to be flexible and understand that.

But I mean, there have been times where I've worked with clients and tried to work on policies, and I'll send them a draft and I won't hear back for 11 months, and then they'll want to pick it back up because, for whatever reason, they need it for this audit, or they need it for this project, whatever it is. I think for us, too, it's being flexible, knowing that we are client service first. And that means working with clients, doing the right thing, but doing it according to their need and, certainly, according to their ability internally to manage projects and to support them, and trying to get the best bang for the proverbial buck as we go.

Priya Keshav:

Don't you feel that when you start with, let's start with what we can, and somebody starts, and they stop and then they restart, it makes it very difficult for them? Sometimes it just means that you're going three steps backwards, two steps forwards, three steps backwards.

Or, you know, maybe three steps forward, two steps back, but you're really not making progress, because there isn't enough of an investment and recognition within the organization to tackle the challenge. And with information, just because it grows at a pace that is tremendous, if you took an eleven-month break, you're pretty much back to square one, because you've probably generated more data in the last 11 months than you did in the previous year, you know, because of the way data grows exponentially.

James Sherer:

My goal was never just to execute on a project plan, or, I guess I wouldn't say it's limited to that, but that's not my goal. It's just not, and maybe I'm lucky in that way as an attorney. As a licensed practitioner, I'm a counselor. I'm there to help you. I'm there to help inform you so that you, as the decision maker, can make those decisions, and, you know, I can guide you in certain ways. My goal is, well, my goals are multiple, but my first goal is to support you.

I'm not there to judge your process. There are plenty of judges out there. There are plenty of people who are going to challenge that process. I'm there to help you see the risk in sticking with the behaviors that you have. I'm there, if you decide to address certain points of it based on what your risk tolerance might be, to see if there are technical measures we can implement, if there are ways in which, again, we could scale technology to help you if you can't do it in a manual way. And frankly, I'm there to answer questions and help support defensible positions that you come up with.

A lot of my clients, like I said, there'll be fits and starts. That's OK; that's their process. And at some point they may switch over to a new technology, and maybe all the work that we put in just wasn't availing. I mean, I've certainly had that before. They'll start on one part, or they'll start with some kind of relationship and then they'll blow it up. That's OK. And you're right, certainly you get a little bit further behind, but we can always catch up and deal with whatever the next situation is.

And again, if it really, really came down to it, there aren't really any issues that an infinite amount of time, money, attention, and personnel wouldn't fix, so we're always working backwards from that. I expect that there's an objective measure by which we could handle everything; we're just moving back off of that.

And what I don't want to do, what I certainly don't want to do: I never want the client to feel as though they're disappointing me or that they're doing the wrong thing, and I'm never going to be documenting that they're doing the wrong thing, because this is still very much a personal job. I mean, I have personal relationships with my clients, whom I support and for whom I work under my license. So, I've got to be there to help them execute on decisions they make, to the extent they allow me, or to bring in other help.

I cannot be another stumbling block, because they get enough of those, often within their own organizations. So, I've got to be a cheerleader. I have to be there to help move things along. I cannot afford, within this practice, to be negative in approach. I've got to find the positives and work them through: find wins, find momentum, keep things going, and do the best we can for the clients.

Priya Keshav:

Great thoughts. So how do you use metrics in your program, and how have they helped inform your program or your clients' programs? How do they help improve and sustain their programs in the long run?

James Sherer:

Sometimes I have a love-hate relationship with metrics within this practice, but they can be very helpful. Now there's that adage, right? What is it, if you can't measure it, then you can't improve it? I don't know if that's necessarily the case, but it does make sense to find metrics where you can, to enumerate certain things so that you can execute on them.

It is helpful for keeping momentum going. So, let's say you're on a governance project and you're trying to do inventory work. Well, finding all the systems, numbering them, and then using that as a punch list to move through, that's extremely helpful. And there's going to be a lot of the, and I know it's a bad saying, but the low-hanging fruit, where you can hammer those things through really quickly. And then, to the extent that part of your governance program is keeping the higher-ups satisfied that you're making progress, well, it's great to be able to say we identified 42 assets, and we were able to clear 31 of them in the first round and make sure that the policy is applied against them.

Now, maybe you don't mention that the ones that are left are all the file shares, the email system, and the cloud storage, because those are going to be the difficult ones, but metrics help move things along. And again, once you get a little bit of a bump going, you can keep that momentum going, and I think that is helpful.

Likewise, metrics are extremely helpful when you start looking at where efforts are going to see the most return, or where you're going to get the biggest bang for your buck. That's easy within governance, because you just start looking at information sets and you say, well, what is the volume here? What's the type? If you can start to categorize certain things, you're like, OK, well, if we make this decision, if we make this judgment call, it's an investment of X amount of hours or money, but it addresses 70% of the outstanding picture. I mean, that's a really, really good one.

Again, sometimes you leave the hard cases till later in the process, but if I can eliminate ostensibly 70% of the risk profile for a given asset, that's a win. And then those metrics will speak to people who aren't steeped in the culture, who don't know all the heavy lift that's gone into a project. They can look at that and say, cool, this is an investment, we've done this, I can see the definite benefit to it. Those metrics help tell a story, and because numbers are kind of universal in that way, I think they tell a clear story that sometimes a narrative about why something would require investment just doesn't have the opportunity to tell.

Priya Keshav:

So, what don't you like about metrics?

James Sherer:

Well, not everything is equal, and I think that's the flip side of it. It's not one of those things where you can say, OK, well, let's assign each person 10 assets to deal with. If, like I said, someone gets stuck with email, file shares, cloud storage, and, I don't know, unstructured data on user devices, that's just not helpful. It's apples and oranges. Or they may not even both be fruit. There are sometimes challenges buried within that.

Also, sometimes the metrics themselves can mislead. If you've had experience with vendors and you're looking for metrics on price or spend, well, if you're dealing with outside vendors, and hopefully this isn't a secret, there are a lot of ways in which they can shade things. You're like, how much does hosting cost? Well, here's what I've got to make, so how much do you want hosting to cost? Because maybe the review costs a little bit more, the application of technology-assisted review is a little bit different, or maybe we'll get you on the processing fees as opposed to the hosting.

So, some of those metrics I think can be misleading, because within pricing systems across some of the technologies we'd like to apply to information governance and actual solutions, it's not an apples-to-apples comparison. You really have to draw it all the way out and say, what is it going to cost for us to go from A to B, because you have to ignore some of those provided quote-unquote metrics. But in order to do that, you run into the challenge we had before, which is that a lot of these projects are fits and starts, where we might only be able to get through points A, B, and C of a 26-point process. Those can be meaningful, and we shouldn't avoid doing them. But then, if we're assigning costs there, we've got a different model.

I think that's where some of the challenges have been, right? It's like the misuse of statistics sometimes; you can twist them a little bit. We have to be really, truly careful about the metrics, and come up with ones as objective and measurable as we can, because not everything is going to be like an MD5 hash value.

Priya Keshav:

Yeah, if you are just looking for numbers and statistics, you can kind of manipulate the statistics to meet your goals. So yeah, obviously that just won't work. I agree with you.

Any closing thoughts before we end the show?

James Sherer:

I mean, to the people listening, or who might be sent a link to this or have any kind of interest in it: there have been different waves through practice, I think, on both the legal and IT professional sides, and you've seen them coming. If you work within the legal profession, you may have seen the e-discovery wave coming in around 2007, 2008, with changes to the rules and just an increase in information. Then you saw privacy start to enliven, and then we see little blips here and there about blockchain and cryptocurrencies. Social media was another one.

AI is coming, arguably already here, and to your point, it's just so integrated in. But I think true attention toward it is coming, in the same way that we suddenly had a lot more attention in the US based on the CCPA, certainly among a number of my clients and the clients that I work for. I really, truly think enforcement is coming, and while I hope to be well positioned to assist clients in that area, I cannot be the only one.

There's going to be a lot of opportunity in that space, and it is an area where, if you've got the inclination and the wherewithal, you can do a lot of homebrew training. You can become educated. It's not just about certifications or accreditations, although I am a staunch proponent of those. Maybe it's because I'm lazy and they help me focus. But there are going to be true opportunities here, and for those practitioners who understand that information and its governance are the correct underpinnings of these systems, you're going to be well positioned to help with those systems' development, their protection, the application of frameworks, and ultimately their defense if they're ever called into question.

So, it's just an amazing time. People are getting better educated about it, but there's still so much opportunity to find a voice within this space too. And so, I hope that's encouraging to people who hear this and say, oh, cool, maybe I just spend a little extra time reviewing it. Don't be shy about using a search engine to dig in a little bit more, and certainly don't be shy about asking questions, because if you're asking questions within this space, you're going to uncover one of two things. You're either going to get answers, which is great, hopefully answers you can trust, or you're going to determine that the person to whom you're asking the questions doesn't know the answers. Either way, you're going to be a better-informed user of the product or service, or a better advocate for your clients, whether internally or externally, so that you're better positioned to know what it is you're doing and aware of more of the risks.

Priya Keshav:

Great closing thoughts. I agree, I think it's coming, and I feel like there is a lot that we'll learn as a community. I know we've talked about frameworks to evaluate AI, but to some extent we really don't know the full potential of AI as it's being implemented in all these projects or in various technologies. And the converse to that is, we talk about AI wars and cyber wars fought with AI, so how we evaluate AI is one open question. There's just tremendous opportunity for someone who wants to spend the time, and I think it's a wave that's definitely coming. I agree with you. Thank you so much for your time, James. Appreciate your thoughts.

James Sherer:

Well, thanks so much for having me.
