The Critical Role of Governance and Ethics in AI: Jeff Saviano from EY – expert.ai


AI governance and ethics are critical topics that demand our attention. On the Insurance Unplugged podcast, host Lisa Wardlaw dives into these pressing issues with Jeff Saviano. In addition to a 33-year career at EY, where he is Consulting Emerging Technology Strategy & Governance Leader, Saviano holds an appointment at the Edmond & Lily Safra Center for Ethics at Harvard University, is a Research Affiliate at the Massachusetts Institute of Technology, and is a lecturer at the Boston University School of Law.
This insightful conversation covers the current state of AI governance, the challenges organizations face in moving from principles to practical applications, and the essential role of corporate boards in navigating the complex AI ethics landscape.
The full conversation is a must-listen for anyone interested in the intersection of AI, ethics and corporate governance. Here is an excerpt.
The State of AI Governance
Lisa Wardlaw:
How are people thinking about AI when it comes to governance and ethics?
Jeff Saviano:
I’ll start with the state of AI governance frameworks in the world a year ago and today. The approach that many organizations are taking is naming and reciting responsible AI principles. There has been great work from NIST (the National Institute of Standards and Technology) at the US Department of Commerce, from the OECD (Organisation for Economic Co-operation and Development), and from others on enunciating responsible AI principles, such as adhering to data privacy, ensuring there’s no bias or discrimination in models, and upholding fairness and accountability. Many organizations have cited and adopted these principles, saying, we stand with those principles. In 2020, there were only 80 organizations that publicly announced ‘we believe in these Responsible AI principles.’ Today, that’s measured in the hundreds, probably thousands. Every company has done it.
The problem is, they’re oftentimes words on a page. They’re incongruent; they conflict. An organization may say, I’m all for data privacy and I’m also for transparency. Sometimes those don’t go hand in hand. We saw many, many organizations announcing that they believed in these principles, and that’s where it ended.
Where we’re trying to pick it up is by creating an executable framework, real actions. At Harvard, our team calls it an ‘applied AI ethics initiative.’ Some call it practical ethics, some call it applied ethics. It means much the same thing: the intent is not to have purely philosophical discussions. The intent is to find a linking mechanism from the philosophical to the practical, and that’s what we’re trying to do.
Lisa:
It’s easy to come up with academic frameworks and philosophies, things we can put on a piece of paper and make look good. But applying them in an executable, practical, pragmatic way is something we seem to struggle with, both in the technology itself and in governance and frameworks. How does the linking work? What do people struggle with, and how do they get beyond the academics of it?
Jeff:
What we consistently heard from boards and also from upper management, from business leaders in the C-Suite, is that the first place they wanted to start was ‘tell me what I’m legally mandated to do. What is legal compliance?’ And companies get that very well. They have chief legal officers, chief compliance officers, they have risk officers, they understand how to comply with existing laws.
And we broke that down into two categories. The first category is existing, non-AI-specific laws: GDPR will still apply to your AI systems, and the Civil Rights Act of 1964 applies to your AI systems to ensure there’s no discrimination. So existing laws become very, very important.
The next layer is AI-specific laws on record. I don’t believe we’ll ever see a heavy hand of global AI regulation. It won’t be consistent; it’ll be a patchwork. There are plenty of examples, from President Clinton in the US in 1997 saying of e-commerce, we’re going to leave regulation to industry. We want industry to self-regulate that emerging technology. We’re going to stay away. We don’t want to impose a burden on interstate commerce. And he did it again in 1998, signing the Internet Tax Freedom Act. He said, we’re not going to let you tax the internet. So, there’s a long history of government saying that this is powerful technology that could produce incredible commercial and GDP value, and we don’t want to over-regulate it and stifle innovation. Because of that, we think ethics matters more and more.
Corporate Boards: The Convergence of AI and Sustainability
Lisa:
How are boards thinking about their experience as it pertains to AI governance and ethics?
Jeff:
The average director may be in their mid-sixties, and we’re hearing from many directors that they’re craving additional knowledge and education about AI. They didn’t come up in their organizations using these tools. We’re also trying to map AI to other areas where the board is familiar.
I can give you an example. Boards are already wrestling with cyber issues, and you can approach AI issues from an offensive or a defensive standpoint. From a defensive standpoint, the way you approach cyber risks may provide some governance considerations for how you want to manage your AI risk. Of course, there are plenty of opportunities with AI, whereas we don’t look at cyber as managing opportunities. So that’s one.
The second is around sustainability. Boards have been wrestling with stakeholder capitalism: we want to do good in the world, and many companies have signed onto the Paris Accord and are managing their carbon emissions. Yet how much of that can we do, and where does it rub up against shareholder primacy? All of that, I think, is important.
Fast forward to 2021 and an important case involving Boeing. It was a shareholder derivative suit against the board related to the 737 MAX software issues. The court found that the board wasn’t doing enough to protect its stakeholders, the customers. Now, for the first time, we have a court decision saying that you must also ensure there is a duty to your stakeholders, not just your shareholders. I think that is a game changer for AI risk management.
Lisa:
If you put a sustainability lens on that, how do we still leverage these emerging AI practices alongside data privacy practices and sustainability practices?
Jeff:
Let’s say compute was costing a company $200 million a year just to lift, shift, and move data, and every time you do that, you’re consuming energy. This board and management team said, we need to apply our sustainability principles all the way down through our AI. They asked, how can we start bifurcating the data we actually learn on from the data we need to lift, shift, and move? They went from a $200 million cost of compute, which maps directly to energy use and computational power, down to about $20 million. So, from $200 million to $20 million by deploying these concepts.
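To make the bifurcation idea concrete, here is a minimal sketch in Python of how one might separate the subset of data a model actually learns from out of the full volume that would otherwise be lifted, shifted, and moved. The dataset, column names, sampling fraction, and per-gigabyte cost are hypothetical assumptions for illustration, not details from the episode.

```python
# Minimal sketch (hypothetical): keep only the data a model learns on,
# so transfer cost (a rough proxy for energy use) scales with the
# learning subset rather than the full dataset.
import pandas as pd

COST_PER_GB_MOVED = 0.09  # assumed transfer cost in USD per GB (illustrative)

def select_training_subset(df: pd.DataFrame, feature_cols: list[str],
                           label_col: str, sample_frac: float = 0.1) -> pd.DataFrame:
    """Keep only the columns and rows the model actually learns from."""
    subset = df[feature_cols + [label_col]].dropna()
    return subset.sample(frac=sample_frac, random_state=42)

def estimated_move_cost(df: pd.DataFrame) -> float:
    """Estimate the cost of moving a frame from its in-memory size."""
    gb = df.memory_usage(deep=True).sum() / 1e9
    return gb * COST_PER_GB_MOVED

# Usage (hypothetical dataset and columns): compare moving everything
# versus moving only the learning subset.
# full = pd.read_parquet("claims.parquet")
# subset = select_training_subset(full, ["age", "premium"], "claim_filed")
# print(estimated_move_cost(full), estimated_move_cost(subset))
```

The point of the sketch is the ordering: filter and sample before you move the data, so the cost (and the associated energy consumption) is incurred only for the subset the model needs.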
Jeff’s Call to Action for Responsible AI
Lisa:
What’s your call to action for people in this realm to say, ‘this is what you need to think about in a simple way’?
Jeff:
Just to hit the first point about behavioral studies and behavioral science: isn’t the crux of what we’re talking about really making better decisions, data-driven decision making? From our collaborators at MIT, I learned about the “combinatorial creativity of data.” Look at your data as a whole and think about your constituents. Think about the insurance industry and the trove of data that insurers have. How could they enhance it? How could they look to emerging data marketplaces to buy and sell data and increase their data holdings, and why? Because it’s about making better decisions and looking for unique, novel insights in the data. Then there’s the art of the nudge: how could you nudge people to take one action over another? Anytime I think of behavioral science, I think of the art of nudging others.
But then also, look, what’s important in the AI domain is that some companies are forming ethics councils. And who are they composed of? Well, as you said, a few sociologists, anthropologists. There’s a great proposal from Kinney Zalesne, one of the advisors from the Harvard research network called GETTING-Plurality, calling for citizen representatives on boards. What a great idea: put on a citizen representative. What if every board had a citizen rep on it? Wouldn’t the world be a better place? I think it’s a wonderful way to get at the impact you’re having in the world. Board composition matters, and there are some really unique suggestions and recommendations out there.
Lisa:
What’s the final wrap up that you’d like to leave with our audience?
Jeff:
I’ll pass along a comment made on a panel I was moderating at MIT last month. One of the panelists was talking about the need for governance and used an analogy that really hit home for me: in the auto industry, when brakes were first introduced, you’d think brakes would slow us all down, in the world and in our cars, but actually brakes allow us to go faster.
And I think the same applies here to governance: these governance systems are not there to slow us down but to enable innovation. Good governance will enable innovation; it won’t stifle it. Like brakes, governance will allow us to go faster, but to do it responsibly. That really hit home for me, and I think it’s a good analogy for us to think about.
 
Listen to the Insurance Unplugged podcast episode with Jeff Saviano.
