Make lemonade out of Lemonade: Takeaways from recent events

Being a disruptor is hard. It requires ignoring doubters, taking disproportionate risks, pushing the status quo, and – more often than not – hitting speed bumps.

Recently, Lemonade hit a speed bump in their journey as a leading disruptor and innovator in the insurance industry. It is likely that they will put this news cycle behind them; however, this very public incident should create a moment of reflective awareness for every major insurance company and C-suite officer.

What evidence and readiness does your organization have to explain what your AI does? If asked, are you prepared to prove how you are managing the risks that consumers and regulators are increasingly raising about the fairness, safety, and compliance of your AI? There is a 65% chance the answer is no.

Let’s talk AI risk

Pay attention, AI innovators: without more intentional efforts to talk about and address the risks of algorithmic systems, Machine Learning (ML), and Artificial Intelligence (AI), we are going to hit a massive innovation speed bump. If all we do is talk about “black boxes” and impossible-to-understand neural networks, without also clearly making and celebrating investments in AI Governance and risk management, the public and regulators will push pause.

Companies are not blatantly throwing AI systems over the fence to make unsafe and unfair decisions about people. Regulated industries have well-established internal compliance, governance, and audit functions; however, there is an awareness gap around the risks presented by their use of models and big data. In many cases, companies have not built the cross-functional transparency and understanding needed to explain how their applications were built and how they actually work.

Lemonade, as one of the most prominent Insurtech companies in the world, promotes better consumer experiences and products fundamentally centered around new technology and AI. One could argue that, with AI at the core of their business, they are more prepared to handle and resolve this incident, but what about a “traditional” carrier or market leader pushing aggressively to innovate and disrupt themselves with AI?

While companies are aggressively promoting their investments in and use of AI, they are not talking enough about their investments in responsible governance and oversight. The recent accelerated investment in AI Ethics is a fantastic step in the right direction, but it is only a piece of the puzzle. Companies should talk about how they are investing in people, process, and technology to build the oversight, governance, and controls that make these systems safe, fair, compliant, accountable, and transparent. These efforts will create confidence and trust.


Almost a year ago, the FDA, through its Software as a Medical Device (SaMD) guidance, highlighted both its support for the transformative potential of AI in healthcare and its desire to find the right balance of regulatory oversight: protecting consumers without stifling innovation. Its goal is not a perfect, fault-free AI world, but established standards and methods of enforcement that reduce the likelihood and scope of incidents when they happen. The agency acknowledges that it cannot compete with industry for the data science and engineering talent needed to deeply inspect an AI system, so it will need to lean on controls-based principles and corporate evidence of sound governance.

Why reference the FDA’s regulatory outlook on AI? Because while insurance has its own unique challenges and considerations, the fundamental expectations emerging from regulators across sectors and geographies are the same: show us that you know what your AI is doing, that you understand the risks, and that you have controls to manage those risks.

Read the NAIC, FTC, DOD, and European Commission outlooks, and you’ll see a pattern of principles that provide a blueprint for a company to organize around and to evidence effort and intent. Regulators and the public know mistakes will happen – but errors and negligence are two very different things.

Do not hide behind “it’s a black box”

Having personally spoken with hundreds of executives and innovation leaders across major regulated industries, I can report that not one is actually using completely opaque technology, such as complex neural networks, to make consequential, independent decisions about their customers’ health, finances, employment, or safety. The risk is too great and too hard to measure. They are absolutely investing in these assets and using them to enable humans, but the systems are not on auto-pilot (pun intended).

The most common form of AI today is some variant of “classical machine learning.” These systems can be instrumented so they are recorded, versioned, reproduced, audited, monitored, and continuously validated. They can carry documentation of the governance controls and business decisions made throughout the development process. Companies can evidence the work performed to evaluate data, test models, and verify the actual performance of systems. All of this creates transparency and confidence; it builds trust, and trust leads to accelerated deployment and benefits.
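To make “recorded, versioned, reproduced, audited” concrete, here is a minimal sketch of that kind of instrumentation in Python, using only the standard library. The field names, model identifier, and threshold are hypothetical illustrations, not a standard; a production system would typically pair something like this with a model registry and an append-only audit log.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative audit record for a single scoring decision. The field names
# are hypothetical; the point is that every consequential prediction can be
# traced back to a specific model version, an input-data fingerprint, and a
# timestamp for later review and reproduction.
def audit_record(model_version, features, prediction, threshold):
    payload = json.dumps(features, sort_keys=True).encode()
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # e.g. a git tag or model-registry ID
        "input_sha256": hashlib.sha256(payload).hexdigest(),  # data fingerprint
        "prediction": prediction,
        "decision_threshold": threshold,  # the business rule that was applied
    }

# Appending each record to an append-only log lets auditors replay and
# validate decisions against the exact model and inputs that were used.
record = audit_record(
    "claims-model-1.4.2",  # hypothetical model version
    {"claim_amount": 1200.0, "prior_claims": 0},
    prediction=0.07,
    threshold=0.5,
)
print(json.dumps(record, indent=2))
```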

These “basic” AI systems are incredibly exciting. They are helping create Covid-19 vaccines, new insurance products, new medical devices, better financial instruments, safer transportation, and greater equity in compensation and hiring. They also do, and will, have issues. Long before AI existed, carriers were found responsible for unfair practices, people got into car accidents, and doctors gave poor health advice. Yet we still allow people to make consequential decisions because of the oversight and controls placed around those entrusted with them.

Celebrate AI governance – lean into the risk

We need more companies to acknowledge these risks, own them and proactively and proudly show their employees, customers, and investors that they are committed to managing them. Is there a simple fix for the comprehensive risks of using big data and models to make decisions? No, but humans and markets are generally forgiving of unintentional mistakes. We are not forgiving of willful ignorance and lack of effort.

Companies building high-stakes AI systems should establish assurances by bringing together people, process, data, and technology in a lifecycle governance approach. Don’t search for a perfect solution that removes bias or guarantees fairness; it doesn’t exist (yet). Start with already understood principles and practices, and layer them onto your AI systems, as in the sketch below. And prepare to talk publicly with your internal and external stakeholders about your efforts.
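As one illustration of layering understood practices onto an AI system, the sketch below treats release sign-off as a simple checklist gate. The control names are hypothetical examples drawn from common model-risk practices, not a prescribed or exhaustive standard; the principle is that a model does not ship until every documented control has evidence behind it.

```python
# Hypothetical lifecycle controls a team might require before deployment.
REQUIRED_CONTROLS = [
    "data_lineage_documented",
    "bias_testing_reviewed",
    "performance_validated_on_holdout",
    "monitoring_and_alerts_configured",
    "model_documentation_signed_off",
]

def release_approved(completed_controls):
    """Return (approved, missing); deployment proceeds only when nothing is outstanding."""
    missing = [c for c in REQUIRED_CONTROLS if c not in completed_controls]
    return (not missing, missing)

approved, missing = release_approved({"data_lineage_documented", "bias_testing_reviewed"})
print(approved, missing)  # False, with the three outstanding controls listed
```

The gate itself is trivial; the governance value comes from the people and process behind each named control.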

Let’s make lemonade out of Lemonade

Returning to where we started, Lemonade has provided everyone with an object lesson in how to talk to the public about AI. Beyond the lessons about clear and accurate communication on social media, business leaders should take away that we have turned a corner in public awareness of the problems AI can create. The steady drumbeat of alarming mistakes has permeated the social consciousness. There needs to be more investment in, and celebration of, AI Governance to balance the hype about new AI capabilities and features.

We have not done enough to show the broader public that AI can be fair, safe, responsible, and accountable, perhaps even more so than the traditional human processes it often replaces. It will take work, but it is absolutely achievable. Most of these systems are not nearly as complex as many regulators and members of the public believe. If companies do not implement assurances and fundamental governance around them before the inevitable regulation of software plays out, we are going to see a major slowdown in the rate of AI innovation.

Billions of dollars are being invested in AI, and we should all be crazy excited about how it will make our lives better… Bring on the lemonade.

As seen on PropertyCasualty360