3 things we learned about the modeling mindset

Principles & Frameworks

What’s your mindset when building an AI model? Christoph Molnar, a statistician and machine learning expert, explains that our approach to factors such as interpretability and uncertainty is what takes our models beyond mere performance.

Christoph argues that our modeling mindset is the key to making ML applicable to the target problem and explainable to non-technical users and regulators. He joined us on The AI Fundamentalists to discuss his thinking – here are 3 things that we learned:

1. The task matters more than the tooling

It can be hard to separate machine learning from statistical modeling, but Christoph delineated the statistical concepts behind ML from the holistic thinking required for real-world problem-solving and applications. He highlighted that any modeling effort should begin with clarity about objectives, purpose, and outcomes. A good question to ask is whether ML is even the best approach for the task or problem.

2. Interpretability means different things to different people

Making models explainable to non-technical audiences has long been a challenge for data scientists and systems engineers. Organizations that use models to make predictions need to consider what interpretability means. Internal model users typically need to understand how to act on predictions, whereas regulators are more concerned with the features and functions of the model.

3. Conformal prediction with Python offers an easy technique for quantifying uncertainty 

Data scientists need to go beyond point predictions and answer questions about how confident their model is – for example, by producing prediction intervals with guaranteed coverage. Christoph’s latest book, Introduction To Conformal Prediction With Python, explains the mathematical ideas behind conformal prediction and offers practical examples of how it can be used to quantify uncertainty.
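To make the idea concrete, here is a minimal sketch of split conformal prediction on toy regression data. This is not an example from Christoph’s book – the data, model, and variable names are all illustrative assumptions – but it shows the core recipe: fit a model, score absolute residuals on a held-out calibration set, and use their quantile to build a prediction interval.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (hypothetical): y = 2x + noise
x = rng.uniform(0, 10, 500)
y = 2 * x + rng.normal(0, 1, 500)

# Split into a training set and a calibration set
x_train, x_cal = x[:300], x[300:]
y_train, y_cal = y[:300], y[300:]

# Fit a simple model (least-squares line) on the training split
slope, intercept = np.polyfit(x_train, y_train, 1)

def predict(x):
    return slope * x + intercept

# Nonconformity scores: absolute residuals on the calibration split
scores = np.abs(y_cal - predict(x_cal))

# Quantile for ~90% coverage, with the standard finite-sample correction
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction interval for a new point: model prediction plus/minus q
x_new = 5.0
lo, hi = predict(x_new) - q, predict(x_new) + q
```

The appeal Christoph points to is that this recipe is distribution-free and model-agnostic: swap the least-squares line for any predictor, and the calibration step still yields intervals with the target coverage.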

Listen to the full interview in Episode 4: Modeling with Christoph Molnar.