I was once challenged to make a list of the top 10 machine learning controls needed to ensure risk is mitigated and adequate assurance is provided for responsible AI deployments. Over the course of my research, the following controls emerged; they are an excellent jumping-off point for any organization deploying machine learning:
- Data governance structures are in place for a thorough understanding of modeling data.
- Data preprocessing routines are standardized and follow statistically valid techniques.
- Modeling data is appropriately segregated into train/test/validation sets without pollution (i.e., the data is split before any variable transformations are fitted).
- Standard, well-established machine learning models are deployed.
- A robust, cross-functional team with appropriate compensation mechanisms thoroughly evaluates machine learning models for inappropriate biases and "humanness" prior to deployment, and reevaluates them on a regular basis.
- Accountable executives are held responsible for model biases and erratic model performance that adversely affects customers.
- Appropriate metrics are used for training and continued evaluation of the effectiveness of the implemented model.
- Monitoring processes are appropriate and sufficient to provide timely identification when models behave unexpectedly.
- Models are thoroughly validated prior to deployment, and regularly throughout their deployment, by creating a validation dataset that spans the range of inputs and having subject matter experts evaluate the predictions for appropriate outcomes.
- Model predictions are logged with sufficient detail for local interpretability of outcomes.
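The segregation control above has a concrete technical consequence: transformations such as scaling must be fitted on the training split only, never on the full dataset. A minimal sketch of that discipline, using scikit-learn (the specific dataset and model here are illustrative):

```python
# Sketch: avoiding train/test pollution by fitting transformations
# on the training split only. Illustrative data and model choices.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Segregate FIRST, so test rows never influence fitted statistics.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# The pipeline fits the scaler on X_train only; scoring on X_test
# reuses the training means/variances rather than refitting them.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
```

Bundling the transformation and estimator into one pipeline makes the control auditable: there is no code path in which test data can leak into the fitted preprocessing statistics.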
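The monitoring control can be made concrete with a drift statistic comparing live inputs against the training baseline. One common choice is the Population Stability Index; a sketch follows, where the 0.2 alert threshold is a widely used rule of thumb rather than a standard:

```python
# Sketch: detecting unexpected model-input behavior by comparing the
# live feature distribution to the training baseline via the
# Population Stability Index (PSI). Threshold values are illustrative.
import numpy as np

def psi(baseline, live, bins=10):
    """Population Stability Index between two 1-D samples."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    b_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    l_pct = np.histogram(live, bins=edges)[0] / len(live)
    # Clip empty bins to avoid division by zero and log(0).
    b_pct = np.clip(b_pct, 1e-6, None)
    l_pct = np.clip(l_pct, 1e-6, None)
    return float(np.sum((l_pct - b_pct) * np.log(l_pct / b_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)   # training-time distribution
stable = rng.normal(0.0, 1.0, 5000)     # live data, no drift
shifted = rng.normal(0.8, 1.0, 5000)    # live data, drifted inputs

stable_score = psi(baseline, stable)
drift_score = psi(baseline, shifted)
```

Running such a check on a schedule, per feature, gives the "timely identification" the control asks for: a PSI above the alert threshold triggers human review before erratic predictions reach customers.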
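For the logging control, "sufficient detail for local interpretability" means each record must capture the exact inputs, the model version, and the output, so any individual decision can be reconstructed and explained later. A minimal sketch using only the standard library (the function name, field names, and example values are illustrative, not a specific product API):

```python
# Sketch: logging each prediction with enough context (inputs, model
# version, output, timestamp) to reconstruct and explain it later.
# `log_prediction` and the example record are hypothetical.
import json
import datetime

def log_prediction(record_store, model_version, features, prediction):
    """Append one immutable, replayable prediction record as JSON."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "features": features,      # the exact inputs the model saw
        "prediction": prediction,  # the output returned to the caller
    }
    record_store.append(json.dumps(entry))
    return entry

store = []  # stand-in for an append-only log or audit database
log_prediction(store, "credit-risk-v1.3", {"income": 52000, "age": 41}, 0.27)
```

Pinning the model version in every record matters: without it, a logged input/output pair cannot be replayed against the model that actually produced it.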
These controls are constructed at a high enough level to be relatively timeless and applicable across a wide range of industries and use cases. In conjunction with a Machine Learning Assurance approach based on the CRISP-DM framework and the Monitaur platform, these top 10 AI and ML controls can help provide the assurance and confidence needed to unlock ML-powered innovation while still meeting risk management and regulatory needs.