Monitoring machine learning models in production

Talk


Monitoring deployed models is crucial for continuing to provide high-quality ML-enabled services. Key areas include monitoring model performance and detecting adversarial instances, outliers and drift using statistical techniques. The talk goes in depth on the algorithmic challenges of monitoring models in production, and on the open-source libraries and infrastructure that support these capabilities.
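As a minimal sketch of one such statistical technique, the example below detects drift in a single feature by comparing a production batch against training-time reference data with a two-sample Kolmogorov-Smirnov test. This is a self-contained NumPy illustration (the data and thresholds are hypothetical); libraries such as the speaker's Alibi Detect package provide ready-made detectors for this purpose.

```python
import numpy as np

def ks_2samp(ref, test, terms=100):
    """Two-sample KS statistic with an asymptotic p-value
    (Kolmogorov distribution approximation)."""
    ref, test = np.sort(ref), np.sort(test)
    grid = np.concatenate([ref, test])
    # Maximum distance between the two empirical CDFs.
    d = np.max(np.abs(
        np.searchsorted(ref, grid, side="right") / len(ref)
        - np.searchsorted(test, grid, side="right") / len(test)))
    n_eff = len(ref) * len(test) / (len(ref) + len(test))
    lam = (np.sqrt(n_eff) + 0.12 + 0.11 / np.sqrt(n_eff)) * d
    k = np.arange(1, terms + 1)
    p = 2 * np.sum((-1.0) ** (k - 1) * np.exp(-2.0 * (k * lam) ** 2))
    return d, float(min(max(p, 0.0), 1.0))

rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, 1000)       # reference data from training time
same = rng.normal(0.0, 1.0, 500)       # production batch, same distribution
shifted = rng.normal(0.5, 1.0, 500)    # production batch with a mean shift

d1, p1 = ks_2samp(ref, same)
d2, p2 = ks_2samp(ref, shifted)
print(f"no drift:   D={d1:.3f}, p={p1:.3f}")
print(f"mean shift: D={d2:.3f}, p={p2:.3f}")
```

A low p-value on the shifted batch flags drift, at which point the model may need retraining; for multivariate inputs the test is typically applied per feature (with a multiple-testing correction) or on a lower-dimensional representation.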

Speaker

Arnaud Van Looveren

Arnaud leads the data science research effort at Seldon Technologies, focusing on machine learning model interpretability (XAI) and on outlier, adversarial and drift detection. The team’s work can be found in the open-source projects Alibi and Alibi Detect. Arnaud recently discussed the challenges of monitoring and explaining models in production at the “Challenges in Deploying and Monitoring Machine Learning Systems” workshop at ICML.