MS in Data Science - Seminar Series

11 November
12:30 PM - 2:00 PM
On-Campus Event - SFH Downtown

Introduction of the speaker: Funmi Doro is an Engineering Manager in Spotify's ML Platform organization. She currently leads the development of Spotify's ML Lifecycle Platform, whose goal is to enable ML Practitioners to develop a deep understanding of model performance across the model lifecycle, with standard metadata and metrics infrastructure for Offline Evaluation, A/B Testing, and Production Monitoring of ML Models.

Topic: Model Evaluation - How to avoid surprising outcomes after models are deployed to production.

Abstract:

Product Teams are accustomed to the lean, agile, iterative approach to product development, where continuous value is delivered to users through a Build -> Measure -> Learn loop. This methodology depends on a standard approach to measuring incremental value consistently across iterations.

While this iterative approach is well-suited to many applied ML problems, stakeholders often have a different understanding of the problem and of the goals for each iteration. For example, it is common for Business Executives and Product Managers to measure success using metrics like revenue, cost savings, and user engagement, while ML Practitioners use metrics like accuracy, precision, and recall. This divide often becomes most evident after models are shipped to production and outcomes diverge significantly from expectations.

During this talk, we will explore this divide, its impact, and some approaches to closing it.