Learning from Time Series for Health

Workshop at NeurIPS 2022, Room 392

Contact: ts4h.chairs@gmail.com



Schedule


2022 Workshop Agenda

 

Location: New Orleans, LA

Friday, December 2, 2022 (all times in CST)


09:00 - 09:15 Opening remarks from the organizers
09:15 - 09:45 Invited Talk: Yan Liu (USC), Interpretability and Fairness of Deep Learning Models on Health Dataset

The recent release of large-scale healthcare datasets has greatly propelled research on data-driven deep learning models for healthcare applications. However, due to the black-box nature of these models, concerns about interpretability, fairness, and bias in healthcare scenarios where human lives are at stake call for a careful and thorough examination of both datasets and models. In this work, we focus on MIMIC-IV, the largest publicly available healthcare dataset, and conduct comprehensive analyses of interpretability as well as dataset representation bias and prediction fairness of deep learning models for in-hospital mortality prediction.

09:45 - 10:15 Invited Talk: Stephanie Hyland (Microsoft Research), Machine learning for intensive and perioperative care

Intensive and perioperative care are promising application areas for machine learning due to the richness and diversity of monitoring and data capture. However, a combination of complex (time series) data, high-stakes decision-making in a shifting environment, and translational challenges means machine learning is far from a panacea. In this talk, I will touch on various attempts to use machine learning to solve relevant problems in ICU/perioperative care, highlighting successes, open questions, and reflections.

10:15 - 10:30 Coffee Break
10:30 - 11:00 Invited Talk: Danielle Belgrave (DeepMind)
11:00 - 11:30 Contributed talks for poster session 1
11:30 - 12:30 Coffee break and poster session 1
12:30 - 13:30 Mentorship lunch break
13:30 - 14:00 Invited Talk: Emily Fox (Stanford), AI-driven remote monitoring for type 1 diabetes

Type 1 diabetes (T1D) is a chronic condition in which the pancreas produces little to no insulin. The result is a constant management challenge involving the intricate interplay between insulin injected, food consumed, and resulting blood glucose levels, the dynamics of which are further modulated by activity levels, diurnal effects, and other sources of variation. Over 9 million people worldwide are faced with this life-long T1D management challenge, and poor glucose control can have both severe immediate and long-term health consequences. Wearable devices, such as continuous glucose monitors (CGMs), insulin pumps, and activity monitors, hold the promise to transform healthcare for patients with type 1 diabetes. However, fundamental challenges arise in realizing the promise of such remote patient monitoring (RPM): extracting clinically actionable insights from these data streams, scaling remote monitoring systems to large patient populations with constrained clinician time, and assessing the treatment effects of RPM from the complex wearable time series. Stanford’s Lucile Packard Children’s Hospital has deployed such an RPM system for new-onset T1D pediatric patients. In this talk, we explore neural mechanistic models of glucose that encode domain-specific mechanistic information while flexibly learning aspects of the system dynamics. We also discuss data-driven policies for selecting patients for review in a micro-randomized trial and estimating the effects of remote monitoring.

14:00 - 14:30 Invited Talk: David Sontag (MIT), Modeling multivariate time series of disease progression

I will give an overview of time series data often found in health care, with a particular focus on data at the time scale of days to years, as is relevant for disease progression modeling. I'll describe our recent advances in parameterizing deep Markov models using insights from the pharmacodynamics literature (Hussain et al., ICML '21), and how one can do risk stratification from a patient's historical time-series data, including our recent work using transformers and introducing a technique called reverse distillation that can help mitigate overfitting (Kodialam et al., AAAI '21). Time permitting, I'll also talk about how one can use a deep generative model to cluster time series with substantial missing data while accounting for censoring (Chen et al., AAAI '22).

14:30 - 15:00 Contributed talks for poster session 2
15:00 - 16:00 Coffee break and poster session 2
16:00 - 17:00 Panel Discussion with Danielle Belgrave, David Sontag, Luca Foschini, and Elham Dolatabadi: "Challenges and lessons learned in deploying ML time series models"
17:00 - 17:10 Closing Remarks