The accuracy of predictive models for solitary pulmonary nodule (SPN)
diagnosis can be greatly increased by incorporating repeat imaging and medical
context, such as electronic health records (EHRs). However, clinically routine
modalities such as imaging and diagnostic codes can be asynchronous and
irregularly sampled over different time scales, posing obstacles to
longitudinal multimodal learning. In this work, we propose a transformer-based
multimodal strategy to integrate repeat imaging with longitudinal clinical
signatures from routinely collected EHRs for SPN classification. We perform
unsupervised disentanglement of latent clinical signatures and leverage
time-distance scaled self-attention to jointly learn from clinical signature
expressions and chest computed tomography (CT) scans. Our classifier is
pretrained on 2,668 scans from a public dataset and 1,149 subjects with
longitudinal chest CTs, billing codes, medications, and laboratory tests from
EHRs of our home institution. Evaluation on 227 subjects with challenging SPNs
revealed a significant AUC improvement over a longitudinal multimodal baseline
(0.824 vs 0.752 AUC), as well as improvements over a single cross-sectional
multimodal scenario (0.809 AUC) and a longitudinal imaging-only scenario (0.741
AUC). This work demonstrates significant advantages with a novel approach for
co-learning longitudinal imaging and non-imaging phenotypes with transformers.
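The idea of time-distance scaled self-attention can be illustrated with a minimal sketch: attention logits between two events are penalized in proportion to the time elapsed between them, so temporally distant observations contribute less. The exponential-style decay, its rate, and the function/parameter names below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def time_distance_attention(q, k, v, times, decay=0.1):
    """Single-head self-attention whose scores are down-weighted by the
    time elapsed between events (a sketch; the decay form is an assumption).

    q, k, v : arrays of shape (n, d) for n events
    times   : array of shape (n,) with each event's timestamp (e.g. days)
    decay   : penalty per unit of time distance (hypothetical hyperparameter)
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                  # standard scaled dot-product scores
    dt = np.abs(times[:, None] - times[None, :])   # pairwise time distances, shape (n, n)
    scores = scores - decay * dt                   # penalize temporally distant pairs
    # softmax over keys (numerically stabilized)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Example: four events (e.g. CT scans or EHR snapshots) at irregular times.
rng = np.random.default_rng(0)
n, d = 4, 8
q, k, v = (rng.normal(size=(n, d)) for _ in range(3))
times = np.array([0.0, 30.0, 365.0, 400.0])        # days since first event
out = time_distance_attention(q, k, v, times)
```

This lets a transformer ingest asynchronous, irregularly sampled modalities directly, rather than forcing them onto a fixed grid by resampling or imputation.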