

The accuracy of predictive models for solitary pulmonary nodule (SPN)
diagnosis can be greatly increased by incorporating repeat imaging and medical
context, such as electronic health records (EHRs). However, clinically routine
modalities such as imaging and diagnostic codes can be asynchronous and
irregularly sampled over different time scales, which poses obstacles to
longitudinal multimodal learning. In this work, we propose a transformer-based
multimodal strategy that integrates repeat imaging with longitudinal clinical
signatures from routinely collected EHRs for SPN classification. We perform
unsupervised disentanglement of latent clinical signatures and leverage
time-distance scaled self-attention to jointly learn from clinical signature
expressions and chest computed tomography (CT) scans. Our classifier is
pretrained on 2,668 scans from a public dataset and 1,149 subjects with
longitudinal chest CTs, billing codes, medications, and laboratory tests from
the EHRs of our home institution. Evaluation on 227 subjects with challenging
SPNs revealed a significant AUC improvement over a longitudinal multimodal
baseline (0.824 vs. 0.752 AUC), as well as improvements over a single
cross-section multimodal scenario (0.809 AUC) and a longitudinal imaging-only
scenario (0.741 AUC). This work demonstrates the advantages of a novel approach
for co-learning longitudinal imaging and non-imaging phenotypes with transformers.
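
The time-distance scaled self-attention mentioned above can be illustrated with a minimal sketch (not the paper's implementation): attention logits are penalized by the pairwise time gap between observations, so temporally distant visits contribute less. The function name, the fixed `decay` coefficient, and the toy inputs are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def time_distance_attention(q, k, v, times, decay=0.1):
    """Self-attention whose logits are penalized by the pairwise time gap
    between observations, so temporally distant events attend less.
    q, k, v: (n, d) arrays; times: (n,) acquisition times (e.g. in days)."""
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)                    # standard scaled dot-product
    gaps = np.abs(times[:, None] - times[None, :])   # pairwise |t_i - t_j|
    logits = logits - decay * gaps                   # down-weight distant pairs
    return softmax(logits, axis=-1) @ v

# Toy sequence of 4 visits spread over a year.
rng = np.random.default_rng(0)
n, d = 4, 8
q = rng.standard_normal((n, d))
k = rng.standard_normal((n, d))
v = rng.standard_normal((n, d))
out = time_distance_attention(q, k, v, times=np.array([0.0, 30.0, 180.0, 365.0]))
print(out.shape)  # (4, 8)
```

With `decay > 0`, a scan taken a year before a visit is softly discounted relative to one taken a month before, which is one way to cope with the irregular sampling the abstract describes.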

ID: 129959
Latest Change: May 16, 2023, 7:32 a.m.