Well done. You've clicked the tower. This would actually achieve something if you had logged in first. Use the key for that. The name takes you home. This is where all the applicables sit, and you can't apply any changes to my site unless you are logged in.

Our policy is best summarized as "we don't care about _you_, we care about _them_". No emails, so there's no forgetting your password. You have no rights. It's like you don't even exist. If you publish material, I reserve the right to remove it or use it myself.

Don't impersonate anyone. Don't name anyone against their will. You can lose everything if you cross the line, and no, I won't cancel your automatic payments first, so you'll have to do it the hard way. See how serious this sounds? That's how seriously you're meant to take these rules.


Self-supervised learning (SSL) enables label efficient training for machine
learning models. This is essential for domains such as medical imaging, where
labels are costly and time-consuming to curate. However, the most effective
supervised or SSL strategy for transferring models to different healthcare
systems or novel tasks is not well understood. In this work, we systematically
experiment with a variety of supervised and self-supervised pretraining
strategies using multimodal datasets of medical images (chest X-rays) and text
(radiology reports). We then evaluate their performance on data from two
external institutions with diverse sets of tasks. In addition, we experiment
with different transfer learning strategies to effectively adapt these
pretrained models to new tasks and healthcare systems. Our empirical results
suggest that multimodal SSL gives substantial gains over unimodal SSL in
performance across new healthcare systems and tasks, comparable to models
pretrained with full supervision. We demonstrate additional performance gains
with models further adapted to the new dataset and task, using multimodal
domain-adaptive pretraining (DAPT), linear probing then finetuning (LP-FT), and
both methods combined. We offer suggestions for alternative models to use in
scenarios where not all of these additions are feasible. Our results provide
guidance for improving the generalization of medical image interpretation
models to new healthcare systems and novel tasks.
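The abstract names linear probing then finetuning (LP-FT) as one of the adaptation strategies. As a rough illustration only, here is a minimal PyTorch-style sketch of the two-stage LP-FT procedure; the encoder, data loaders, and hyperparameters are hypothetical placeholders, not the paper's actual setup.

```python
# Minimal LP-FT sketch (assumed PyTorch setup; names are illustrative).
# Stage 1 trains only a linear head on frozen pretrained features;
# Stage 2 unfreezes the encoder and finetunes end to end at a lower
# learning rate, starting from the probed head.
import torch
import torch.nn as nn

def lp_ft(encoder: nn.Module, feat_dim: int, num_classes: int,
          train_loader, lp_epochs=5, ft_epochs=5, device="cuda"):
    # Assumes encoder(x) returns a (batch, feat_dim) feature tensor.
    encoder = encoder.to(device)
    head = nn.Linear(feat_dim, num_classes).to(device)
    criterion = nn.CrossEntropyLoss()

    # Stage 1: linear probing -- freeze the pretrained encoder.
    for p in encoder.parameters():
        p.requires_grad = False
    opt = torch.optim.Adam(head.parameters(), lr=1e-3)
    for _ in range(lp_epochs):
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            with torch.no_grad():
                feats = encoder(x)
            loss = criterion(head(feats), y)
            opt.zero_grad(); loss.backward(); opt.step()

    # Stage 2: full finetuning -- unfreeze everything and train
    # with a smaller learning rate.
    for p in encoder.parameters():
        p.requires_grad = True
    opt = torch.optim.Adam(
        list(encoder.parameters()) + list(head.parameters()), lr=1e-5)
    for _ in range(ft_epochs):
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            loss = criterion(head(encoder(x)), y)
            opt.zero_grad(); loss.backward(); opt.step()

    return encoder, head
```

The motivation for the first stage is that finetuning from a randomly initialized head tends to distort the pretrained features; probing first gives the head a reasonable starting point before the encoder is unfrozen.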
