arXiv:2403.18026v1 Announce Type: new
Abstract: High-quality fluorescence imaging of biological systems is limited by processes such as photobleaching and phototoxicity and, in many cases, by limited access to the latest generations of microscopes. Moreover, low temporal resolution can lead to motion blur in living systems. Our work presents a deep learning (DL) generative-adversarial approach to the problem of obtaining high-quality (HQ) images from their low-quality (LQ) equivalents. We propose a generative adversarial network (GAN) for contrast transfer between two separate microscopy systems: a confocal microscope (producing HQ images) and a wide-field fluorescence microscope (producing LQ images). Our model demonstrates that such transfer is possible, yielding generated HQ images characterized by low mean squared error (MSE), high structural similarity index (SSIM), and high peak signal-to-noise ratio (PSNR) values. For our best model, comparing generated HQ images with the HQ ground truth, the median values of the metrics are 6×10⁻⁴, 0.9413, and 31.87 for MSE, SSIM, and PSNR, respectively. In contrast, comparing the LQ images with the HQ ground truth, the median values are 0.0071, 0.8304, and 21.48 for MSE, SSIM, and PSNR, respectively. We therefore observe a significant increase of 14% and 49% for SSIM and PSNR, respectively. These results, together with other single-system cross-modality studies, provide a proof of concept for the further implementation of cross-system biological image quality enhancement.
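
The abstract quantifies reconstruction quality with three standard image metrics: MSE, SSIM, and PSNR. As a rough illustration only (not the authors' pipeline), the Python sketch below computes these metrics for a single generated-HQ / ground-truth-HQ image pair with scikit-image; the file names are hypothetical, and the paper reports medians over an entire test set.

# Minimal sketch (not the authors' code): compute MSE, SSIM, and PSNR
# between a GAN-generated HQ image and its confocal HQ ground truth.
from skimage import io, img_as_float
from skimage.metrics import (
    mean_squared_error,
    peak_signal_noise_ratio,
    structural_similarity,
)

def evaluate_pair(generated_path: str, ground_truth_path: str) -> dict:
    """Return MSE, SSIM, and PSNR for one image pair, with intensities scaled to [0, 1]."""
    gen = img_as_float(io.imread(generated_path, as_gray=True))
    gt = img_as_float(io.imread(ground_truth_path, as_gray=True))
    return {
        "MSE": mean_squared_error(gt, gen),
        "SSIM": structural_similarity(gt, gen, data_range=1.0),
        "PSNR": peak_signal_noise_ratio(gt, gen, data_range=1.0),
    }

if __name__ == "__main__":
    # Hypothetical file names for a single test pair.
    scores = evaluate_pair("hq_generated.tif", "hq_ground_truth.tif")
    print({k: round(v, 4) for k, v in scores.items()})

Running the same comparison with the wide-field LQ input in place of the generated image gives the baseline values against which the reported SSIM and PSNR improvements are measured.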

ID: 806544; Unique Viewers: 0
Unique Voters: 0
Total Votes: 0
Votes:
Latest Change: March 28, 2024, 7:32 a.m. Changes:
Dictionaries:
Words:
Spaces:
Views: 15
CC:
No Creative Commons license
Comments: