
Convolutional neural networks (CNNs) have made significant strides in medical image
analysis in recent years. However, the local nature of the convolution operator
prevents CNNs from capturing global, long-range interactions. Recently,
Transformers have gained popularity in the computer vision community, including for
medical image segmentation, but the scalability issues of the self-attention
mechanism and the lack of a CNN-like inductive bias have limited their adoption. In
this work, we present MaxViT-UNet, an encoder-decoder-based hybrid vision
transformer for medical image segmentation. The proposed hybrid decoder, also
built on the MaxViT block, is designed to harness the power of convolution and
self-attention at each decoding stage with minimal computational burden. The
multi-axis self-attention in each decoder stage helps differentiate between object
and background regions more efficiently. The hybrid decoder block first fuses the
lower-level features, upsampled via transpose convolution, with the skip-connection
features coming from the hybrid encoder; the fused features are then refined using
the multi-axis attention mechanism. The proposed decoder block is repeated multiple
times to accurately segment the nuclei regions. Experimental results on the MoNuSeg
dataset prove the effectiveness of the proposed technique: our MaxViT-UNet
outperforms the previous CNN-only (UNet) and Transformer-only (Swin-UNet)
techniques by large margins of 2.36% and 5.31% on the Dice metric, respectively.
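To make the decoder description concrete, here is a minimal NumPy sketch of one hybrid decoder stage as the abstract describes it: upsample the lower-resolution features, fuse them with the encoder skip connection, then refine with multi-axis (block + grid) attention. This is not the authors' implementation; the attention uses identity query/key/value projections, nearest-neighbor repetition stands in for the transpose convolution, and the function names (`block_attention`, `grid_attention`, `hybrid_decoder_stage`) are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(x):
    # x: (windows, tokens, C); single-head self-attention with identity
    # Q/K/V projections, for illustration only.
    scores = x @ x.transpose(0, 2, 1) / np.sqrt(x.shape[-1])
    return softmax(scores) @ x

def block_attention(x, p):
    # Local attention inside non-overlapping p x p windows of an (H, W, C) map.
    H, W, C = x.shape
    w = x.reshape(H // p, p, W // p, p, C).transpose(0, 2, 1, 3, 4)
    w = attention(w.reshape(-1, p * p, C))
    w = w.reshape(H // p, W // p, p, p, C).transpose(0, 2, 1, 3, 4)
    return w.reshape(H, W, C)

def grid_attention(x, g):
    # Sparse global attention: tokens strided H//g (resp. W//g) apart attend
    # to each other, giving each position a long-range receptive field.
    H, W, C = x.shape
    w = x.reshape(g, H // g, g, W // g, C).transpose(1, 3, 0, 2, 4)
    w = attention(w.reshape(-1, g * g, C))
    w = w.reshape(H // g, W // g, g, g, C).transpose(2, 0, 3, 1, 4)
    return w.reshape(H, W, C)

def hybrid_decoder_stage(low, skip, p=4, g=4):
    # low:  (H/2, W/2, C1) lower-resolution decoder features
    # skip: (H, W, C2) skip-connection features from the hybrid encoder
    up = low.repeat(2, axis=0).repeat(2, axis=1)          # stand-in for transpose conv
    fused = np.concatenate([up, skip], axis=-1)           # fuse upsampled + skip
    return grid_attention(block_attention(fused, p), g)   # multi-axis refinement
```

A decoder built this way stacks several such stages, doubling spatial resolution at each one, until the output matches the input image size for per-pixel nuclei prediction. Note that with a 1x1 window, block attention reduces to the identity, since each token attends only to itself.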

ID: 130140
Latest Change: May 16, 2023, 7:32 a.m.