
Supervised masking approaches in the time-frequency domain aim to employ deep
neural networks to estimate a multiplicative mask to extract clean speech. This
leads to a single estimate for each input without any guarantees or measures of
reliability. In this paper, we study the benefits of modeling uncertainty in
clean speech estimation. Prediction uncertainty is typically categorized into
aleatoric uncertainty and epistemic uncertainty. The former refers to inherent
randomness in data, while the latter describes uncertainty in the model
parameters. In this work, we propose a framework to jointly model aleatoric and
epistemic uncertainties in neural network-based speech enhancement. The
proposed approach captures aleatoric uncertainty by estimating the statistical
moments of the speech posterior distribution and explicitly incorporates the
uncertainty estimate to further improve clean speech estimation. For epistemic
uncertainty, we investigate two Bayesian deep learning approaches, Monte Carlo
dropout and deep ensembles, to quantify the uncertainty of the neural network
parameters. Our analyses show that the proposed framework captures practical
and reliable uncertainty, and that combining the different sources of
uncertainty yields more reliable predictive uncertainty estimates.
Furthermore, we demonstrate the benefits of modeling uncertainty on speech
enhancement performance by evaluating the framework on different datasets,
exhibiting notable improvement over comparable models that fail to account for
uncertainty.
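The paper's framework is not reproduced here, but the Monte Carlo dropout idea it builds on can be sketched in a few lines: keep dropout active at inference, run several stochastic forward passes of the mask estimator, and treat the sample mean as the mask prediction and the sample variance as an epistemic uncertainty estimate. Everything below (the tiny two-layer estimator, its random weights, the frame sizes) is a placeholder for illustration, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny mask estimator: one hidden layer with dropout.
# Weights are random placeholders, not a trained network.
F = 16            # number of frequency bins per frame
H = 32            # hidden units
W1 = rng.normal(0, 0.1, (F, H))
W2 = rng.normal(0, 0.1, (H, F))

def mask_forward(x, p_drop=0.5):
    """One stochastic forward pass. Dropout stays active at inference,
    which is the core of Monte Carlo dropout."""
    h = np.maximum(x @ W1, 0.0)                   # ReLU
    keep = rng.random(h.shape) > p_drop
    h = h * keep / (1.0 - p_drop)                 # inverted dropout
    return 1.0 / (1.0 + np.exp(-(h @ W2)))        # sigmoid mask in [0, 1]

def mc_dropout_mask(x, n_samples=50):
    """Sample mean approximates the mask; sample variance across the
    stochastic passes serves as an epistemic uncertainty estimate."""
    samples = np.stack([mask_forward(x) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.var(axis=0)

# Placeholder noisy magnitude-spectrogram frame.
noisy_mag = np.abs(rng.normal(0, 1, (1, F)))
mask_mean, mask_var = mc_dropout_mask(noisy_mag)
enhanced = mask_mean * noisy_mag                  # multiplicative masking
```

A deep ensemble would replace the repeated dropout passes with one forward pass per independently trained network and compute the same mean and variance across ensemble members; the aleatoric component in the paper comes instead from the estimated moments of the speech posterior, which this sketch does not model.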

Latest Change: May 16, 2023, 7:32 a.m.