While black-box variational inference is widely used, there is no proof that its stochastic optimization succeeds. We suggest this is due to a theoretical gap in existing stochastic optimization proofs: namely, the challenge of gradient estimators with unusual noise bounds and of a composite, non-smooth objective. For dense Gaussian variational families, we observe that existing gradient estimators based on reparameterization satisfy a quadratic noise bound, and we give novel convergence guarantees for proximal and projected stochastic gradient descent using this bound. This provides the first rigorous guarantee that black-box variational inference converges for realistic inference problems.
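To make the setup concrete, here is a minimal sketch of projected stochastic gradient descent on the negative ELBO for a dense Gaussian variational family, using a single-sample reparameterization gradient estimator. It is written in JAX; the target log-density `log_p`, the step size, and the projection set (lower-triangular scale factors with diagonal bounded below) are illustrative assumptions, not the paper's exact algorithm.

```python
import jax
import jax.numpy as jnp

# Illustrative target: an unnormalized standard-Gaussian log-density.
def log_p(z):
    return -0.5 * jnp.dot(z, z)

def neg_elbo_sample(params, u):
    """One-sample negative-ELBO estimate via reparameterization z = m + C u."""
    m, C = params
    z = m + C @ u
    entropy = jnp.sum(jnp.log(jnp.diag(C)))  # Gaussian entropy up to a constant
    return -(log_p(z) + entropy)

grad_fn = jax.jit(jax.grad(neg_elbo_sample))

def project(C, eps=1e-3):
    """Project the scale onto lower-triangular matrices with diagonal >= eps
    (an illustrative convex feasible set that keeps the family well-defined)."""
    C = jnp.tril(C)
    d = jnp.maximum(jnp.diag(C), eps)
    return C - jnp.diag(jnp.diag(C)) + jnp.diag(d)

key = jax.random.PRNGKey(0)
m, C = jnp.zeros(2), jnp.eye(2)  # dense Gaussian family N(m, C C^T)
step = 1e-2
for _ in range(1000):
    key, sub = jax.random.split(key)
    u = jax.random.normal(sub, (2,))    # base noise u ~ N(0, I)
    gm, gC = grad_fn((m, C), u)         # single-sample gradient estimate
    m, C = m - step * gm, project(C - step * gC)
```

The quadratic noise bound the abstract refers to controls the variance of exactly this kind of single-sample estimator; replacing `project` with a proximal operator for the non-smooth part of the objective would give the proximal variant the guarantees also cover.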