

Accelerating the learning of partial differential equations (PDEs) from
experimental data will speed up the pace of scientific discovery. Previous
randomized algorithms exploit sparsity in PDE updates for acceleration.
However, such methods are applicable only to a limited class of decomposable
PDEs, which have sparse features in the value domain. We propose Reel, which
accelerates the learning of PDEs via random projection and has much broader
applicability. Reel exploits sparsity by decomposing dense updates into sparse
ones in both the value and frequency domains. This decomposition enables
efficient learning when the source of the updates consists of gradually
changing terms across large areas (sparse in the frequency domain) in addition
to a few rapid updates concentrated in a small set of "interfacial" regions
(sparse in the value domain). Random projection is then applied to compress
the sparse signals for learning. To expand the model's applicability, Reel
uses Taylor series expansion to approximate nonlinear PDE updates with
polynomials in the decomposable form. Theoretically, we derive a
constant-factor approximation between the projected loss function and the
original one using a poly-logarithmic number of projected dimensions.
Experimentally, we provide empirical evidence that Reel leads to faster
learning of PDE models (a 70-98% reduction in training time when the data is
compressed to 1% of its original size) with quality comparable to that of the
non-compressed models.
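The core idea — splitting a dense update into a frequency-domain-sparse smooth part plus a value-domain-sparse "interfacial" part, then compressing with a random projection — can be sketched in a few lines of NumPy. This is an illustrative toy, not Reel's actual implementation: the signal, the top-k Fourier truncation, and the Gaussian projection matrix are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024

# Synthetic PDE-style update: a slowly varying term (few Fourier modes,
# so sparse in the frequency domain) plus a handful of sharp
# "interfacial" spikes (sparse in the value domain).
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
smooth = np.sin(x) + 0.5 * np.cos(3.0 * x)
spikes = np.zeros(n)
spikes[rng.choice(n, size=5, replace=False)] = 3.0
u = smooth + spikes

# Decompose: keep the k largest Fourier coefficients as the smooth part;
# the residual is then (approximately) sparse in the value domain.
k = 8
U = np.fft.fft(u)
keep = np.argsort(np.abs(U))[-k:]
U_low = np.zeros_like(U)
U_low[keep] = U[keep]
u_smooth = np.fft.ifft(U_low).real
u_sparse = u - u_smooth

# Random projection (Johnson-Lindenstrauss style): compress the sparse
# residual from n to m << n dimensions while approximately preserving
# Euclidean norms, so losses computed on z track losses on u_sparse.
m = 64
P = rng.standard_normal((m, n)) / np.sqrt(m)
z = P @ u_sparse

ratio = np.linalg.norm(z) / np.linalg.norm(u_sparse)
print(f"compressed {n} -> {m} dims, norm ratio ~ {ratio:.2f}")
```

With this scaling of `P`, the squared norm of the projection concentrates around the original squared norm, which is the kind of guarantee the abstract's constant-factor approximation between the projected and original loss functions relies on.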

ID: 400385; Unique Viewers: 0
Unique Voters: 0
Total Votes: 0
Votes:
Latest Change: Sept. 15, 2023, 7:31 a.m. Changes:
Dictionaries:
Words:
Spaces:
Views: 10
CC: No Creative Commons license
Comments: