arXiv:2404.16273v1 Announce Type: new
Abstract: We have developed novel algorithms for optimizing nonsmooth, nonconvex functions in which the nonsmoothness is caused by nonsmooth operators that appear in the analytical form of the objective. The algorithms are based on encoding the active branch of each nonsmooth operator so that the active smooth component function and its code can be extracted at any given point, and the transition of the solution from one smooth piece to another can be detected by tracking the change of active branches of all the operators. This mechanism makes it possible to collect information about the sequence of active component functions encountered in previous iterations (i.e., the component transition information) and to use it in constructing the current local model or identifying a descent direction in an economical and effective manner. Based on this novel idea, we have developed a trust-region method and a joint gradient descent method driven by the component information for optimizing encodable piecewise-smooth, nonconvex functions. It has further been shown that the joint gradient descent method, using a technique called proactive component function accessing, can achieve a linear rate of convergence if a so-called multi-component Polyak-Lojasiewicz inequality and some other regularity conditions hold in a neighborhood of a local minimizer.
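To make the branch-encoding mechanism concrete, here is a minimal Python sketch, not the paper's implementation: it equips two elementary nonsmooth operators (abs and max) with branch codes on a hypothetical objective f(x) = |x0| + max(x1, x0^2), reads off the gradient of the currently active smooth component, and runs plain gradient descent that reports whenever the tuple of active branches changes between iterates (a component transition). The objective, step size, and descent rule are illustrative assumptions; the paper's trust-region and joint gradient descent methods exploit the transition information in more sophisticated ways.

# Sketch only: each nonsmooth operator reports which smooth branch is active
# at a point, so the locally active smooth component and its "code" can be
# extracted, and a change of code between iterates signals a transition to a
# different smooth piece.
import numpy as np

def abs_branch(t):
    """Return (value, branch code, derivative of the active branch)."""
    return (t, +1, 1.0) if t >= 0 else (-t, -1, -1.0)

def max_branch(a, b):
    """Return (value, branch code, gradient w.r.t. (a, b) of the active branch)."""
    return (a, 0, (1.0, 0.0)) if a >= b else (b, 1, (0.0, 1.0))

def objective_with_code(x):
    """f(x) = |x0| + max(x1, x0**2), evaluated together with its active-branch code."""
    v1, c1, d1 = abs_branch(x[0])
    v2, c2, (ga, gb) = max_branch(x[1], x[0] ** 2)
    value = v1 + v2
    # Gradient of the currently active smooth component only (chain rule on x0**2).
    grad = np.array([d1 + gb * 2.0 * x[0], ga])
    return value, (c1, c2), grad

def gradient_descent(x0, step=0.1, iters=50):
    """Plain descent on the active component, logging branch transitions."""
    x, prev_code = np.asarray(x0, dtype=float), None
    for k in range(iters):
        f, code, g = objective_with_code(x)
        if prev_code is not None and code != prev_code:
            print(f"iter {k}: transition to component {code}, f = {f:.4f}")
        prev_code = code
        x = x - step * g
    return x

x_star = gradient_descent([1.5, -2.0])

In this toy run the abs operator flips its branch as x0 crosses zero, and the logged code changes are exactly the component transition information that the paper's methods accumulate across iterations.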

ID: 823024
Latest Change: April 26, 2024, 7:32 a.m.
Views: 7 (Unique Viewers: 0); Total Votes: 0
CC: No Creative Commons license