
The Adaptive Large Neighborhood Search (ALNS) algorithm has shown
considerable success in solving complex combinatorial optimization problems
(COPs). ALNS selects various heuristics adaptively during the search process,
leveraging their strengths to find good solutions for optimization problems.
However, the effectiveness of ALNS depends on the proper configuration of its
selection and acceptance parameters. To address this limitation, we propose a
Deep Reinforcement Learning (DRL) approach that selects heuristics, adjusts
parameters, and controls the acceptance criteria during the search process. The
proposed method aims to learn, based on the state of the search, how to
configure the next iteration of the ALNS to obtain good solutions to the
underlying optimization problem. We evaluate the proposed method on a
time-dependent orienteering problem with stochastic weights and time windows,
used in an IJCAI competition. The results show that our approach outperforms
vanilla ALNS and ALNS tuned with Bayesian Optimization. It also obtains better
solutions than two state-of-the-art DRL approaches, the winning methods of the
competition, while requiring far fewer observations for training. The
implementation of our approach will be made publicly
available.
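The ALNS loop the abstract refers to has three moving parts: destroy-and-repair heuristics picked by adaptive roulette-wheel selection, a weight-update rule, and an acceptance criterion (commonly simulated annealing). These are precisely the selection and acceptance parameters the proposed DRL controller learns to set per iteration. The following is a minimal illustrative sketch on a toy TSP; the operator names, scores, and parameter values (`reaction`, `temp`, `cool`) are our own assumptions, not the paper's implementation.

```python
import math
import random

def tour_length(tour, pts):
    """Total length of a closed tour over points `pts`."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def random_removal(tour, pts, rng, k=3):
    """Destroy heuristic: drop k random cities from the tour."""
    removed = rng.sample(tour, k)
    return [c for c in tour if c not in removed], removed

def worst_removal(tour, pts, rng, k=3):
    """Destroy heuristic: drop the k cities whose removal saves the most length."""
    def saving(i):
        prev, cur, nxt = tour[i - 1], tour[i], tour[(i + 1) % len(tour)]
        return (math.dist(pts[prev], pts[cur]) + math.dist(pts[cur], pts[nxt])
                - math.dist(pts[prev], pts[nxt]))
    worst = sorted(range(len(tour)), key=saving, reverse=True)[:k]
    removed = [tour[i] for i in worst]
    return [c for c in tour if c not in removed], removed

def greedy_insert(partial, removed, pts):
    """Repair heuristic: reinsert each removed city at its cheapest position."""
    tour = partial[:]
    for c in removed:
        best, pos = float("inf"), 0
        for i in range(len(tour)):
            prev, nxt = tour[i - 1], tour[i]
            delta = (math.dist(pts[prev], pts[c]) + math.dist(pts[c], pts[nxt])
                     - math.dist(pts[prev], pts[nxt]))
            if delta < best:
                best, pos = delta, i
        tour.insert(pos, c)
    return tour

def alns_tsp(pts, iters=400, seed=0, temp=1.0, cool=0.995, reaction=0.2):
    rng = random.Random(seed)
    destroyers = [random_removal, worst_removal]
    weights = [1.0] * len(destroyers)           # adaptive selection weights
    cur = best = list(range(len(pts)))
    cur_len = best_len = tour_length(cur, pts)
    for _ in range(iters):
        # roulette-wheel selection of a destroy heuristic
        d_idx = rng.choices(range(len(destroyers)), weights=weights)[0]
        partial, removed = destroyers[d_idx](cur, pts, rng)
        cand = greedy_insert(partial, removed, pts)
        cand_len = tour_length(cand, pts)
        # simulated-annealing acceptance criterion
        if cand_len < cur_len or rng.random() < math.exp((cur_len - cand_len) / temp):
            cur, cur_len = cand, cand_len
            score = 1.0
            if cand_len < best_len:
                best, best_len = cand, cand_len
                score = 3.0                     # reward a new global best more
        else:
            score = 0.0
        # adaptive weight update (exponential smoothing of operator scores)
        weights[d_idx] = (1 - reaction) * weights[d_idx] + reaction * score
        weights[d_idx] = max(weights[d_idx], 0.05)  # keep every heuristic selectable
        temp *= cool
    return best, best_len
```

In vanilla ALNS the smoothing factor `reaction`, the scores, and the cooling schedule are fixed hyperparameters (or tuned offline, e.g. with Bayesian Optimization); the paper's contribution is to let a DRL agent set these selection and acceptance decisions online from the state of the search.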

ID: 129839
Latest Change: May 16, 2023, 7:31 a.m.