
We examine online safe multi-agent reinforcement learning using constrained
Markov games in which agents compete by maximizing their expected total rewards
under a constraint on expected total utilities. Our focus is confined to an
episodic two-player zero-sum constrained Markov game with independent
transition functions that are unknown to agents, adversarial reward functions,
and stochastic utility functions. For such a Markov game, we employ an approach
based on the occupancy measure to formulate it as an online constrained
saddle-point problem with an explicit constraint. We extend the Lagrange
multiplier method in constrained optimization to handle the constraint by
creating a generalized Lagrangian with minimax decision primal variables and a
dual variable. Next, we develop an upper confidence reinforcement learning
algorithm to solve this Lagrangian problem while balancing exploration and
exploitation. Our algorithm updates the minimax decision primal variables via
online mirror descent and the dual variable via a projected gradient step, and
we prove that it enjoys a sublinear rate of $O((|X|+|Y|) L \sqrt{T(|A|+|B|)})$
for both the regret and the constraint violation after playing $T$ episodes of
the game. Here, $L$ is the horizon of each episode, and $(|X|,|A|)$ and
$(|Y|,|B|)$ are the state and action space sizes of the min-player and the
max-player, respectively. To
the best of our knowledge, we provide the first provably efficient online safe
reinforcement learning algorithm in constrained Markov games.
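
The primal-dual structure described in the abstract can be made concrete with a short sketch. The following Python snippet is an illustrative toy, not the paper's algorithm: it performs an entropic online mirror descent step on an occupancy measure against a Lagrangian cost, and a projected gradient step on a scalar dual variable. The problem sizes, step sizes, constraint threshold, sign conventions, and the plain renormalization used in place of a true projection onto the occupancy-measure polytope are all assumptions made for brevity.

import numpy as np

def mirror_descent_step(z, lagrangian_cost, eta):
    # Entropic online mirror descent (multiplicative weights) on an occupancy
    # measure z.  The paper projects back onto the occupancy-measure polytope;
    # renormalizing to a probability distribution is a simplifying assumption.
    z_new = z * np.exp(-eta * lagrangian_cost)
    return z_new / z_new.sum()

def dual_step(lam, violation, alpha, lam_max):
    # Projected gradient ascent on the scalar dual variable, kept in [0, lam_max].
    return float(np.clip(lam + alpha * violation, 0.0, lam_max))

# Hypothetical sizes for the min-player: |X| states, |A| actions.
num_states, num_actions = 5, 3
rng = np.random.default_rng(0)

z = np.full((num_states, num_actions), 1.0 / (num_states * num_actions))  # occupancy measure
lam = 0.0          # dual variable
threshold = 0.4    # assumed utility constraint level
eta, alpha, lam_max = 0.1, 0.05, 10.0

for episode in range(100):
    # In the actual algorithm these would be optimistic (upper-confidence)
    # estimates built from observed data; random placeholders stand in here.
    r_hat = rng.uniform(size=(num_states, num_actions))  # reward estimate
    g_hat = rng.uniform(size=(num_states, num_actions))  # utility estimate

    # Lagrangian cost seen by the min-player: minimize reward while the dual
    # variable penalizes violating the utility constraint (signs illustrative).
    lagrangian_cost = r_hat - lam * g_hat
    z = mirror_descent_step(z, lagrangian_cost, eta)

    # Dual ascent on the estimated constraint violation; in the paper this is
    # the expected total utility over an episode rather than a one-step value.
    violation = threshold - float((z * g_hat).sum())
    lam = dual_step(lam, violation, alpha, lam_max)

The entropic (multiplicative-weights) update is the standard instantiation of online mirror descent over a probability simplex, which is why the sketch uses exponential reweighting followed by normalization.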
