PAC statistical model checking of mean payoff in discrete- and continuous-time MDP

Warning

This publication does not fall under the Institute of Computer Science but under the Faculty of Informatics. The official page of the publication is on the muni.cz website.
Authors

KŘETÍNSKÝ Jan, AGARWAL Chaitanya, GUHA Shibashis, PAZHAMALAI M.

Year of publication 2025
Type Article in a peer-reviewed journal
Journal / Source Springer
Faculty / MU unit

Faculty of Informatics

Citation
www https://link.springer.com/article/10.1007/s10703-024-00463-0#ethics
DOI https://doi.org/10.1007/s10703-024-00463-0
Keywords Markov decision processes; Statistical model checking; Mean payoff; Reinforcement learning
Description Markov decision processes (MDPs) and continuous-time MDPs (CTMDPs) are the fundamental models for non-deterministic systems with probabilistic uncertainty. Mean payoff (a.k.a. long-run average reward) is one of the most classical objectives considered in their context. We provide the first practical algorithm to compute mean payoff probably approximately correctly in unknown MDPs. Our algorithm is anytime in the sense that, if terminated prematurely, it returns an approximate value with the required confidence. Further, we extend it to unknown CTMDPs. We do not require any knowledge of the state space or of the number of successors of a state, but only a lower bound on the minimum transition probability, which has been advocated in the literature. Our algorithm learns the unknown MDP/CTMDP through repeated, directed sampling, and thus spends less time on learning components with a smaller impact on the mean payoff. In addition to providing probably approximately correct (PAC) bounds for our algorithm, we also demonstrate its practical nature by running experiments on standard benchmarks.
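To make the mean-payoff objective concrete, here is a toy sketch (not the paper's PAC algorithm) that estimates the long-run average reward of an MDP fixed to a single memoryless policy, i.e. a Markov chain, by plain simulation. The two states, transition probabilities, and rewards are invented for illustration only.

```python
import random

# Hypothetical two-state Markov chain: for each state, a list of
# (successor, probability) pairs, plus a per-state reward.
TRANSITIONS = {0: [(0, 0.7), (1, 0.3)],
               1: [(0, 0.4), (1, 0.6)]}
REWARD = {0: 0.0, 1: 1.0}

def sample_next(state, rng):
    """Sample a successor of `state` according to its distribution."""
    r = rng.random()
    acc = 0.0
    for succ, p in TRANSITIONS[state]:
        acc += p
        if r < acc:
            return succ
    return TRANSITIONS[state][-1][0]  # guard against rounding

def estimate_mean_payoff(steps=200_000, seed=1):
    """Estimate the long-run average reward from one long run."""
    rng = random.Random(seed)
    state, total = 0, 0.0
    for _ in range(steps):
        total += REWARD[state]
        state = sample_next(state, rng)
    return total / steps
```

For this chain the stationary distribution is (4/7, 3/7), so the exact mean payoff is 3/7 ≈ 0.4286, and the simulation estimate converges to it as the run length grows; the paper's contribution is to obtain such estimates with PAC guarantees in an unknown model, steering the sampling toward components that matter for the value.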
Related projects:

