PAC statistical model checking of mean payoff in discrete- and continuous-time MDP

Authors

KŘETÍNSKÝ Jan, AGARWAL Chaitanya, GUHA Shibashis, PAZHAMALAI M.

Year of publication 2025
Type Article in Periodical
Magazine / Source Formal Methods in System Design (Springer)
MU Faculty or unit Faculty of Informatics

Citation
Web https://link.springer.com/article/10.1007/s10703-024-00463-0#ethics
DOI https://doi.org/10.1007/s10703-024-00463-0
Keywords Markov decision processes; Statistical model checking; Mean payoff; Reinforcement learning
Description Markov decision processes (MDPs) and continuous-time MDPs (CTMDPs) are the fundamental models for non-deterministic systems with probabilistic uncertainty. Mean payoff (a.k.a. long-run average reward) is one of the most classical objectives considered in their context. We provide the first practical algorithm to compute mean payoff probably approximately correctly in unknown MDPs. Our algorithm is anytime in the sense that, if terminated prematurely, it returns an approximate value with the required confidence. Further, we extend it to unknown CTMDPs. We do not require any knowledge of the state space or the number of successors of a state, but only a lower bound on the minimum transition probability, which has been advocated in the literature. Our algorithm learns the unknown MDP/CTMDP through repeated, directed sampling, thus spending less time on learning components with a smaller impact on the mean payoff. In addition to providing probably approximately correct (PAC) bounds for our algorithm, we also demonstrate its practical nature by running experiments on standard benchmarks.
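To make the PAC and anytime notions in the description concrete: a PAC bound guarantees that the returned estimate v̂ satisfies |v̂ − v| ≤ ε with probability at least 1 − δ. The Python sketch below is an illustration only, not the paper's algorithm: it estimates the mean payoff under a fixed policy by Monte Carlo simulation, with a finite horizon approximating the long-run average, and emits a confidence interval after every episode, so terminating at any point still yields a valid, if looser, interval. The simulator, the policy, and all parameters are hypothetical stand-ins; the paper's algorithm additionally resolves non-determinism, directs its sampling toward high-impact components, and extends to CTMDPs.

```python
import math
import random

def simulate_step(state, action):
    """Hypothetical black-box simulator standing in for the unknown MDP:
    returns (next_state, reward) with rewards in [0, 1]."""
    if random.random() < 0.7:
        state = (state + action + 1) % 5
    reward = 0.8 if state == 0 else 0.2
    return state, reward

def anytime_mean_payoff(policy, max_episodes=50_000, horizon=200, delta=0.05):
    """Yield, after every episode, an estimate of the finite-horizon mean payoff
    under `policy` plus a confidence interval.  A union bound over all episode
    counts (delta_n = delta / (n * (n + 1)) sums to delta) makes all intervals
    in the sequence hold simultaneously with probability >= 1 - delta, so
    stopping prematurely still gives a valid, if looser, PAC-style bound."""
    total = 0.0
    for n in range(1, max_episodes + 1):
        state, acc = 0, 0.0
        for _ in range(horizon):
            state, reward = simulate_step(state, policy(state))
            acc += reward
        total += acc / horizon                      # episode average, in [0, 1]
        # Hoeffding radius for n i.i.d. [0, 1]-bounded samples at confidence delta_n
        eps = math.sqrt(math.log(2.0 * n * (n + 1) / delta) / (2.0 * n))
        est = total / n
        yield n, est, max(0.0, est - eps), min(1.0, est + eps)

# Usage: run until the interval is tight enough, or stop at any time.
for n, est, lo, hi in anytime_mean_payoff(lambda s: s % 2):
    if hi - lo < 0.05:
        print(f"after {n} episodes: {est:.3f} in [{lo:.3f}, {hi:.3f}]")
        break
```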
