Learning Algorithms for Verification of Markov Decision Processes

Authors

BRÁZDIL Tomáš, CHATTERJEE Krishnendu, CHMELÍK Martin, FOREJT Vojtěch, KŘETÍNSKÝ Jan, KWIATKOWSKA Marta, MEGGENDORFER Tobias, PARKER David, UJMA Mateusz

Year of publication 2025
Type Article in Periodical
Magazine / Source TheoretiCS
MU Faculty or unit

Faculty of Informatics

Citation
Web https://theoretics.episciences.org/14694
DOI https://doi.org/10.46298/theoretics.25.10
Keywords Electrical Engineering and Systems Sciences-Systems and Control; Computer Science-Artificial Intelligence; Computer Science-Logic in Computer Science
Description We present a general framework for applying learning algorithms and heuristic guidance to the verification of Markov decision processes (MDPs). The primary goal of our techniques is to improve performance by avoiding an exhaustive exploration of the state space and instead focusing on particularly relevant areas of the system, guided by heuristics. Our work builds on previous results of Brázdil et al., significantly extending them as well as refining several details and fixing errors. The presented framework focuses on probabilistic reachability, a core problem in verification, and is instantiated in two distinct scenarios. The first assumes that full knowledge of the MDP is available, in particular precise transition probabilities. It performs a heuristic-driven partial exploration of the model, yielding precise lower and upper bounds on the required probability. The second tackles the case where we may only sample the MDP without knowing the exact transition dynamics. Here, we obtain probabilistic guarantees, again in terms of both lower and upper bounds, which provide efficient stopping criteria for the approximation. In particular, the latter is an extension of statistical model checking (SMC) for unbounded properties in MDPs. In contrast to other related approaches, we do not restrict our attention to time-bounded (finite-horizon) or discounted properties, nor do we assume any particular structural properties of the MDP.
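
To give a flavour of the first scenario described above (heuristic-driven partial exploration of a fully known MDP with lower and upper bounds on the reachability probability), the following is a minimal BRTDP-style sketch in Python. The toy MDP, its state and action names, the stopping threshold, and the assumption that the only end components are the target and an explicit sink state are illustrative choices made for this sketch; they are not taken from the paper or its accompanying tooling.

```python
import random

# Toy MDP given as: state -> action -> list of (probability, successor).
# Layout and names are illustrative assumptions, not from the paper.
MDP = {
    "s0": {"a": [(0.5, "s1"), (0.5, "s2")], "b": [(1.0, "s2")]},
    "s1": {"a": [(0.7, "goal"), (0.3, "s0")]},
    "s2": {"a": [(0.4, "goal"), (0.6, "sink")]},
    "goal": {"a": [(1.0, "goal")]},
    "sink": {"a": [(1.0, "sink")]},
}
TARGET = {"goal"}
SINK = {"sink"}  # states assumed to never reach the target

def brtdp_reachability(mdp, init, eps=1e-6, max_episodes=100_000):
    """Heuristic-driven partial exploration in the spirit of BRTDP:
    keep lower (L) and upper (U) bounds on the maximal reachability
    probability and tighten them only along simulated paths."""
    L = {s: 1.0 if s in TARGET else 0.0 for s in mdp}
    U = {s: 0.0 if s in SINK else 1.0 for s in mdp}

    def q_upper(s, a):
        return sum(p * U[t] for p, t in mdp[s][a])

    def q_lower(s, a):
        return sum(p * L[t] for p, t in mdp[s][a])

    for _ in range(max_episodes):
        if U[init] - L[init] < eps:
            break
        # Simulate a path, greedily following the action with the best
        # upper bound (the optimistic heuristic that guides exploration).
        path, s = [], init
        while s not in TARGET and s not in SINK and len(path) < 1000:
            path.append(s)
            act = max(mdp[s], key=lambda a: q_upper(s, a))
            r, acc = random.random(), 0.0
            for p, t in mdp[s][act]:
                acc += p
                if r <= acc:
                    s = t
                    break
        # Back-propagate Bellman updates along the visited path only.
        for s in reversed(path):
            U[s] = max(q_upper(s, a) for a in mdp[s])
            L[s] = max(q_lower(s, a) for a in mdp[s])
    return L[init], U[init]

if __name__ == "__main__":
    lo, hi = brtdp_reachability(MDP, "s0")
    print(f"P_max(reach goal) lies in [{lo:.6f}, {hi:.6f}]")
```

Simulated paths are steered toward actions with the highest upper bound, so backups concentrate on the states that can still influence the value at the initial state, and the loop stops once the gap between the upper and lower bound at the initial state falls below the chosen threshold. Handling general end components, as the paper does, requires additional machinery (e.g. collapsing them) that this sketch omits.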
