A policy iteration algorithm for Markov decision processes skip-free in one direction

Lambert, J., Van Houdt, B. and Blondia, C. (2005) A policy iteration algorithm for Markov decision processes skip-free in one direction. In: 1st International ICST Workshop on Tools for Solving Structured Markov Chains.


Abstract

In this paper we present a new policy iteration algorithm for Markov decision processes (MDPs) that are skip-free in one direction. The algorithm, which is based on matrix analytic methods, is in the same spirit as the algorithm of White (Stochastic Models, 21:785-797, 2005), which was limited to matrices
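For readers unfamiliar with the classical scheme the paper builds on, the sketch below shows standard policy iteration (exact evaluation plus greedy improvement) on a tiny hypothetical two-state, two-action MDP. The transition matrices, rewards, and discount factor are invented for illustration and are not the skip-free, matrix-analytic formulation of the paper.

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP (illustrative only; not from the paper).
# P[a] is the transition matrix under action a; R[a] the expected one-step rewards.
P = {
    0: np.array([[0.9, 0.1],
                 [0.4, 0.6]]),
    1: np.array([[0.2, 0.8],
                 [0.5, 0.5]]),
}
R = {
    0: np.array([1.0, 0.0]),
    1: np.array([0.0, 2.0]),
}
gamma = 0.9                  # discount factor
n_states, actions = 2, [0, 1]

def policy_iteration(P, R, gamma):
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
        P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
        r_pi = np.array([R[policy[s]][s] for s in range(n_states)])
        v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
        # Policy improvement: act greedily with respect to v.
        q = np.array([[R[a][s] + gamma * P[a][s] @ v for a in actions]
                      for s in range(n_states)])
        new_policy = q.argmax(axis=1)
        if np.array_equal(new_policy, policy):
            return policy, v   # converged: policy is greedy w.r.t. its own value
        policy = new_policy

policy, v = policy_iteration(P, R, gamma)
```

The point of the matrix-analytic approach in the paper is to exploit skip-free structure so that the evaluation step avoids solving a dense linear system over the full (possibly infinite) state space, which the naive `np.linalg.solve` call above cannot do.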

Item Type: Conference or Workshop Item (UNSPECIFIED)
Date Deposited: 04 Mar 2026 08:39
Last Modified: 18 Apr 2026 06:30
URI: http://eprints.eai.eu/id/eprint/174
