E3S Web Conf.
Volume 229, 2021
The 3rd International Conference of Computer Science and Renewable Energies (ICCSRE’2020)
Article Number: 01047
Number of pages: 11
DOI: https://doi.org/10.1051/e3sconf/202122901047
Published online: 25 January 2021