Claude Sammut is attempting to put together an Encyclopedia of Machine Learning. I volunteered to write one article on Efficient RL in MDPs, which I would like to invite comment on. Is something critical missing?
2 Replies to “Efficient Reinforcement Learning in MDPs”
It’s interesting: you translated “Efficient Reinforcement Learning in MDPs” as “EFFICIENT EXPLORATION IN REINFORCEMENT LEARNING”, but surely there are other aspects to the problem. Thoughts on things potentially missing:
1) Certainly efficient exploration is important, but it is not the be-all and end-all of efficient RL. Some key points I think are worth including:
a) RL with function approximation, possibly using hints in the form of baseline distributions or initial policies. Also: sample complexity and computational complexity with hidden state.
b) Similarly, exploration isn't necessary (as in (a)) if you get help from an expert:
Exploration and Apprenticeship Learning in Reinforcement Learning,
Pieter Abbeel and Andrew Y. Ng.
c) Models aren't necessary: see your own work on bandit-style bounds on the Q-function for exploration (a minimal sketch of the general idea follows below).
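To make (c) concrete, here is a minimal Python sketch of the general idea: act greedily with respect to the Q-function plus a bandit-style upper-confidence bonus that shrinks with visit counts, so under-explored actions look optimistic. The toy chain MDP and all constants below are my own illustrative assumptions, not the specific algorithm referenced above.

```python
import math

n_states, n_actions = 5, 2
gamma, alpha, bonus_scale = 0.9, 0.1, 1.0

Q = [[0.0] * n_actions for _ in range(n_states)]
counts = [[0] * n_actions for _ in range(n_states)]

def step(s, a):
    """Toy chain MDP: action 1 moves right, action 0 resets to state 0.
    Reward 1 only upon reaching the rightmost state."""
    s2 = min(s + 1, n_states - 1) if a == 1 else 0
    r = 1.0 if s2 == n_states - 1 else 0.0
    return s2, r

def ucb_action(s, t):
    # Optimism in the face of uncertainty: an unvisited action gets an
    # infinite bonus, so every action is tried at least once.
    def value(a):
        n = counts[s][a]
        bonus = float("inf") if n == 0 else bonus_scale * math.sqrt(math.log(t + 1) / n)
        return Q[s][a] + bonus
    return max(range(n_actions), key=value)

s = 0
for t in range(10_000):
    a = ucb_action(s, t)
    s2, r = step(s, a)
    counts[s][a] += 1
    # Standard model-free Q-learning update; exploration comes entirely
    # from the confidence bonus, not from a learned model.
    Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
    s = s2

print([[round(q, 2) for q in row] for row in Q])
```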
The translation was something I checked: the editors are indeed thinking about MDPs. Point (c) is already covered, with some discussion at the bottom of page 2.
I’ll add a bit of discussion about apprenticeship learning—it’s a good point.
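To make the apprenticeship-learning point concrete, here is a minimal sketch of why expert help can substitute for exploration: the learner simply imitates the expert's demonstrated actions, so it never needs to try unknown ones itself. The toy expert and the majority-vote cloning below are hypothetical illustrations, not the algorithm of Abbeel and Ng.

```python
from collections import Counter

n_states = 5

def expert_policy(s):
    # Hypothetical expert for a toy chain MDP: always move right (action 1).
    return 1

# Demonstrations: (state, expert action) pairs gathered by watching the
# expert, with no exploration on the learner's part.
demos = [(s, expert_policy(s)) for s in range(n_states) for _ in range(20)]

# Behavioral cloning in its simplest tabular form: imitate the expert's
# majority action in each state where demonstrations exist.
policy = {
    s: Counter(a for st, a in demos if st == s).most_common(1)[0][0]
    for s in range(n_states)
}
print(policy)
```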