Many-Objective Stochastic Path Finding Using Reinforcement Learning


In this paper, we investigate solutions to path finding problems with many conflicting objectives and introduce a new model-free many-objective reinforcement learning algorithm, called Voting Q-learning, that is capable of finding a set of optimal policies in an initially unknown, stochastic environment with several conflicting objectives. Current methods for solving this type of problem rely on Pareto dominance to determine which actions are optimal; this approach decreases in effectiveness as the number of objectives increases, ultimately selecting actions at random in environments where all potential actions are Pareto optimal. Alternative methods require interaction with a decision maker or a priori knowledge of the problem structure to guide the search towards optimal solutions, making them unsuitable for fully autonomous use or for problems where the preferred solutions are initially unknown. As an alternative, we propose using voting methods from social choice theory to determine a set of Pareto optimal policies by aggregating the preferences that each objective's evaluation of environment conditions induces over the available actions. We demonstrate the effectiveness of this method on multiple deterministic and stochastic many-objective path finding problems that are solved optimally without any advance knowledge of the problem or interaction with a decision maker, showing that our approach is the first to provide optimal performance for an autonomous, intelligent system operating in a many-objective environment.
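The claim that Pareto dominance loses discriminative power in many-objective settings can be checked empirically: as the number of objectives grows, nearly every randomly drawn value vector becomes non-dominated, leaving dominance-based action selection with no basis to choose. The sketch below (illustrative only; `pareto_fraction` and its parameters are our own naming, not from the paper) estimates the fraction of uniformly random points that are Pareto optimal.

```python
import random

def is_dominated(p, q):
    """True if q Pareto-dominates p: q is >= p in every objective
    and strictly > in at least one (maximization convention)."""
    return all(qi >= pi for pi, qi in zip(p, q)) and \
           any(qi > pi for pi, qi in zip(p, q))

def pareto_fraction(n_points, n_objectives, seed=0):
    """Fraction of uniformly random value vectors that are Pareto optimal."""
    rng = random.Random(seed)
    points = [tuple(rng.random() for _ in range(n_objectives))
              for _ in range(n_points)]
    optimal = [p for p in points
               if not any(is_dominated(p, q) for q in points if q is not p)]
    return len(optimal) / n_points

# With 2 objectives only a thin frontier is non-dominated; with 10,
# almost every point is, so dominance alone cannot rank actions.
for m in (2, 5, 10):
    print(m, "objectives:", round(pareto_fraction(200, m), 2))
```

The 2-objective fraction is small (on the order of a few percent for 200 points), while the 10-objective fraction approaches 1, matching the degradation described above.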
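To make the social-choice idea concrete, one simple voting rule that aggregates per-objective preferences over actions is the Borda count: each objective ranks the candidate actions by its own value estimates and awards points by rank, and the action with the highest total wins. This is only a minimal sketch of the general approach described above; the abstract does not state which voting rule Voting Q-learning uses, and the names `borda_vote` and `q_values` are our own illustrative assumptions.

```python
def borda_vote(q_values):
    """Select an action by Borda count over per-objective value estimates.

    q_values[i][a] = estimated return for objective i when taking action a.
    Each objective ranks all actions; an action earns points equal to its
    rank position (0 for the worst, n-1 for the best), summed over objectives.
    """
    n_actions = len(q_values[0])
    scores = [0] * n_actions
    for obj_q in q_values:
        # Sort action indices worst-to-best for this objective.
        ranked = sorted(range(n_actions), key=lambda a: obj_q[a])
        for points, action in enumerate(ranked):
            scores[action] += points
    return max(range(n_actions), key=lambda a: scores[a])

# Three objectives voting over four candidate actions (made-up values):
q = [[0.9, 0.2, 0.5, 0.1],   # objective 1 most prefers action 0
     [0.1, 0.3, 0.8, 0.2],   # objective 2 most prefers action 2
     [0.4, 0.1, 0.7, 0.3]]   # objective 3 most prefers action 2
print(borda_vote(q))  # → 2
```

Unlike a pure dominance test, a positional rule like this always produces a ranking of actions, even when every candidate action is Pareto optimal, which is the regime where dominance-based methods fall back to random choice.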