There is a disconnect between the relatively heavy computational requirements of computer vision algorithms and the limited resources available on the mobile platforms, such as phones or autonomous drones, on which they run. In this work we propose to bridge this gap with a novel Markov Decision Process framework that adapts the parameters of the vision algorithms to the incoming video data rather than fixing them a priori.
We evaluate our framework on an object detection and tracking task, demonstrating significant reductions in energy consumption without considerable loss of accuracy on a combination of publicly available and novel datasets.