To ensure safety in autonomous driving, object detection must run in real time. However, the GPUs deployed in self-driving cars must be inexpensive and power-efficient, which leaves currently used object detection techniques unable to meet this requirement.
A recent paper proposes combining network enhancement and pruning search with reinforcement learning. In this approach, the framework automatically generates unified schemes of network enhancement and pruning, and the measured performance of models generated under these schemes is fed back to train the generator.
The system is flexible and can be customized down to the layer level. It is also compiler-aware: it accounts for the effects of compiler optimizations during search-space exploration. Experiments show that real-time 3D object detection can be achieved on devices such as the Samsung Galaxy S20, with detection performance comparable to state-of-the-art work.
3D object detection is an important task, especially in the autonomous driving domain. However, it is challenging to achieve real-time performance with the limited computation and memory resources of the edge-computing devices in self-driving cars. To this end, we propose a compiler-aware unified framework that incorporates network enhancement and pruning search with reinforcement learning, enabling real-time inference of 3D object detection on resource-limited edge-computing devices. Specifically, a generator Recurrent Neural Network (RNN) automatically provides the unified scheme for both network enhancement and pruning search, without human expertise or assistance, and the evaluated performance of the unified schemes is fed back to train the generator RNN. Experimental results demonstrate that the proposed framework is the first to achieve real-time 3D object detection on a mobile device (the Samsung Galaxy S20 phone) with competitive detection performance.
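The generate-evaluate-feedback loop described above can be sketched as a simplified REINFORCE search. This is only an illustrative toy, not the paper's implementation: a per-layer softmax policy stands in for the generator RNN, the search space is reduced to per-layer pruning ratios, and the reward is simulated rather than measured from a pruned, compiler-optimized network on a real device. All names and constants below are hypothetical.

```python
import math
import random

random.seed(0)

# Toy search space: for each of 3 layers, pick one candidate pruning ratio.
LAYERS = 3
RATIOS = [0.0, 0.25, 0.5, 0.75]

# Policy: per-layer logits over candidate ratios (a simplified stand-in
# for the paper's generator RNN).
logits = [[0.0] * len(RATIOS) for _ in range(LAYERS)]

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def sample_scheme():
    """Sample one unified scheme: an index into RATIOS per layer."""
    scheme = []
    for layer in range(LAYERS):
        probs = softmax(logits[layer])
        r, acc = random.random(), 0.0
        for i, p in enumerate(probs):
            acc += p
            if r <= acc:
                scheme.append(i)
                break
        else:
            scheme.append(len(probs) - 1)
    return scheme

def reward(scheme):
    """Simulated reward: accuracy proxy minus latency penalty.
    In the paper this would come from evaluating the pruned model's
    accuracy and its compiler-optimized on-device latency."""
    ratios = [RATIOS[i] for i in scheme]
    accuracy = 1.0 - 0.5 * sum(r * r for r in ratios) / LAYERS
    latency = sum(1.0 - r for r in ratios) / LAYERS  # denser layers run slower
    return accuracy - 0.5 * latency

LR = 0.3
baseline = 0.0
for step in range(400):
    scheme = sample_scheme()
    r = reward(scheme)
    baseline = 0.9 * baseline + 0.1 * r  # moving-average reward baseline
    advantage = r - baseline
    # REINFORCE update: move logits toward sampled actions when the
    # advantage is positive, away from them when it is negative.
    for layer, choice in enumerate(scheme):
        probs = softmax(logits[layer])
        for i in range(len(RATIOS)):
            grad = (1.0 if i == choice else 0.0) - probs[i]
            logits[layer][i] += LR * advantage * grad

# Read out the policy's preferred per-layer pruning ratios.
best = [RATIOS[max(range(len(RATIOS)), key=lambda i: logits[l][i])]
        for l in range(LAYERS)]
print(best)
```

The framework's key ingredients map onto this sketch: sampling a scheme corresponds to the generator RNN emitting per-layer decisions, and the baseline-corrected reward update corresponds to feeding evaluated performance back into generator training.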