Tag Archives: Energy Optimization

Profiling the energy consumption of AGVs

J. Leng, J. Peng, J. Liu, Y. Zhang, J. Ji and Y. Zhang, Profiling Power Consumption in Low-Speed Autonomous Guided Vehicles, IEEE Robotics and Automation Letters, vol. 9, no. 7, pp. 6027-6034, July 2024. DOI: 10.1109/LRA.2024.3396051.

The increasing demand for automation has led to a rise in the use of low-speed autonomous guided vehicles (AGVs). However, AGVs rely on batteries for their power source, which limits their operational time and affects their overall performance. To optimize their energy usage and enhance their battery life, it is crucial to understand the power consumption behavior of AGVs. This letter presents a comprehensive study on profiling power consumption in low-speed AGVs. Previous power consumption estimation models for AGVs were mostly based on physical formulas. We introduce a data-driven power consumption estimation model for each of the main components of the AGV, including the chassis, computing platform, sensors and communication devices. Through three actual driving tests, we show that the MAPE in estimating instantaneous power is 4.8%, an 8.1% improvement over a physical model. Moreover, the MAPE for energy consumption is only 1.5%, which is 6.6% better than the physical model. To demonstrate the utility of our power consumption estimation models, we conduct two case studies: energy-efficient path planning and energy-efficient perception task interval adjustment. This study demonstrates that integrating the power consumption estimation model into path planning reduces energy consumption by over 12%. Additionally, adjusting the detection interval lowers computational energy consumption by 10.1%.
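The core idea of a data-driven power model can be illustrated with a small sketch: fit per-component power as a function of measured signals (here, speed and acceleration for the chassis), add constant loads for the other components, and integrate the estimated power over a trajectory to get energy. All features, coefficients and constants below are illustrative placeholders, not the paper's actual model.

```python
import numpy as np

# Hypothetical sketch of a data-driven per-component power model.
# Chassis power is regressed on speed and speed*acceleration; compute
# and sensor loads are treated as constants (all values are placeholders).

rng = np.random.default_rng(0)
n = 200
speed = rng.uniform(0.0, 2.0, n)        # m/s (low-speed AGV)
accel = rng.uniform(-0.5, 0.5, n)       # m/s^2

# Synthetic "measured" chassis power: base load + speed + speed*accel + noise
p_chassis = 20.0 + 35.0 * speed + 50.0 * speed * accel + rng.normal(0, 1.0, n)

# Ordinary least-squares fit on the design matrix [1, v, v*a]
X = np.column_stack([np.ones(n), speed, speed * accel])
coef, *_ = np.linalg.lstsq(X, p_chassis, rcond=None)

def estimate_power(v, a, p_compute=15.0, p_sensors=8.0):
    """Instantaneous power estimate (W): learned chassis term plus
    constant compute/sensor loads (constants are assumptions)."""
    chassis = coef[0] + coef[1] * v + coef[2] * v * a
    return chassis + p_compute + p_sensors

# Energy over a trajectory = integral of power over time (trapezoidal rule)
t = np.linspace(0.0, 10.0, 101)          # 10 s trajectory
v_traj = 1.0 + 0.5 * np.sin(t)
a_traj = 0.5 * np.cos(t)
p_traj = estimate_power(v_traj, a_traj)
energy_j = float(np.sum((p_traj[:-1] + p_traj[1:]) / 2.0 * np.diff(t)))
print(f"mean power: {p_traj.mean():.1f} W, energy: {energy_j:.0f} J")
```

A planner can then compare candidate paths by this predicted energy rather than by distance alone, which is how an estimation model of this kind plugs into energy-efficient path planning.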

Q-learning with a variation of ε-greedy to learn the optimal management of energy in autonomous vehicles navigation

Mojgan Fayyazi, Monireh Abdoos, Duong Phan, Mohsen Golafrouz, Mahdi Jalili, Reza N. Jazar, Reza Langari, Hamid Khayyam, Real-time self-adaptive Q-learning controller for energy management of conventional autonomous vehicles, Expert Systems with Applications, Volume 222, 2023 DOI: 10.1016/j.eswa.2023.119770.

Reducing emissions and energy consumption of autonomous vehicles is critical in the modern era. This paper presents an intelligent energy management system based on Reinforcement Learning (RL) for conventional autonomous vehicles. Furthermore, in order to improve the efficiency, a new exploration strategy is proposed to replace the traditional decayed ε-greedy strategy in the Q-learning algorithm associated with RL. Unlike traditional Q-learning algorithms, the proposed self-adaptive Q-learning (SAQ-learning) can be applied in real-time. The learning capability of the controllers can help the vehicle deal with unknown situations in real-time. Numerical simulations show that compared to other controllers, Q-learning and SAQ-learning controllers can generate the desired engine torque based on the vehicle road power demand and control the air/fuel ratio by changing the throttle angle efficiently in real-time. Also, the proposed real-time SAQ-learning is shown to improve the operational time by 23% compared to standard Q-learning. Our simulations reveal the effectiveness of the proposed control system compared to other methods, namely dynamic programming and fuzzy logic methods.
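The contrast between a decayed ε-greedy schedule and an exploration rate that adapts online can be sketched with tabular Q-learning on a toy chain MDP. The adaptive rule below (ε driven by the TD-error magnitude) is a generic stand-in for illustration, not the paper's SAQ-learning controller, and the environment is invented for the example.

```python
import numpy as np

# Toy chain MDP: states 0..5, reward 1.0 on reaching the goal state 5.
N_STATES, GOAL = 6, 5
ACTIONS = (-1, +1)               # move left / move right

def step(s, a):
    s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
    r = 1.0 if s2 == GOAL else 0.0
    return s2, r, s2 == GOAL

def q_learning(adaptive, episodes=300, alpha=0.5, gamma=0.9, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((N_STATES, 2))
    eps = 1.0
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[s]))
            s2, r, done = step(s, a)
            td = r + gamma * np.max(Q[s2]) * (not done) - Q[s, a]
            Q[s, a] += alpha * td
            if adaptive:
                # explore more while TD error is large, less once it shrinks
                # (a generic adaptive rule, assumed for illustration)
                eps = min(1.0, max(0.01, abs(td)))
            s = s2
        if not adaptive:
            eps = max(0.01, eps * 0.99)   # traditional decayed epsilon
    return Q

Q_std = q_learning(adaptive=False)
Q_adp = q_learning(adaptive=True)
# Both variants should learn to move right from every non-goal state
print([int(np.argmax(Q_adp[s])) for s in range(GOAL)])
```

In the energy-management setting, the state would encode quantities such as road power demand, the actions would adjust engine torque or throttle angle, and the reward would penalize fuel use; the exploration-adaptation mechanism is what distinguishes SAQ-learning from the standard decayed schedule.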