Adaptive Traffic Signal Control of Bottleneck Subzone based on Grey Qualitative Reinforcement Learning Algorithm

Junping Xiang, Zonghai Chen

2015

Abstract

A Grey Qualitative Reinforcement Learning algorithm is presented in this paper to realize adaptive signal control of a bottleneck subzone, which is described as a nonlinear optimization problem. To handle the uncertainties in the traffic flow system, a grey theory model and a qualitative method are used to express the sensor data. To avoid deducing the functional relationship between the traffic flow and the timing plan, a grey reinforcement learning algorithm, which is the main innovation of this paper, is proposed to seek the solution. To enhance the generalization capability of the system, avoid the "curse of dimensionality", and improve the convergence speed, a BP neural network is used to approximate the Q-function. We conduct three simulation experiments (calibrated with real data) using four evaluation indicators for comparison and analysis. Simulation results show that the proposed method can significantly improve the traffic situation of the bottleneck subzone, and that the algorithm has good robustness and low noise sensitivity.
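
The abstract names three ingredients: a grey/qualitative encoding of sensor data, a reinforcement learning loop over candidate timing plans, and a BP (backpropagation) neural network approximating the Q-function. The sketch below illustrates only the latter two pieces and is not the authors' implementation: the state dimension, the three candidate timing plans, the queue-length reward, and all hyperparameters are hypothetical, and the paper's grey qualitative state encoding is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a state of 4 queue-length features (one per approach)
# and 3 candidate signal timing plans (discrete actions).
STATE_DIM, N_ACTIONS, HIDDEN = 4, 3, 16
ALPHA, GAMMA, EPSILON = 0.01, 0.9, 0.1

# One-hidden-layer BP (backpropagation) network approximating Q(s, .).
W1 = rng.normal(0, 0.1, (STATE_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (HIDDEN, N_ACTIONS))
b2 = np.zeros(N_ACTIONS)

def q_values(s):
    h = np.tanh(s @ W1 + b1)       # hidden activations
    return h, h @ W2 + b2          # Q estimates, one per timing plan

def choose_action(s):
    # epsilon-greedy over the candidate timing plans
    if rng.random() < EPSILON:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(q_values(s)[1]))

def update(s, a, r, s_next):
    # One backprop step toward the TD target r + gamma * max_a' Q(s', a').
    global W1, b1, W2, b2
    h, q = q_values(s)
    target = r + GAMMA * np.max(q_values(s_next)[1])
    err = np.zeros(N_ACTIONS)
    err[a] = q[a] - target          # TD error on the taken action only
    # gradients through the linear output layer and the tanh hidden layer
    dW2 = np.outer(h, err)
    db2 = err
    dh = (W2 @ err) * (1 - h ** 2)
    dW1 = np.outer(s, dh)
    db1 = dh
    W2 -= ALPHA * dW2; b2 -= ALPHA * db2
    W1 -= ALPHA * dW1; b1 -= ALPHA * db1

# Toy usage with a stand-in dynamics model: the reward is the negative
# total queue length after applying the chosen plan.
s = rng.random(STATE_DIM)
for step in range(1000):
    a = choose_action(s)
    s_next = np.clip(s + rng.normal(0, 0.05, STATE_DIM) - 0.02 * a, 0, 1)
    r = -float(np.sum(s_next))      # shorter queues -> higher reward
    update(s, a, r, s_next)
    s = s_next
```

Approximating Q with a small network rather than a lookup table is what lets the method sidestep the "curse of dimensionality" the abstract mentions: nearby traffic states share parameters, so experience generalizes across states that a table would treat as unrelated.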

References

  1. Ahmad A., Arshad R., Mahmud S. A., Khan G. M. and Hamed S. A., 2014. Earliest-Deadline-Based Scheduling to Reduce Urban Traffic Congestion. IEEE Transactions on Intelligent Transportation Systems. 15(4): 1510-1526.
  2. Baird L., 1995. Residual algorithms: Reinforcement learning with function approximation. In Proc. Int. Conf. Mach. Learn. 30-37.
  3. Chun-gui L., Meng W., Shu-hong Y., and Zeng-Fang Z., 2009. Urban traffic signal learning control using SARSA algorithm based on adaptive RBF network. In Proc. ICMTMA'09, International Conference on Measuring Technology and Mechatronics Automation. 3: 658-661.
  4. Chunlin C., Daoyi D., Zonghai C., Haibo W., 2008. Grey Systems for Intelligent Sensors and Information Processing. Journal of Systems Engineering and Electronics. 19(4): 659-665.
  5. Chunlin C., Daoyi D., Zonghai C., Haibo W., 2008. Qualitative control for mobile robot navigation based on reinforcement learning and grey system. Mediterranean Journal of Measurement and Control. 4(1):1-5.
  6. Choy M. C., Srinivasan D. and Cheu R. L., 2006. Neural Networks for Continuous Online Learning and Control. IEEE Transactions on Neural Networks. 7(3): 261-272.
  7. Julong D., 1985. Grey Control System. Huazhong University of Science and Technology Press. Wuhan.
  8. Loch J. and Singh S., 1998. Using eligibility traces to find the best memoryless policy in partially observable Markov decision processes. In Proc. 15th Int. Conf. Mach. Learn. 323-331.
  9. Prashanth L. and Bhatnagar S., 2011. Reinforcement learning with average cost for adaptive control of traffic lights at intersections. In Proc. 14th Int. IEEE Conf. ITSC. 1640-1645.
  10. Shen G. J. and Kong X. J., 2009. Study on road network traffic coordination control technique with bus priority. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews. 39(3): 343-351.
  11. Shujie L., Zonghai C., 2011. Analysis and Prospect of Qualitative-Quantitative Representation Method of Uncertain Knowledge. System Simulation Technology & Application. 13: 1095-1103.
  12. Sutton R. S., McAllester D., Singh S., and Mansour Y., 2000. Policy gradient methods for reinforcement learning with function approximation. Adv. Neural Inf. Process. Syst. 12: 1057-1063.
  13. Teo K. T. K., Kow W. Y. and Chin Y.K, 2010. Optimization of Traffic Flow within an Urban Traffic Light Intersection with Genetic Algorithm. Second International Conference on Computational Intelligence, Modelling and Simulation. 172-177.
  14. Wei W., Zhang Y., Mbede J., Zhang Z., and Song J., 2001. Traffic signal control using fuzzy logic and MOGA. In Proc. IEEE Int. Conf. Syst., Man, Cybern. 2: 1335-1340.
  15. Yuanliang H., Zonghai C., Wangshen G., 2004. Grey Qualitative Simulation. The Journal of Grey System. 16(1): 5-20.
  16. Yujie D., Jinzong H., Dongbin Z. and Fenghua Z., 2011. Neural Network Based Online Traffic Signal Controller Design with Reinforcement Training. 14th International IEEE Conference on Intelligent Transportation Systems, Washington. 1045-1050.


Paper Citation


in Harvard Style

Xiang J. and Chen Z. (2015). Adaptive Traffic Signal Control of Bottleneck Subzone based on Grey Qualitative Reinforcement Learning Algorithm. In Proceedings of the International Conference on Pattern Recognition Applications and Methods - Volume 2: ICPRAM, ISBN 978-989-758-077-2, pages 295-301. DOI: 10.5220/0005269302950301


in Bibtex Style

@conference{icpram15,
author={Junping Xiang and Zonghai Chen},
title={Adaptive Traffic Signal Control of Bottleneck Subzone based on Grey Qualitative Reinforcement Learning Algorithm},
booktitle={Proceedings of the International Conference on Pattern Recognition Applications and Methods - Volume 2: ICPRAM},
year={2015},
pages={295-301},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0005269302950301},
isbn={978-989-758-077-2},
}


in EndNote Style

TY - CONF
JO - Proceedings of the International Conference on Pattern Recognition Applications and Methods - Volume 2: ICPRAM
TI - Adaptive Traffic Signal Control of Bottleneck Subzone based on Grey Qualitative Reinforcement Learning Algorithm
SN - 978-989-758-077-2
AU - Xiang J.
AU - Chen Z.
PY - 2015
SP - 295
EP - 301
DO - 10.5220/0005269302950301
ER -