Navigation line detection algorithm based on Haar-like features and improved YOLOv4 in complex environments
DOI: https://doi.org/10.15837/ijccc.2022.6.4910
Keywords: Haar-like feature, YOLOv4, image enhancement, driverless, visual navigation
Abstract
To improve the detection accuracy of the navigation line by the unmanned automatic marking vehicle (UAMV) in complex construction environments, and to solve the problem of unqualified road markings drawn by the UAMV due to inaccurate detection during construction, this paper proposes a navigation line detection algorithm named YOLOv4-HR, based on improved YOLOv4 and improved Haar-like features. Firstly, an image enhancement algorithm based on improved Haar-like features is proposed; it enhances the images of the training set so that they carry more semantic information, which improves the generalization ability of the network. Secondly, a multi-scale feature extraction network is added to the YOLOv4 network, giving the model a stronger ability to learn fine details and improving detection accuracy. Finally, a verification experiment is carried out on a self-built data set. The experimental results show that, compared with the original YOLOv4 network, the proposed method improves the AP value by 14.3% and the recall by 11.89%. The influence of factors such as the environment on navigation line detection is reduced, and the effectiveness of navigation line detection in the visual navigation of the UAMV is clearly improved.
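The abstract gives no implementation details, so the following Python sketch only illustrates the classical building block the method starts from: a two-rectangle Haar-like feature evaluated on an integral image, of the kind a painted navigation-line edge would trigger. The window coordinates, feature layout, and synthetic test image are assumptions made for the example; this is not the authors' improved Haar-like enhancement nor the YOLOv4-HR network.

# Illustrative sketch only (not the paper's improved Haar-like enhancement):
# a classical two-rectangle Haar-like feature evaluated via an integral image.
import numpy as np

def integral_image(gray: np.ndarray) -> np.ndarray:
    # Summed-area table with a zero row/column prepended, so rectangle sums
    # can be read with four lookups and no boundary checks.
    ii = gray.astype(np.float64).cumsum(axis=0).cumsum(axis=1)
    return np.pad(ii, ((1, 0), (1, 0)), mode="constant")

def rect_sum(ii: np.ndarray, y: int, x: int, h: int, w: int) -> float:
    # Sum of the h-by-w rectangle whose top-left pixel is (y, x).
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_vertical_edge(ii: np.ndarray, y: int, x: int, h: int, w: int) -> float:
    # Left-half minus right-half intensity; a large magnitude indicates a
    # vertical edge such as the boundary of a painted navigation line.
    half = w // 2
    return rect_sum(ii, y, x, h, half) - rect_sum(ii, y, x + half, h, half)

if __name__ == "__main__":
    # Synthetic 64x64 "road" image with one bright vertical stripe as a crude line.
    img = np.zeros((64, 64), dtype=np.uint8)
    img[:, 30:34] = 255
    ii = integral_image(img)
    print(haar_vertical_edge(ii, y=10, x=26, h=24, w=8))  # strong response at the stripe edge
    print(haar_vertical_edge(ii, y=10, x=2, h=24, w=8))   # near-zero response on plain road

In a Viola-Jones style pipeline, many such responses at multiple scales and positions would be combined by a boosted cascade; a single response is shown here only to illustrate the feature itself.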
License
Copyright (c) 2022 Shenqi Gao, Shuxin Wang, Weigang Pan, Mushu Wang, Song Gao
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
ONLINE OPEN ACCESS: Access to the full text of each article and each issue is allowed for free under the Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license.
You are free to:
-Share: copy and redistribute the material in any medium or format;
-Adapt: remix, transform, and build upon the material.
The licensor cannot revoke these freedoms as long as you follow the license terms.
DISCLAIMER: The author(s) of each article appearing in International Journal of Computers Communications & Control is/are solely responsible for the content thereof; the publication of an article shall not constitute or be deemed to constitute any representation by the Editors or Agora University Press that the data presented therein are original, correct or sufficient to support the conclusions reached or that the experiment design or methodology is adequate.