Proceedings of the 2024 10th International Conference on Architectural, Civil and Hydraulic Engineering (ICACHE 2024)

Progressive Automatic Method for Annotation of Concrete Crack Images

Authors
Donghui Xie1, Dandan Shi2, Qingning Chen1, *
1Ningbo Yuanshui Co., Ltd, Ningbo, China
2Ningbo Reservoir Management Center, Ningbo, China
*Corresponding author. Email: seventyoranges@163.com
Available Online 3 March 2025.
DOI
10.2991/978-94-6463-658-1_52
Keywords
Cracks; automatic annotation; progressive; optimization; contour extraction; deep learning
Abstract

Cracks are the most common form of deterioration in concrete dams. Many projects still rely on traditional manual measurement for crack detection, which is inefficient and subjective. To improve the accuracy and efficiency of crack detection, this paper presents a progressive automatic annotation algorithm that annotates crack sample images in a three-stage process. Firstly, black lines are drawn on white paper to simulate cracks, and edge detection is applied to find the crack contours. Secondly, the detected crack contours and sample information are integrated to generate an annotation file for training, yielding the first-order weight file. Thirdly, the Euclidean distance between the RGB components of pixels in the detection area and those of the background area is calculated to optimize the mask and extract the crack coordinates; an 8-neighbor mask and the shared number at each coordinate point are then used to systematically extract the crack contours. The crack sample information is integrated to automatically generate image annotations for training, yielding the second-order weight file for batch detection of concrete cracks. Finally, after the mask contours are optimized and extracted, the crack sample information is integrated to generate image annotations for training and subsequent batch detection, producing the third-order weight file. On images of black lines on white paper, black cracks on white-rendered concrete, and cracks in concrete, the trained Mask region-based convolutional neural network (Mask R-CNN) model achieved comprehensive evaluation indices of 95.2%, 83.3%, and 79.2%, respectively. These high detection rates show that the model can be used effectively for fast detection of cracks in concrete structures.
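Two of the steps described in the abstract can be illustrated concretely: selecting crack pixels by their RGB Euclidean distance from the background color, and marking a pixel as a contour point when any of its 8 neighbors falls outside the mask. The sketch below is a minimal NumPy illustration of those two ideas under stated assumptions (a uniform white background and a fixed distance threshold); the function names, the threshold value, and the synthetic test image are illustrative, not taken from the paper.

```python
import numpy as np

def rgb_distance_mask(img, bg_color, threshold=60.0):
    """Keep pixels whose RGB Euclidean distance from the background
    color exceeds the threshold (candidate crack pixels)."""
    diff = img.astype(np.float32) - np.asarray(bg_color, np.float32)
    return np.sqrt((diff ** 2).sum(axis=-1)) > threshold

def contour_from_mask(mask):
    """A crack pixel is a contour point if at least one of its
    8 neighbors is a background pixel."""
    padded = np.pad(mask, 1, constant_values=False)
    h, w = mask.shape
    neighbour_is_bg = np.zeros((h, w), dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue  # skip the center pixel itself
            neighbour_is_bg |= ~padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return mask & neighbour_is_bg

# Synthetic example: a dark "crack" band on a white canvas.
canvas = np.full((20, 20, 3), 255, np.uint8)
canvas[8:12, 2:18] = 0                      # 4-pixel-wide dark band
mask = rgb_distance_mask(canvas, bg_color=(255, 255, 255))
contour = contour_from_mask(mask)           # outline of the band only
```

Because the check is purely local, interior pixels of a wide crack (all 8 neighbors inside the mask) are excluded, so only the outline survives, which is the input a polygon-style annotation file needs.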

Copyright
© 2025 The Author(s)
Open Access
Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.


Volume Title
Proceedings of the 2024 10th International Conference on Architectural, Civil and Hydraulic Engineering (ICACHE 2024)
Series
Advances in Engineering Research
Publication Date
3 March 2025
ISBN
978-94-6463-658-1
ISSN
2352-5401
DOI
10.2991/978-94-6463-658-1_52

Cite this article

TY  - CONF
AU  - Donghui Xie
AU  - Dandan Shi
AU  - Qingning Chen
PY  - 2025
DA  - 2025/03/03
TI  - Progressive Automatic Method for Annotation of Concrete Crack Images
BT  - Proceedings of the 2024 10th International Conference on Architectural, Civil and Hydraulic Engineering (ICACHE 2024)
PB  - Atlantis Press
SP  - 515
EP  - 531
SN  - 2352-5401
UR  - https://doi.org/10.2991/978-94-6463-658-1_52
DO  - 10.2991/978-94-6463-658-1_52
ID  - Xie2025
ER  -