Wind turbine defect detection using deep learning
Corresponding Author:
Deepa Somasundaram
Department of Electrical and Electronics Engineering, Panimalar Engineering College
Chennai, Tamil Nadu, India
Email: [email protected]
1. INTRODUCTION
Wind energy is a rapidly growing sector in the renewable energy landscape, playing a critical role in
reducing carbon emissions and promoting sustainable energy generation [1]-[3]. Wind turbines are key assets
in this domain, but their complex structures and constant exposure to harsh environmental conditions make
them prone to damage. Regular inspection and maintenance of wind turbines are essential to ensure their
operational efficiency and prevent unexpected failures. Traditional inspection methods, such as manual
climbing or using ground-based equipment, pose several challenges, including safety risks, high costs, and
limited detection accuracy [4]-[7]. As such, innovative solutions are needed to improve the efficiency and
safety of wind turbine maintenance operations.
Unmanned aerial vehicles (UAVs) have emerged as an innovative tool for wind turbine inspection.
UAVs can fly close to turbine structures, capturing high-resolution images of blades, towers, and nacelles,
even in challenging locations that are difficult for humans to reach [8]-[10]. This technology allows for safer,
faster, and more comprehensive inspections compared to traditional methods. However, as noted
by Lei et al. [1], while UAVs can collect a significant amount of visual data, the manual review of this data is
still time-consuming and requires expert analysis to identify potential issues, such as cracks or corrosion on
turbine blades. The need for automating this process has led to the exploration of advanced algorithms in
machine learning and computer vision [11]-[13].
Fault diagnosis using machine learning algorithms is gaining traction in wind energy research. For
instance, Qu et al. [2] developed a fault detection system based on fuzzy logic for turbines, demonstrating the
potential of AI-driven techniques in this field. Similarly, the study by Rezaei et al. [3] employed modal-based
damage identification to address nonlinearities in wind turbine blades. While these studies primarily focused
on the mechanical aspects of wind turbines, they highlight the growing role of AI in predictive maintenance
and fault detection [14]-[19]. Building on these concepts, integrating object detection algorithms into UAV-
based wind turbine inspection offers the potential to automate the identification of structural defects, thus
reducing human intervention.
Object detection models, such as the You Only Look Once (YOLO) series, have revolutionized real-
time object detection by providing fast and accurate results. YOLOv8, the latest iteration, combines speed
with improved detection accuracy, making it suitable for applications like wind turbine defect detection.
According to Sun et al. [4], identifying damage in turbine blades using advanced machine learning
techniques can significantly enhance maintenance processes by pinpointing issues early. Leveraging
YOLOv8 in this context could automate the detection of critical defects, such as cracks, corrosion, or blade
misalignments, based on the visual data captured by UAVs, thus improving the efficiency of turbine
inspections [20]-[23].
Recent advancements in AI, particularly in the fields of object detection and anomaly detection,
have shown promising results in wind turbine maintenance. Wang et al. [5] proposed a two-stage anomaly
detection model to enhance fault detection in wind turbines, illustrating the benefits of combining machine
learning with real-time monitoring systems. By training object detection algorithms like YOLOv8 on wind
turbine images, it becomes possible to automate the identification of common issues such as blade damage or
structural wear. This approach not only reduces the time and expertise required for manual inspections but
also increases the overall reliability of the maintenance process.
Several previous studies have explored the integration of drone technology and computer vision for
turbine inspection. For example, Foster et al. [10] demonstrated the potential of drone footage for detecting
surface damage on turbines, showing the effectiveness of UAVs in capturing relevant data. However, the
application of cutting-edge object detection models like YOLOv8 for this specific task is still in its early
stages, making it a promising area of research. By refining these algorithms for wind turbine inspection,
operators can achieve higher precision and better maintenance outcomes, leading to reduced downtime and
enhanced energy output [24]-[25].
In summary, UAV-based inspection, combined with advanced object detection algorithms like
YOLOv8, offers a powerful solution to the challenges associated with traditional wind turbine maintenance.
This paper seeks to explore the potential of YOLOv8 in automating the defect detection process for wind
turbines. By leveraging recent developments in AI and drone technology, this approach could significantly
reduce the time, cost, and safety risks involved in turbine inspection, while improving the reliability of
renewable energy generation.
2. METHODOLOGY
The object detection system for wind turbines was implemented using the YOLOv8 algorithm. The
dataset consisted of wind turbine images with various defects, including cracks and surface corrosion. The
images were annotated to define the bounding boxes around turbine components and labeled for defect types.
The dataset was divided into training, validation, and testing sets, with a focus on training YOLOv8 to detect
defects efficiently.
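To make the annotation and split concrete, the following Python sketch shows how manually drawn corner boxes could be converted into the normalized label format commonly used for YOLO-style training; the class IDs, image size, and box coordinates are illustrative assumptions rather than details taken from the dataset described here.

# Hedged sketch: convert a corner-style annotation (x1, y1, x2, y2) into a
# YOLO-format label line "class x_center y_center width height", with the
# coordinates normalized by the image size. Values below are assumed.
def to_yolo_label(class_id, box, img_w, img_h):
    x1, y1, x2, y2 = box
    x_center = (x1 + x2) / 2.0 / img_w
    y_center = (y1 + y2) / 2.0 / img_h
    width = (x2 - x1) / img_w
    height = (y2 - y1) / img_h
    return f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

# Example: a hypothetical "crack" box (class 0) on a 1920x1080 image.
print(to_yolo_label(0, (850, 400, 1010, 520), 1920, 1080))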
The YOLOv8 model was configured using pre-trained weights and fine-tuned on the wind turbine
dataset. The data preprocessing included resizing images to a standard size to ensure compatibility with the
YOLO architecture, while maintaining the aspect ratios of the turbine components. Data augmentation
techniques, such as random rotation, flipping, and scaling, were applied to increase the variability of the
training set and improve model robustness.
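The preprocessing and augmentation steps can be sketched as follows. This is a minimal illustration assuming an OpenCV pipeline, a 640x640 input size, and a ±15° rotation range, none of which are specified in the paper.

# Sketch of the preprocessing described above: aspect-ratio-preserving resize
# (letterboxing) to an assumed YOLO input size, pixel normalization to [0, 1],
# and simple random rotation/flip augmentation.
import cv2
import numpy as np

def letterbox_resize(image, target=640, pad_value=114):
    # Resize so the longer side equals `target`, then pad to a square canvas;
    # this preserves the aspect ratio of the turbine components.
    h, w = image.shape[:2]
    scale = target / max(h, w)
    resized = cv2.resize(image, (int(w * scale), int(h * scale)))
    pad_h, pad_w = target - resized.shape[0], target - resized.shape[1]
    padded = cv2.copyMakeBorder(resized, 0, pad_h, 0, pad_w,
                                cv2.BORDER_CONSTANT, value=(pad_value,) * 3)
    return padded.astype(np.float32) / 255.0   # normalize pixel values to [0, 1]

def augment(image, max_angle=15):
    # Random rotation and horizontal flip; in practice the bounding boxes must
    # be transformed with the same parameters (omitted here for brevity).
    angle = np.random.uniform(-max_angle, max_angle)
    h, w = image.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(image, M, (w, h))
    return cv2.flip(rotated, 1) if np.random.rand() < 0.5 else rotated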
The training process utilized a batch size of 1 and a learning rate optimized through grid search. The
model was trained for five epochs with a GPU accelerator to speed up computations. YOLOv8's architecture,
which combines convolutional feature extraction with an anchor-free detection head, enables it to quickly identify objects in
real time. After training, the best-performing model was saved and evaluated using the validation dataset to
measure its performance metrics, including precision, recall, and mean average precision (mAP).
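A minimal sketch of this fine-tuning and validation step, assuming the Ultralytics YOLOv8 Python API and a hypothetical dataset configuration file, is shown below; the batch size of 1 and five epochs follow the settings reported above, while the model variant and image size are assumptions.

# Hedged sketch of fine-tuning a pre-trained YOLOv8 model and evaluating it
# on the validation split. "wind_turbine.yaml" is a hypothetical config that
# would list the train/val/test image paths and defect class names.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")           # pre-trained weights (model size is an assumption)
model.train(
    data="wind_turbine.yaml",        # hypothetical dataset configuration
    epochs=5,                        # five epochs, as reported above
    batch=1,                         # batch size of 1, as reported above
    imgsz=640,                       # assumed input resolution
    device=0,                        # GPU accelerator
)
metrics = model.val()                # precision, recall, and mAP on the validation split
print(metrics.box.map50, metrics.box.map)   # mAP at IoU 0.5 and averaged over 0.5:0.95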
The evaluation metrics provided insights into the model's ability to accurately detect wind turbine
defects. A confusion matrix was generated to assess the false positives and false negatives in defect detection.
The final model was deployed for inference on the test dataset to evaluate its real-world applicability and
effectiveness in detecting turbine defects. The object detection system for wind turbine defect identification
using YOLOv8 involves several key steps, each enhanced by mathematical formulations and data-driven
optimization techniques. Figure 1 shows the flowchart of the proposed method.
- Dataset collection: The dataset consists of high-resolution wind turbine images. Let I denote the image set, where each image I_i ∈ I is an array of pixel values. The images contain defects such as cracks and corrosion, represented by manually annotated bounding boxes B_i = {(x1, y1), (x2, y2)}.
- Image annotation: The images are annotated with bounding boxes B_i, and defect types are labeled as L_i. The labels are assigned to each bounding box such that D = {(B_i, L_i)}, where D is the dataset of defect-labeled bounding boxes.
- Data preprocessing: Each image Ii undergoes preprocessing. The resizing of images is crucial to ensure
compatibility with YOLOv8’s input size, represented by the transformation function:
I_i = f_resize(I_i, h, w) (1)
where h and w are the height and width of the resized image. The aspect ratio is preserved during this
process. Preprocessing also includes normalization to scale pixel values between 0 and 1.
- Data augmentation: Data augmentation is applied to increase the variability of the training set. Augmentation techniques such as random rotation R(θ) and scaling S(S_x, S_y) are applied to each training image, i.e., the augmented image is obtained as S(S_x, S_y) R(θ) I_i. This enhances model robustness by creating diverse training examples from the original images.
- Model training: The YOLOv8 model is trained using a total loss function L_total, which combines classification loss L_class, bounding box regression loss L_bbox, and object confidence loss L_conf. The total loss is given by:
L_total = λ1 · L_class + λ2 · L_bbox + λ3 · L_conf
where λ1, λ2, and λ3 are hyperparameters that balance the contributions of each component. The training is performed using a batch size of 1 and an optimized learning rate η, which is fine-tuned using grid search.
- Model evaluation: The model is evaluated using performance metrics such as precision (P), recall (R), and mean average precision (mAP). Precision and recall are computed as:
P = TP / (TP + FP), R = TP / (TP + FN)
where TP, FP, and FN denote true positives, false positives, and false negatives, respectively. The mAP is calculated by averaging the precision over different recall levels.
- Inference on test data: The trained model is deployed for inference, where the bounding box predictions B_i and their corresponding confidence scores C_i are generated for each test image I_i. A detection is considered successful if the intersection over union (IoU) between the predicted box B_pred and the ground-truth box B_gt exceeds a threshold τ, calculated as:
IoU = area(B_pred ∩ B_gt) / area(B_pred ∪ B_gt) (4)
A computational sketch of the IoU and precision/recall calculations is given after this list.
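The sketch below illustrates these evaluation quantities in plain Python; the corner box format and the 0.5 threshold are assumptions used only for illustration.

# Sketch of the evaluation quantities defined above: IoU between a predicted
# and a ground-truth box, and precision/recall from TP/FP/FN counts.
def iou(box_a, box_b):
    # Intersection over union of two corner-format boxes (x1, y1, x2, y2).
    xa1, ya1, xa2, ya2 = box_a
    xb1, yb1, xb2, yb2 = box_b
    inter_w = max(0.0, min(xa2, xb2) - max(xa1, xb1))
    inter_h = max(0.0, min(ya2, yb2) - max(ya1, yb1))
    inter = inter_w * inter_h
    union = (xa2 - xa1) * (ya2 - ya1) + (xb2 - xb1) * (yb2 - yb1) - inter
    return inter / union if union > 0 else 0.0

def precision_recall(tp, fp, fn):
    # Precision = TP / (TP + FP), Recall = TP / (TP + FN).
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# A prediction counts as a true positive when its IoU with a ground-truth box
# exceeds the threshold tau (0.5 here is an assumed value).
print(iou((10, 10, 50, 50), (20, 20, 60, 60)) >= 0.5)   # False: IoU ≈ 0.39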
The final model’s real-world applicability is assessed based on these metrics. The flowchart in Figure 1
outlines the complete process, from dataset collection to inference on test data, emphasizing each stage's role
in the wind turbine defect detection system. This approach integrates mathematical equations at each stage to
optimize the model's performance and ensure accurate defect identification. Figure 2 shows the visualized
sample images with corresponding annotations.
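For completeness, a hedged sketch of the inference step is given below, assuming the Ultralytics prediction API with an illustrative checkpoint path and confidence cut-off; it prints the predicted boxes B_i and confidence scores C_i for each test image.

# Sketch of inference on held-out test images. The checkpoint path, test
# folder, and confidence threshold are assumptions, not values from the paper.
from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")   # best checkpoint saved during training
results = model.predict(source="dataset/test/images", conf=0.25)
for r in results:
    for box in r.boxes:
        # Predicted defect class, confidence score C_i, and corners of B_i.
        print(r.path, int(box.cls), float(box.conf), box.xyxy[0].tolist())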
4. CONCLUSION
In conclusion, the application of YOLOv8 for wind turbine object detection offers a promising
solution to the challenges of maintaining and inspecting wind farms. The model’s high accuracy and real-
time detection capabilities can reduce the need for manual inspections, improving safety and reducing
operational costs. Furthermore, this approach allows wind farm operators to quickly identify and address
potential defects, ensuring continued optimal performance of turbines. With further improvements and
integration into UAV systems, YOLOv8 can revolutionize wind turbine maintenance, promoting the wider
adoption of renewable energy.
FUNDING INFORMATION
The authors confirm that the research was carried out independently without financial influence.
Name of Author C M So Va Fo I R D O E Vi Su P Fu
Deepa Somasundaram ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
M. Vanitha ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
T. Sathish Kumar ✓ ✓ ✓ ✓ ✓ ✓ ✓
I. Arul Doss Adaikalam ✓ ✓ ✓ ✓ ✓ ✓ ✓
P. Kavitha ✓ ✓ ✓ ✓ ✓ ✓ ✓
R. Kalaivani ✓ ✓ ✓ ✓ ✓ ✓ ✓
C: Conceptualization, M: Methodology, So: Software, Va: Validation, Fo: Formal analysis, I: Investigation, R: Resources, D: Data curation, O: Writing - original draft, E: Writing - review & editing, Vi: Visualization, Su: Supervision, P: Project administration, Fu: Funding acquisition
DATA AVAILABILITY
Raw data is not publicly available due to privacy or institutional restrictions.
REFERENCES
[1] J. Lei, C. Liu, and D. Jiang, “Fault diagnosis of wind turbine based on long short-term memory networks,” Renewable Energy,
vol. 133, pp. 422–432, Apr. 2019, doi: 10.1016/j.renene.2018.10.031.
[2] F. Qu, J. Liu, H. Zhu, and B. Zhou, “Wind turbine fault detection based on expanded linguistic terms and rules using non-
singleton fuzzy logic,” Applied Energy, vol. 262, p. 114469, Mar. 2020, doi: 10.1016/j.apenergy.2019.114469.
[3] M. M. Rezaei, M. Behzad, H. Moradi, and H. Haddadpour, “Modal-based damage identification for the nonlinear model of
modern wind turbine blade,” Renewable Energy, vol. 94, pp. 391–409, Aug. 2016, doi: 10.1016/j.renene.2016.03.074.
[4] S. Sun, T. Wang, H. Yang, and F. Chu, “Damage identification of wind turbine blades using an adaptive method for compressive
beamforming based on the generalized minimax-concave penalty function,” Renewable Energy, vol. 181, pp. 59–70, Jan. 2022,
doi: 10.1016/j.renene.2021.09.024.
[5] A. Wang, Y. Pei, Z. Qian, H. Zareipour, B. Jing, and J. An, “A two-stage anomaly decomposition scheme based on multi-variable
correlation extraction for wind turbine fault detection and identification,” Applied Energy, vol. 321, p. 119373, Sep. 2022, doi:
10.1016/j.apenergy.2022.119373.
[6] S. A. Boyer, SCADA: Supervisory control and data acquisition. Research Triangle Park, NC, United States: International Society
of Automation, 2009.
[7] K.-S. Choi, Y.-H. Huh, I.-B. Kwon, and D.-J. Yoon, “A tip deflection calculation method for a wind turbine blade using
temperature compensated FBG sensors,” Smart Materials and Structures, vol. 21, no. 2, p. 025008, Feb. 2012, doi: 10.1088/0964-
1726/21/2/025008.
[8] X. Dai et al., “Dynamic head: Unifying object detection heads with attentions,” in Proceedings of the IEEE/CVF International
Conference on Computer Vision (ICCV), 2021, pp. 7373–7382.
[9] J. Deng, W. Li, Y. Chen, and L. Duan, “Unbiased mean teacher for cross-domain object detection,” in Proceedings of the
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 4091–4101.
[10] A. Foster, O. Best, M. Gianni, A. Khan, K. Collins, and S. Sharma, “Drone footage wind turbine surface damage detection,” in
2022 IEEE 14th Image, Video, and Multidimensional Signal Processing Workshop (IVMSP), IEEE, Jun. 2022, pp. 1–5, doi:
10.1109/IVMSP54334.2022.9816220.
[11] C.-Y. Fu, W. Liu, A. Ranga, A. Tyagi, and A. C. Berg, “DSSD: deconvolutional single shot detector,” arXiv:1701.06659, 2017,
[Online]. Available: http://arxiv.org/abs/1701.06659
[12] T. Gao, X. Yao, and D. Chen, “SimCSE: simple contrastive learning of sentence embeddings,” Proceedings of the 2021
Conference on Empirical Methods in Natural Language Processing, 2021, pp. 6894–6910, doi: 10.18653/v1/2021.emnlp-
main.552.
[13] R. Girshick, “Fast R-CNN,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1440–
1448.
[14] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic
segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014, pp. 580–587,
doi: 10.1109/CVPR.2014.81.
[15] T. Guo et al., “Nacelle and tower effect on a stand-alone wind turbine energy output—A discussion on field measurements of a
small wind turbine,” Applied Energy, vol. 303, p. 117590, Dec. 2021, doi: 10.1016/j.apenergy.2021.117590.
[16] K. He, X. Zhang, S. Ren, and J. Sun, “Spatial pyramid pooling in deep convolutional networks for visual recognition,” IEEE
Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 9, pp. 1904–1916, Sep. 2015, doi:
10.1109/TPAMI.2015.2389824.
[17] T.-Y. Hsu, S.-Y. Shiao, and W.-I. Liao, “Damage detection of rotating wind turbine blades using local flexibility method and
long-gauge fiber Bragg grating sensors,” Measurement Science and Technology, vol. 29, no. 1, p. 015108, Jan. 2018, doi:
10.1088/1361-6501/aa97f0.
[18] S. Hwang, Y.-K. An, and H. Sohn, “Continuous-wave line laser thermography for monitoring of rotating wind turbine blades,”
Structural Health Monitoring, vol. 18, no. 4, pp. 1010–1021, Jul. 2019, doi: 10.1177/1475921718771709.
[19] M. Kharrich et al., “Optimal design of an isolated hybrid microgrid for enhanced deployment of renewable energy sources in
Saudi Arabia,” Sustainability, vol. 13, no. 9, p. 4708, Apr. 2021, doi: 10.3390/su13094708.
[20] J. A. Carballo, J. Bonilla, L. Roca, and M. Berenguel, “New low-cost solar tracking system based on open source hardware for
educational purposes,” Solar Energy, vol. 174, pp. 826–836, Nov. 2018, doi: 10.1016/j.solener.2018.09.064.
[21] J. A. Carballo, J. Bonilla, M. Berenguel, J. Fernández-Reche, and G. García, “New approach for solar tracking systems based on
computer vision, low cost hardware and deep learning,” Renewable Energy, vol. 133, pp. 1158–1166, Apr. 2019, doi:
10.1016/j.renene.2018.08.101.
[22] W. Li, W. Zhao, T. Wang, and Y. Du, “Surface defect detection and evaluation method of large wind turbine blades based on
an improved Deeplabv3+ deep learning model,” Structural Durability & Health Monitoring, vol. 18, no. 5, pp. 553–575, 2024,
doi: 10.32604/sdhm.2024.050751.
[23] X. Sun, G. Wang, L. Xu, H. Yuan, and N. Yousefi, “Optimal estimation of the PEM fuel cells applying deep belief network
optimized by improved archimedes optimization algorithm,” Energy, vol. 237, p. 121532, Dec. 2021, doi:
10.1016/j.energy.2021.121532.
[24] W. Abitha Memala, C. Bhuvaneswari, S. M. Shyni, G. Merlin Sheeba, M. S. Mahendra, and V. Jaishree, “DC-DC converter based
power management for go green applications,” International Journal of Power Electronics and Drive Systems, vol. 10, no. 4, pp.
2046–2054, Dec. 2019, doi: 10.11591/ijpeds.v10.i4.pp2046-2054.
[25] M. Aqib, R. Mehmood, A. Alzahrani, I. Katib, A. Albeshri, and S. M. Altowaijri, “Smarter traffic prediction using big data, in-
memory computing, deep learning and GPUs,” Sensors, vol. 19, no. 9, p. 2206, May 2019, doi: 10.3390/s19092206.
BIOGRAPHIES OF AUTHORS