Article

A Novel Deep Learning Approach for Yarn Hairiness Characterization Using an Improved YOLOv5 Algorithm

1 MEtRICs Research Center, University of Minho, Campus of Azurém, 4800-058 Guimarães, Portugal
2 Algoritmi Research Centre, School of Engineering, University of Minho, 4800-058 Guimarães, Portugal
3 2Ai, School of Technology, IPCA, 4750-810 Barcelos, Portugal
4 2C2T Research Centre, School of Engineering, University of Minho, 4800-058 Guimarães, Portugal
* Authors to whom correspondence should be addressed.
Appl. Sci. 2025, 15(1), 149; https://doi.org/10.3390/app15010149
Submission received: 21 October 2024 / Revised: 12 December 2024 / Accepted: 19 December 2024 / Published: 27 December 2024

Abstract

In textile manufacturing, ensuring high-quality yarn is crucial, as it directly influences the overall quality of the end product. However, imperfections like protruding and loop fibers, known as ‘hairiness’, can significantly impact yarn quality, leading to defects in the final fabrics. Controlling yarn quality in the spinning process is essential, but current commercial equipment is expensive and limited to analyzing only a few parameters. The advent of artificial intelligence (AI) offers a promising solution to this challenge. By utilizing deep learning algorithms, a model can detect various yarn irregularities, including thick places, thin places, and neps, while characterizing hairiness by distinguishing between loop and protruding fibers in digital yarn images. This paper proposes a novel deep learning approach, specifically an enhanced algorithm based on YOLOv5s6, to characterize different types of yarn hairiness. Key performance indicators include precision, recall, F1-score, mAP@0.5:0.95, and mAP@0.5. The experimental results show significant improvements, with the proposed algorithm increasing the model’s mAP@0.5 by 5–6% and mAP@0.5:0.95 by 11–12% compared to the standard YOLOv5s6 model. A 10-fold cross-validation method is applied, providing an accurate estimate of the performance on unseen data and facilitating unbiased comparisons with other approaches.

1. Introduction

Yarn is fundamental in the textile industry, used to produce fabrics, knits, and other materials. It is typically made from natural fibers (e.g., cotton, wool, or silk), synthetic fibers (e.g., polyester, polyamide, or acrylic), or a combination, twisted together to form a continuous single unit [1].
As the quality and performance of textile products depend on the yarn quality used in their production [2], companies face challenges related to the origin and quality of raw materials. The absence of effective systems for analyzing and controlling yarn quality can lead to significant losses, resulting in customer complaints and financial setbacks. One main issue with yarn is hairiness [3,4,5], which refers to short, fine fibers released from the yarn surface. These fibers can originate from loose fibers not properly incorporated, fibers that broke during processing, or low-quality fibers. Various types of hairiness (Figure 1) can be found in yarns, including protruding and loop fibers. The short fibers that protrude from the yarn surface, due to low-quality fibers or breakage during processing, are called protruding fibers.
Loop fibers are loop-shaped structures that extend outward from the yarn surface, often caused by inadequate processing, such as improper fiber blending or carding. Both types significantly impact the fabric quality and processing. Protruding fibers are visible on the yarn surface, causing issues during textile processing, while loop fibers can create an uneven yarn appearance and may cause problems in the weaving or knitting process [3,4,5].
Given the importance of these defects, it is important to understand the challenges they introduce during production. Key problems caused by loops and protruding fibers [6,7,8,9,10,11,12,13,14] include difficulties in fabric processing (e.g., tangling or weaving failures), yarn wear and breakage, potential damage to machinery, interference with dyeing or printing, decreased strength and durability, and inconsistencies in quality, leading to an uneven and unappealing final product, as shown in Figure 2.
To ensure yarn quality and minimize issues, it is essential to implement proper production practices, including quality control during manufacturing, the careful selection of raw materials, and the adoption of advanced processing technologies to reduce loops and protruding fibers [15]. Additionally, automatic inspection systems can help identify and remove defective yarns before they affect the final products.
Although some commercial solutions, such as USTER TESTER 3 [16], can detect a few parameters related to yarn hairiness, there are currently no systems that utilize deep learning and image processing to fully characterize yarn hairiness [17,18].
To address this gap, our research aims to develop a new methodology using artificial intelligence (AI) to characterize yarn hairiness. Coupled with a low-cost mechatronic system, this methodology enables continuous yarn quality monitoring, precise defect detection, and seamless integration into industrial production lines [17,18]. We have devised a system [19,20] employing AI-driven algorithms for image acquisition, processing, and yarn analysis, offering the following key features:
  • The provision of statistical data on analyzed yarns, generating an analysis report.
  • Comprehensive yarn characterization in a production report with a user-friendly interface.
  • The complete characterization of defects, including hairiness, neps, and thin and thick places.
  • AI integration for the enhanced detection and automatic classification of yarn hairiness, improving accuracy and classification.
To address these challenges, the YOLOv5s6 [21] deep learning detection algorithm was used to detect and characterize yarn hairiness. In recent years, deep learning has gained significant traction in various engineering domains [22,23,24,25,26,27,28,29]. In target detection, it has introduced solutions like Fast R-CNN, Faster R-CNN, SSD, and the YOLO family of algorithms [30]. Initially proposed by Redmon et al. [31] in 2016, the YOLO algorithm revolutionized target detection by enabling classification and localization through neural networks. The YOLO series has evolved from YOLOv1 to the latest iteration, YOLOv8 [32,33,34,35]. YOLOv5 stands out for its compact size, high speed, and superior accuracy, leveraging the mature PyTorch ecosystem for deployment. The YOLOv5 family includes four variants: YOLOv5s, YOLOv5m, YOLOv5l, and YOLOv5x, with YOLOv5s6 being the lightest, at 14.4 MB [32,33]. A minimal example of this deployment path is sketched below.
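As an illustration of the PyTorch-based deployment mentioned above, the following minimal sketch loads a pretrained YOLOv5s6 checkpoint through torch.hub and runs it on a single image; the image file name is a hypothetical placeholder, and in practice fine-tuned yarn weights would replace the pretrained checkpoint.

```python
# Minimal sketch: YOLOv5s6 inference via torch.hub; 'yarn_sample.jpg'
# is a hypothetical image, and pretrained COCO weights stand in for
# fine-tuned yarn weights.
import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s6', pretrained=True)
results = model('yarn_sample.jpg')
results.print()                  # per-image detection summary
boxes = results.xyxy[0]          # tensor of [x1, y1, x2, y2, conf, class]
```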
The main contributions of our research in this paper are as follows:
  • The optimization of the YOLOv5s6 algorithm for yarn hairiness detection, resulting in significant improvements in accuracy, especially for protruding and loop fibers, with a 5–6% increase in mAP@0.5 and an 11–12% improvement in mAP@0.5:0.95.
  • The integration of advanced modules (C2f, Bot-Transformer, GeLU) for more effective defect detection, particularly in capturing and classifying complex hairiness in yarn images.
  • Validation through k-fold cross-validation with 10 splits, ensuring the model’s robustness and consistent performance in detecting loops and protruding fibers.
  • The complete and automatic characterization of yarn defects, providing a comprehensive analysis and classification of defects like “hairiness” using advanced Deep Learning techniques.
The use of AI, specifically deep learning (DL) techniques, in this work is justified by the capacity of this technology to overcome the limitations of traditional methods for detecting defects in yarns, especially for specific types of hairiness, such as loop fibers and protruding fibers.
The main reasons why the automatic detection and characterization of yarn hairiness can be improved using artificial intelligence are as follows:
  • Irregular Visual Characteristics of Hairiness Classes: DL can handle complex features, such as the irregular shape of loop fibers and protrusion fibers, which are difficult for traditional methods.
  • Accuracy and Adaptability: DL adapts to different types of yarns, colors, and lighting conditions, offering greater flexibility than optical sensors or conventional methods.
  • Simultaneous Classification: DL allows the detection and classification of multiple defects (loop fibers and protruding fibers) in a single step, optimizing time and resources.
  • Superior Performance: our improved algorithm (YOLOv5s6-Hairiness) showed significant improvements in metrics such as mAP and accuracy, outperforming commercial systems and standard YOLO variants.
  • Scalability and Cost-Effectiveness: DL-based solutions are more economical and portable than systems like USTER TESTER, which are very expensive and less flexible.
  • Industrial Impact: DL improves the quality of the final yarn and fabric, reducing defects, waste, and operational costs, in addition to enabling real-time inspection.
This paper is organized as follows: Section 2 analyzes previous work on yarn and fabric analysis using improved deep learning algorithms and compares the developed system with existing ones. Section 3 describes the YOLOv5s6 architecture, the default network, and the proposed model, with a focus on the modified modules. Section 4 presents the experimental results and discussion. Finally, Section 5 provides conclusions and suggestions for future research.
Below is a diagram, in Figure 3, that presents the main stages of the development of the research described in this article. The diagram ranges from the identification of the hairiness problem in textile yarns and the limitations of existing solutions, through the literature review and definition of the objectives, to the practical steps of preparing the data set, developing the optimized YOLOv5s6 algorithm, a detailed experimental analysis, and, finally, the conclusions and proposals for future work.

2. Literature Review on Yarn Quality Analysis

Technological evolution has significantly advanced systems for measuring yarn quality and detecting defects using AI. This review compares solutions and projects that have explored technologies for obtaining yarn characteristics, divided into two groups: systems based on image processing/computer vision [36] and systems combining image processing/computer vision with AI techniques [10,37,38].

2.1. Systems Based on Image Processing

Zhisong Li et al. [39] developed a computer vision system (referred to as System A in this paper) to evaluate yarn quality, focusing on the yarn diameter and hairiness defects. In their prototype, a Diameter Image Processing Unit (DIPU) was defined, selecting sampling points from the moving yarn. Each DIPU was segmented from the captured yarn images into foreground, background, and unknown regions. The Poisson Matting method [36] further processed the unknown region using a connectivity classifier to separate the yarn from the background. Yarn defects were analyzed using statistical methods to evaluate the yarn quality.
Another project, referred to as System E in this paper, was developed by Deng Zhang et al. [40] for yarn hairiness detection, focusing on accurately separating and detecting crossover fibers. This system relies solely on offline image processing. The goal was to develop an algorithm to improve the precision of the hairiness measurement by separating the crossover fibers. The results show that the algorithm measured hair lengths 11.1% longer than manual methods, while commercial devices underestimated hair length by 62.5%. In summary, this work introduces an algorithm that converts images into hair skeletons to accurately measure the crossed fiber lengths in yarns.

2.2. Systems Based on Computer Vision and Artificial Intelligence

Another system, referred to as System B in this paper, was developed by Noman Haleem et al. [41], employing image processing, computer vision, and AI. The authors emphasized real-time yarn testing to minimize the process latency and quickly assess the yarn quality. They proposed an online uniformity test system specifically designed to detect neps, using three computer vision models based on the Viola–Jones algorithm [42], developed with the OpenCV library [41]. The results were validated by comparing them with existing commercial uniformity testing equipment (USTER’s TESTER).
Regarding the application of AI to modeling yarn tensile properties, Adel El-Geiheini et al. [37] developed System C (as referred to in this paper), applying image processing techniques and artificial neural networks to model the tenacity and elongation of different yarn types. The authors used feed-forward neural networks trained with the backpropagation rule (the Levenberg–Marquardt algorithm in MATLAB 2015) [43], creating two systems: one for assessing the elongation and tenacity of cotton yarns and another for mixed yarns. By combining image enhancement with a multilayer neural network, they achieved favorable results in estimating various yarn characteristics.
Another project, referred to as System D in this paper, was developed by Manal R. Abd-Elhamied et al. [38], focusing on predicting cotton yarn characteristics using a combination of image processing and artificial neural networks. The research evaluated the yarn tenacity, elongation, linear mass, coefficient of mass variation, and the percentage of imperfections in spun and compact cotton yarns. All yarn characteristics used in this study were acquired from commercially available equipment.

2.3. Comparative Analysis of Existing Systems

A comparative analysis between all the studied systems referred to in this section is presented in Table 1.
By analyzing Table 1, several observations can be made based on the listed features:
(a) Mechatronic Prototype Development: System A has developed a mechatronic prototype. Systems B, C, and D do not have this feature.
(b) Non-destructive Prototype: none of the systems have a non-destructive prototype.
(c) Yarn Winding and Unwinding System: none of the systems include a yarn winding/unwinding capability.
(d) Image or Video Analysis: all systems use image analysis; none uses video.
(e) Use of Vision System or AI: Systems B, C, and D combine vision systems with AI, without deep learning techniques.
(f) Detect Defects: Systems A and E use only vision systems, while B, C, and D combine them with AI.
(g) Specific Yarn Quality Parameters:
  • System A detects thick places, thin places, neps, the yarn diameter, and the hairiness coefficient.
  • System B detects neps and the coefficient of mass variation.
  • System C measures elongation at break and tenacity.
  • System D measures thick places, thin places, neps, the linear mass, the coefficient of mass variation, elongation at break, and tenacity.
  • System E measures the lengths of the crossed fibers.
In summary, no system is capable of obtaining most of the specific yarn quality parameters.
(h) Integration in Production Lines: only Systems A and B are integrable into production lines; C and D depend on the USTER TESTER 3.
(i) Online or Offline Image Acquisition: Systems A and B allow online image acquisition; C, D, and E are offline only.
(j) Spectral Analysis Based on Image Processing: none of the systems perform image-based spectral analysis.
(k) Use of Deep Learning Techniques to Detect Defects: none of the systems use deep learning techniques to detect defects, let alone to detect and classify yarn hairiness specifically.
(l) Yarn Hairiness Analysis: only System E analyzes yarn hairiness, but without covering other parameters.

2.4. Identification of Gaps and Proposed Solution

Systems A, B, C, D, and E present significant limitations in yarn analysis, preventing a comprehensive quality assessment. To overcome these limitations, a comprehensive system with a mechatronic prototype is proposed, which:
  • Detects various yarn characteristics and defects, including thick places, the yarn diameter, linear mass, volume, and number of loose fibers, among others.
  • Conducts an analysis based on images and videos.
  • Uses deep learning techniques for defect detection, particularly yarn hairiness classification.
  • Is non-destructive, low-cost, and easily integrates into textile production lines.
It is important to note that this article focuses only on the detection and classification of yarn hairiness using deep learning techniques, in comparison with Systems A, B, C, D, and E. The following sections present the deep learning algorithms employed by the proposed system, outlining their significance and functionality in detecting yarn hairiness.
By utilizing these algorithms, the proposed system provides precise analysis, addressing these challenges and providing a comprehensive solution for quality assessments in the textile industry.

3. Materials and Methods

3.1. Yarn Hairiness Identification Using Yolo Algorithm

In this work, a custom improved model, YOLOv5s6-Yarn Hairiness, is used to automatically detect and classify yarn hairiness defects such as protruding and loop fibers. The choice of the YOLOv5s6 version over YOLOv8 was made for the following reasons [44]:
  • The preliminary tests showed that YOLOv5s6 outperformed other variants, including YOLOv8, in the overall results.
  • YOLOv5s6 was prioritized for its superior processing speed, crucial for the instant detection and classification of fibers with minimal latency.
  • Given the availability of GPU resources, YOLOv5s6 was optimal for its performance on GPUs, accelerating fiber detection.
  • YOLOv5s6 offers a competitive balance between accuracy and speed, meeting the needs of the application.
  • YOLOv5 is user-friendly, especially with its PyTorch foundation, facilitating easy deployment.
The following sections detail the YOLOv5s6 detection method and the improvements implemented, including the Bot-Transformer module, MHSA block, and C2f module. The hyperparameters and changes made to the algorithm, as well as the activation functions used, are also discussed.

3.2. YOLOv5s6 Algorithm Structure

The YOLOv5s6 model consists of three parts (Figure 4), namely [45,46,47]:
(1) BACKBONE: YOLOv5 uses a convolutional backbone to extract features from the input images, performing initial convolutions and extracting low-level representations.
(2) NECK: some variants of YOLOv5 use a “neck”, which is a sequence of convolutional layers that help combine features from different scales, improving the detection of objects of different sizes.
(3) HEAD OR DETECT: the “Head” or “Detect” is the final part of the model, where object detection takes place, predicting the bounding boxes and classes of objects present in the image.
YOLOv5 provides four different scales for its model: S, M, L, and X, representing Small, Medium, Large, and Extra Large. Each scale applies a different multiplier to the depth and width of the model, meaning the structure remains constant while the size and complexity vary. The C3 module (Figure 4) includes three layers of standard convolution and several bottleneck modules, with the SiLU activation function used in the convolution module [45,46,47].
The fusion of local and global features enhances the accuracy, especially for complex multi-target detection [45,46,47].

3.3. Improved YOLOv5s6 Yarn Hairiness Structure

The YOLOv5s6-Yarn Hairiness deep neural network was designed with selected modules to detect and classify yarn hairiness. The improvements implemented were as follows:
  • The integration of the C2f Module of the YOLOv8 advanced version: the addition of the Cross-Stage Partial Networks Fusion (C2f) module enhances feature fusion at different stages, significantly improving the detection of fine details in yarn images, crucial for identifying hairiness types.
  • Bot-Transformer modules in the neck: these modules, using multi-head self-attention mechanisms, allow the model to focus on relevant features in complex yarn images, improving accuracy in detecting loops and protruding fibers.
  • GeLU activation function: Gaussian Error Linear Unit (GeLU) activation functions were used in certain layers to replace ReLU or SiLU, increasing the model’s sensitivity to detect loop and protruding fibers.
  • Optimization of hyperparameters: The anchor_t value was set to 5.0 to prioritize larger anchors, enhancing the model’s ability to detect varying sizes of hairiness in high-resolution images. Additionally, the scale factor was adjusted to 2.0, allowing the neural network to resize images and capture more details, making smaller features like loop and protruding fibers more detectable.
Among the available versions of YOLOv5 and YOLOv8, the YOLOv5s6 architecture was selected: in several preliminary tests, this model gave the best overall results compared to the other variants of the algorithm. Given the remarkable performance of YOLOv5s6 in hairiness detection, an improved network called YOLOv5s6-Yarn Hairiness, incorporating a Bot-Transformer, a C2f module, and a Gaussian Error Linear Unit (GeLU) activation function, was developed, as shown in Figure 5.
When analyzing loop and protruding fibers in yarns using the YOLOv5s6 model, each block enhances the detection of these hairiness types:
(1) BACKBONE
  • Function: extracts relevant features from yarn images, capturing edges, textures, and patterns.
  • Impact: an efficient backbone improves the accuracy of detecting distinctive features of loop and protruding fibers.
(2) NECK
  • Function: combines information at different scales, supporting the detection of objects of varying sizes.
  • Impact: As loop and protruding fibers can vary in size, the Neck helps the model capture contextual information at multiple scales. This is especially useful for detecting smaller or more diffuse hairiness, improving accuracy by considering the context around these features.
(3) HEAD
  • Function: responsible for object detection, including generating predictions for bounding boxes and object classes.
  • Impact: The Head enables the model to accurately locate and classify fibers. Bounding box predictions define the position and size of hairiness, while class predictions differentiate between loop and protruding fibers.
Each block contributes critically to the detection process. The Backbone extracts features from the fibers, the Neck integrates information at multiple scales to capture contextual details, and the Head finalizes the detection, providing precise information to identify loop and protruding fibers accurately.

3.4. YOLOv5s6 Yarn Hairiness Improvements

The improved YOLOv5s6-Yarn Hairiness model introduces four key modifications compared to the default YOLOv5s6.

3.4.1. CBG Module—Activation Function

Here, the replacement of the SiLU activation function with GeLU is discussed [47]. The decision to switch to GeLU was based on the better results obtained from a comprehensive mathematical analysis performed beforehand (Figure 5B). A minimal code sketch of the resulting blocks is given at the end of this subsection.
This CBG block is used in several other blocks (Figure 5C) and in two different bottleneck blocks (Figure 5D).
The C3 module is a crucial part for feature extraction, consisting of three CBG modules and several stacked Bottleneck blocks. The C3_x notation indicates that there are x number of stacked Bottleneck blocks. In Figure 5E [45,46,47], the feature map is split into two branches after entering the C3 module. One branch passes through the CBG and Bottlenecks, while the other passes through the CBG only. The two branches are then concatenated and processed through another CBG module.
Each Bottleneck block consists of two CBG modules: the first performs a 1 × 1 convolution to halve the number of channels, and the second performs a 3 × 3 convolution to double them. Reducing the dimensionality first allows the convolutional kernel to better understand the feature information, while increasing the dimensionality helps extract more detailed features. The residual structure adds input and output, preventing the vanishing gradient problem. The main purpose of the C3 function is to capture complex patterns and details as the network depth increases. Here, 3 × 3 convolutions are widely used because they enable the neural network to learn richer, more abstract representations. After extensive testing, the GeLU activation function was selected for the optimized YOLOv5 algorithm for the following reasons:
  • Improved Non-linearities: GeLU captures complex patterns of loop and protruding fibers better than SiLU, enhancing the detection of various yarn hairiness types.
  • Mitigation of Vanishing Gradients: GeLU’s smooth derivatives prevent vanishing gradients, ensuring the more stable training of deep neural networks.
  • Improved Training Stability: by approximating the cumulative Gaussian distribution, GeLU enhances the training stability, leading to more precise weight adjustments and better detection accuracy.
  • Superior Performance: GeLU outperformed other activation functions in metrics, as shown in Section 4 of this study.
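The CBG and Bottleneck structure described above can be sketched in PyTorch as follows; this is a minimal illustration of the stated design (1 × 1 convolution halving the channels, 3 × 3 convolution restoring them, residual add, and GeLU in place of SiLU), with class names and defaults that are assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class CBG(nn.Module):
    """Conv-BatchNorm-GeLU: the standard YOLOv5 conv block with the SiLU
    activation swapped for GeLU, as described in Section 3.4.1."""
    def __init__(self, c_in, c_out, k=1, s=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.GELU()  # replaces nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class Bottleneck(nn.Module):
    """Two CBG modules: a 1x1 conv halves the channels and a 3x3 conv
    restores them; the residual add mitigates vanishing gradients."""
    def __init__(self, c, shortcut=True):
        super().__init__()
        self.cv1 = CBG(c, c // 2, k=1)
        self.cv2 = CBG(c // 2, c, k=3)
        self.add = shortcut

    def forward(self, x):
        y = self.cv2(self.cv1(x))
        return x + y if self.add else y
```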

3.4.2. C2F Module

The last C3_1 module in the Backbone was replaced with the C2f module [43] from the newer YOLOv8 model due to its superior performance and optimization benefits. The C2f module, which incorporates ideas from the C3 module and ELAN [47,51,52], was tested at different locations within the model to assess its impact on network performance. The C2f modules are shown in Figure 5F [48,49,50], and a structural sketch is given below.
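The following is a minimal PyTorch sketch of a C2f block of this kind, following the public YOLOv8 design (split, a chain of bottlenecks whose intermediate outputs are all concatenated, then a final projection); it reuses the CBG and Bottleneck classes sketched in Section 3.4.1 and is illustrative rather than the authors' exact code.

```python
import torch
import torch.nn as nn

class C2f(nn.Module):
    """YOLOv8-style C2f sketch: project, split in two, run n bottlenecks,
    and concatenate every intermediate output before a final 1x1 CBG.
    Reuses the CBG and Bottleneck classes sketched in Section 3.4.1."""
    def __init__(self, c_in, c_out, n=1):
        super().__init__()
        self.c = c_out // 2
        self.cv1 = CBG(c_in, 2 * self.c, k=1)
        self.cv2 = CBG((2 + n) * self.c, c_out, k=1)
        self.m = nn.ModuleList(Bottleneck(self.c) for _ in range(n))

    def forward(self, x):
        y = list(self.cv1(x).chunk(2, dim=1))  # split into two branches
        for m in self.m:
            y.append(m(y[-1]))                 # keep every intermediate map
        return self.cv2(torch.cat(y, dim=1))   # fuse all branches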

3.4.3. Bot-Transformer Module and MHSA Block

The C3_1_F block in the Neck was replaced with the Bot-Transformer module [53] (BoTNet) in the neural network. BoTNet, based on ResNet, replaces the standard 3 × 3 convolution with a Multi-Head Self-Attention (MHSA) mechanism. This change significantly improves the detection and segmentation performance while reducing the number of model parameters [48,49,50]. The Bot-Transformer module (Figure 5G(a,b)) adds self-attention to various computer vision tasks, such as object detection and instance segmentation.
This mechanism enables the network to learn important relationships between different parts of images, improving the performance on complex tasks [48,49]. To apply BoTNet to YOLOv5, researchers adapted the C3 module of YOLOv5s6 by replacing the original Bottleneck with a Bottleneck Transformer, which incorporates the self-attention mechanism [48,49,50].
The MHSA block is the central component of the Bottleneck Transformer used in the BoTNet. The MHSA is responsible for incorporating self-attention into various computer vision tasks, such as object detection and instance segmentation. Its structure is illustrated in Figure 6.
Figure 6 illustrates the key components and operations of the MHSA block, shedding light on how this mechanism enhances the understanding of relationships within data sequences [48,49,50]; a code sketch follows the list below:
  • q, k, and v matrices: these matrices represent queries (q), keys (k), and values (v) and are integral to understanding input sequences.
  • $qk^T$ (Dot Product): $qk^T$ represents the dot product between the queries (q) and the transposed keys (k). This dot product quantifies the similarity between queries and keys, serving as the foundation for calculating attention weights.
  • $qr^T$ (Relative Position): $qr^T$ symbolizes the dot product between the queries (q) and the transposed relative positions (r). The matrix r encodes positional or distance information between sequence elements.
  • $qk^T + qr^T$ (Combined Attention): this expression represents the summation of the attentions derived from queries and keys and from positional information. This cumulative measure captures both the overall attention and the attention relative to the sequence’s positions.
  • Softmax($qk^T + qr^T$): the Softmax function is applied to the result of the previous step. Softmax normalizes the values, mapping them to a range between 0 and 1; larger values receive higher importance, directing attention towards the most relevant elements in the sequence.
  • Weighted Multiplication: the result of the Softmax operation is multiplied by the value matrix (v).
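A compact PyTorch sketch of this MHSA computation is shown below; it assumes a feature map of fixed spatial size with learned per-axis relative position embeddings, and its shapes and naming follow the public BoTNet design rather than the authors' exact code.

```python
import torch
import torch.nn as nn

class MHSA(nn.Module):
    """BoTNet-style MHSA sketch: softmax(qk^T + qr^T) v on a 2D feature
    map whose spatial size must match (width, height)."""
    def __init__(self, dim, width, height, heads=4):
        super().__init__()
        self.heads = heads
        self.q = nn.Conv2d(dim, dim, kernel_size=1)
        self.k = nn.Conv2d(dim, dim, kernel_size=1)
        self.v = nn.Conv2d(dim, dim, kernel_size=1)
        # learned relative position embeddings, one per spatial axis
        self.rel_h = nn.Parameter(torch.randn(1, heads, dim // heads, 1, height))
        self.rel_w = nn.Parameter(torch.randn(1, heads, dim // heads, width, 1))

    def forward(self, x):
        b, c, h, w = x.shape
        d = c // self.heads
        q = self.q(x).view(b, self.heads, d, h * w)
        k = self.k(x).view(b, self.heads, d, h * w)
        v = self.v(x).view(b, self.heads, d, h * w)
        content = torch.matmul(q.transpose(-2, -1), k)        # qk^T
        pos = (self.rel_h + self.rel_w).view(1, self.heads, d, h * w)
        position = torch.matmul(q.transpose(-2, -1), pos)     # qr^T
        attn = torch.softmax(content + position, dim=-1)      # combined attention
        out = torch.matmul(v, attn.transpose(-2, -1))         # weighted values
        return out.view(b, c, h, w)

# Example: MHSA(dim=256, width=20, height=20)(torch.rand(1, 256, 20, 20))
```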
The C2f, Bot-Transformer, and GeLU modules were incorporated into the YOLOv5s6 model for their abilities to enhance feature fusion, focus on relevant features, and handle nonlinearities.
  • C2f (Cross-Stage Partial Networks Fusion): improves feature fusion at different network stages, enhances the gradient flow, and extracts detailed features, crucial for detecting fine details like protruding and loop fibers.
  • Bot-Transformer: uses multi-head self-attention to focus on relevant features, improving the detection and classification of hairiness defects and increasing the accuracy across various scales.
  • GeLU (Gaussian Error Linear Unit): handles nonlinearities, increases sensitivity to fiber variations, stabilizes training, and improves the convergence speed, enhancing the overall model performance and accuracy through smooth activation and a better gradient flow.
These modules were chosen over alternatives like deep convolution, ghost convolution, simple transformer blocks, CSP, and SPP bottleneck blocks for their superior ability to capture complex details. This capability is crucial for highlighting small details in high-resolution images, essential for accurately detecting loop and protruding fibers in yarn.

3.4.4. High-Level Hyperparameters

The adjustments to the high-level hyperparameters “anchor_t” and “scale” were made to enhance the model’s ability to learn and adapt to the “loop fibers” class. The modified values for these hyperparameters were as follows [54,55,56]:
(1) anchor_t: 5.0
  • The threshold for assigning anchor boxes was set to 5.0, meaning that targets up to about five times larger than an anchor can still be matched to it.
  • This adjustment was useful for detecting large or unusually sized objects, like loop and protruding fibers, in 1280 × 1280 pixel images.
  • Raising this parameter reduced false positives in noisy or complex backgrounds, enhancing the detection of larger fiber structures.
(2) scale: 2.0
  • Images were rescaled to approximately 200% to 300% of the standard resolution, increasing the scale to better detect fine details in fibers.
  • This scaling improved the model’s ability to identify details in high-resolution images but required more computational resources.
  • These adjustments were crucial for accurate detection in our specific yarn dataset.
After several tests, the values for “anchor_t” and “scale” were chosen based on the specific characteristics of the yarn dataset and the nature of the loop and protruding fibers to be detected; a training sketch with these overrides follows.
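As a minimal sketch, assuming a standard ultralytics/yolov5 checkout, the two overrides can be written into a copy of a default hyperparameter file and passed to train.py; the dataset YAML and output file names below are hypothetical.

```python
# Sketch: training YOLOv5s6 with the two hyperparameter overrides discussed
# above. Assumes a cloned ultralytics/yolov5 repository; 'yarn_hairiness.yaml'
# and 'hyp_yarn.yaml' are hypothetical file names.
import subprocess
import yaml

with open('data/hyps/hyp.scratch-low.yaml') as f:
    hyp = yaml.safe_load(f)
hyp['anchor_t'] = 5.0   # anchor-multiple threshold (default 4.0)
hyp['scale'] = 2.0      # image scale augmentation gain
with open('hyp_yarn.yaml', 'w') as f:
    yaml.safe_dump(hyp, f)

subprocess.run(['python', 'train.py',
                '--img', '1280',                  # high-resolution input
                '--weights', 'yolov5s6.pt',
                '--data', 'yarn_hairiness.yaml',  # hypothetical dataset file
                '--hyp', 'hyp_yarn.yaml'],
               check=True)
```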

3.5. Dataset Preparation

The proposed yarn hairiness dataset contributes to the research community by focusing on protruding and loop fibers, filling a gap with specific and comprehensive data on yarn hairiness. This dataset is a valuable tool for textile engineering, distinguished from existing datasets. It consists of images of purple cotton yarn with an average linear mass of 56.4 tex, captured at 22× magnification using a 1.6 Mpixel camera at 238 FPS, acquired with a custom-designed mechatronic prototype [18] (Figure 7).
Commercially available equipment, such as the USTER TESTER, is characterized by the following:
  • A high cost—exceeding EUR 100k;
  • A high weight, limiting portability;
  • Destructive sample analysis;
  • A limited capability to analyze yarn parameters.
While this article highlights the high efficiency and cost-effectiveness of the developed system, it acknowledges that a more explicit analysis of cost savings was not initially included. The total cost of the developed mechatronic system (hardware and software) is approximately EUR 1300, compared to over EUR 100k for the commercial USTER TESTER. The developed prototype is thus roughly 75 times less expensive, representing substantial savings in acquisition costs alone, without compromising the effectiveness of yarn quality monitoring.
The dataset [57,58], available in Mendeley Data, comprises 684 original images, which underwent augmentation to yield 1644 images. These images were accompanied by 11,037 annotations extracted from videos showcasing 100 m of purple cotton yarn. The videos were captured and hosted on Roboflow [59].
This dataset facilitates the development of machine learning models for yarn hairiness detection and advances AI in textile science. It was categorized into two annotation classes: protruding fibers and loop fibers. Using this dataset, tests and training were conducted to build a robust model for real-time hairiness identification and classification. The annotation was carried out in a polygonal mode [58,59,60,61] using LabelME [60] (Figure 8).
Of the 684 images in the dataset, 70% were used for training (480 images), 20% for validation (138 images), and the remaining 10% for testing (66 images). This dataset was generated using augmentation techniques [18,58]. An average of approximately 16 annotations per image was obtained, resulting in a total of 6733 annotations for loop fibers and 4304 annotations for protruding fibers (Table 2).
In the dataset used, specifically for training, data augmentation was performed through Roboflow, and the following techniques were selected (an approximate offline equivalent is sketched after this list):
  • Flip: horizontal, vertical;
  • Saturation: between −25% and +25%;
  • Blur: up to 5% of pixels;
  • Noise: up to 5% of pixels.
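The augmentation itself was performed in Roboflow's interface; purely as an approximate offline equivalent, the following Albumentations sketch mirrors the listed transforms (the parameter mappings are assumptions, not Roboflow's exact settings).

```python
# Approximate offline equivalent of the Roboflow augmentations listed above;
# probabilities and limits are illustrative assumptions.
import albumentations as A

augment = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.VerticalFlip(p=0.5),
        A.ColorJitter(brightness=0, contrast=0, saturation=0.25, hue=0, p=0.5),
        A.Blur(blur_limit=3, p=0.5),   # mild blur
        A.GaussNoise(p=0.5),           # light pixel noise
    ],
    bbox_params=A.BboxParams(format='yolo', label_fields=['class_labels']),
)
```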

4. Experimental Results and Discussion

Currently, no existing systems employ methodologies similar to those presented in this study. Traditional systems like the USTER TESTER 3 and basic image processing methods do not support the deep learning-based detection and classification of yarn hairiness. As mentioned in Section 2, this makes direct comparisons with traditional methods impossible, highlighting the innovative nature of our system.
This study required a GPU with CUDA [62] support and the PyTorch framework for the experimental setup. The YOLOv5s6 deep learning model leveraged parallel computing for fast and accurate learning and object detection. The GPU with CUDA accelerated the performance, while PyTorch was used for model construction and training. Table 3 details the hardware and software configurations.
The definitions and specifications of the training parameters of the neural network are listed in Table 4.

4.1. Relevant Evaluation Metrics

In order to completely and objectively evaluate the performance of the proposed model, we used some of the most commonly used metrics in YOLOv5, as indicated below [58,59,60,61]:
  • Average Precision (AP)—Measures the detection accuracy at different confidence levels, calculating the area under the Precision–Recall curve. AP is computed as follows:
    $AP = \frac{1}{n} \sum_{r} P(r)\, \Delta r$
    where n is the number of threshold points, P(r) is the Precision at recall point r, and Δr is the recall difference between consecutive points.
  • Mean Average Precision (mAP)—The average of AP across the different object classes, giving an overall performance metric. It is computed as follows:
    $mAP = \frac{1}{C} \sum_{i=1}^{C} AP_i$
    where C represents the number of categories. Variants include mAP@0.5 (IoU = 0.50) and mAP@0.5:0.95 (average over IoU thresholds from 0.5 to 0.95).
  • IoU (Intersection over Union)—measures the overlap between the predicted and ground truth bounding boxes:
    $IoU = \frac{\text{Intersection Area}}{\text{Union Area}}$
  • Accuracy—The proportion of correct detections relative to total detections, calculated as follows:
    $Accuracy = \frac{\text{Number of Correct Detections}}{\text{Total Number of Detections}} = \frac{TP + TN}{TP + TN + FP + FN}$
    where
True Positive (TP): the number of samples correctly classified as positive.
False Positive (FP): the number of samples incorrectly classified as positive.
False Negative (FN): the number of samples incorrectly classified as negative.
True Negative (TN): the number of samples correctly classified as negative.
  • Precision—measures the proportion of correct positive detections:
    $Precision = \frac{TP}{TP + FP}$
  • Recall—measures the proportion of correct detections among true objects:
    $Recall = \frac{TP}{TP + FN}$
  • F1-Score—the harmonic mean of Precision and Recall:
    $F1\text{-}Score = \frac{2 \times Precision \times Recall}{Precision + Recall}$
  • Confusion Matrix—a table showing correct and incorrect detections for each class.
  • Performance metric curves during model training and validation—graphs visualizing metrics during training and validation to assess model effectiveness and detect overfitting [63,64].
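As a reference sketch for clarity, the box-level metrics defined above can be implemented directly as follows (YOLOv5's validation script computes these internally):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def precision_recall_f1(tp, fp, fn):
    """Precision, Recall, and F1-Score from detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Example: two heavily overlapping boxes give a high IoU.
print(iou([0, 0, 10, 10], [1, 1, 10, 10]))   # ~0.81
```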

4.2. Confusion Matrix and Analysis

The confusion matrix was used to analyze the classification model’s performance, including True Positives, True Negatives, False Positives, and False Negatives. Figure 9 shows the confusion matrix for the YOLOv5s6-Yarn Hairiness model, which achieved an average accuracy of 65% for loop fibers and 75% for protruding fibers. This evaluation provided a thorough understanding of the model’s predictive capabilities for different fiber classifications.

4.3. Performance Metrics Curves

To check for overfitting, compare the training and validation losses shown in Figure 10:
  • Convergence of Losses: if both decrease and stabilize closely, the model generalizes well.
  • Divergence of Losses: if training loss decreases while validation loss increases, it likely indicates overfitting.
Based on the performance metrics curves for YOLOv5s6—Yarn Hairiness in Figure 10, we can assess potential overfitting. Key observations from the graphs include:
  • train/box_loss and val/box_loss: both box loss curves decrease consistently, with no significant difference, indicating good generalization.
  • train/obj_loss and val/obj_loss: object loss curves decrease with minor fluctuations in validation, but no significant difference, suggesting no clear overfitting.
  • train/cls_loss and val/cls_loss: classification losses decrease similarly for both curves, indicating good model generalization.
  • metrics/precision: precision steadily increases and stabilizes for both training and validation, with no large differences, indicating no overfitting.
  • metrics/recall: recall increases and stabilizes similarly in both sets, with a minimal difference between training and validation, indicating good generalization.
  • metrics/mAP_0.5 and metrics/mAP_0.5:0.95: the mean average precision (mAP) metrics increase steadily, with close proximity between the training and validation curves, indicating good generalization and no clear overfitting.
In conclusion, based on the graphs in Figure 10, there is no clear evidence of overfitting. The training and validation loss and metric curves are close, indicating that the YOLOv5s6 model generalizes well to the validation data. If overfitting exists, it is minimal. The metrics demonstrate the quality and performance of the neural network during training and testing, specifically for yarn hairiness detection.

4.4. Comparison of Results with Other Algorithms

Table 5 compares the proposed system, using the improved YOLOv5s6-Yarn Hairiness algorithm, with state-of-the-art object detection methods, highlighting the improvements in bold.
The metrics for the baseline algorithms (YOLOv5s6, YOLOv8, YOLOv7, YOLOv5n, and Fast R-CNN) were obtained through direct implementation and evaluation using the same dataset and experimental conditions described in this article. For consistency and fairness in the comparisons:
  • All baseline models were trained and tested on the same dataset with identical augmentation techniques and hardware settings as the proposed model.
  • The default implementations of the models, as provided in their official repositories, were used for training and evaluation.
By following this approach, the performance comparisons are valid and accurately reflect the relative strengths and weaknesses of the proposed model and the original algorithms.
The mAP@0.5 of the optimized YOLOv5s6-Hairiness framework showed a 5.4% improvement over the YOLOv5s6 Default, 7.5% over YOLOv8, 13.9% over YOLOv7, 7.4% over YOLOv5n, and 33.41% over Fast R-CNN. This demonstrates higher accuracy in detecting loop and protruding fibers, proving the effectiveness of the implemented improvements.

4.5. Obtained Results

An ablation experiment was conducted to evaluate the effectiveness of each module using YOLOv5s6-Yarn Hairiness as the benchmark. The analysis was performed on the generated dataset, with the results presented in the following tables. To further validate the improved model, several test images were selected. Figure 11 shows the detection results for the optimized YOLOv5s6-Yarn Hairiness algorithm and the original YOLOv5s6 at a confidence threshold of 0.2. Figure 12 presents the results at a threshold of 0.3, and Figure 13 at a threshold of 0.5.
An experimental study evaluated the impact of components like the C2f module, Bot-Transformer, GeLU activation function, and enhanced hyperparameters on model performance. The proposed model was also compared to state-of-the-art object detection frameworks, demonstrating higher accuracy.
Training a YOLOv5 model with data augmentation showed significant differences in performance and generalization. The results in Table 6 are divided into two configurations:
(a) Yarn hairiness detection with augmentation using the default YOLOv5s6 algorithm.
(b) Yarn hairiness detection with augmentation using the proposed optimized YOLOv5s6-Yarn Hairiness algorithm.
These configurations were analyzed to assess the impact of data augmentation and algorithm optimization on the model’s performance. Additionally, Table 6 presents the percentage improvement in metrics when comparing the optimized YOLOv5s6-Yarn Hairiness model with augmentation to the YOLOv5s6 Default model with augmentation.
The experimental results show that the proposed algorithm improves the model’s mAP@0.5 by 5–6% and mAP@0.5:0.95 by 11–12% compared to the YOLOv5s6 Default model. This approach has led to improvements in all analyzed metrics for yarn hairiness detection.
To ensure that the model adequately responds to the real-time performance requirements of industrial applications, we evaluated the model’s latency, throughput, and required computational resources. The results are presented in Table 7.
The results presented in Table 7 demonstrate that the model is capable of operating in real time, with an average latency of 35 ms per frame and a throughput of 28.6 FPS, confirming its suitability for integration in industrial environments.
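As an indication of how figures of this kind can be reproduced, the sketch below times repeated forward passes of the model on the GPU; the warm-up count, iteration count, and random input tensor are illustrative choices, not the exact benchmark protocol used for Table 7.

```python
# Sketch: measuring per-frame latency and throughput of a YOLOv5s6 model
# on the GPU; counts and the random input are illustrative.
import time
import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s6').cuda().eval()
frame = torch.rand(1, 3, 1280, 1280).cuda()   # stand-in for a yarn frame

with torch.no_grad():
    for _ in range(10):                       # warm-up passes
        model(frame)
    torch.cuda.synchronize()
    start = time.perf_counter()
    n = 100
    for _ in range(n):
        model(frame)
    torch.cuda.synchronize()                  # wait for queued GPU work
    elapsed = time.perf_counter() - start

print(f'latency: {1000 * elapsed / n:.1f} ms/frame; '
      f'throughput: {n / elapsed:.1f} FPS')
```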

4.6. Visualizing Model Enhancements: Comparative Analysis with Heatmaps and Metrics

An experimental study was conducted to investigate the impact of integrating the advanced modules—C2f, Bot-Transformer, and GeLU—into the YOLOv5 model. The goal was to demonstrate how these enhancements affect accuracy and efficiency. Heatmaps visually compare the areas of focus before and after module integration, while the metrics presented in the following subsections quantify the improvements. This approach highlights both the qualitative and quantitative benefits, offering insights into each module’s contribution to the overall model performance.

4.6.1. Heatmap Qualitative Analysis

In this section, we present a detailed analysis of heatmap figures to visualize the impact of integrating the various modules into the YOLOv5s6 model. This qualitative analysis was divided into the combinations shown in Figure 14.

4.6.2. Comparative Analysis

From this analysis and Figure 14a–j, the following specific comments are listed:
  • Original YOLOv5s6 Standard: the heatmaps show limited focus areas with low accuracy in detecting loop and protrusion fibers.
  • YOLOv5s6 + C2f (backbone): adding the C2f block improved the detection accuracy, especially for subtle patterns like protrusion fibers.
  • YOLOv5s6 + Bot-Transformer: the integration of the Bot-Transformer module enhanced the focus on the relevant image parts, improving loop fiber detection.
  • YOLOv5s6 + GeLU: the GeLU activation function increased the sensitivity to subtle texture variations, improving the accuracy in complex, noisy backgrounds.
  • YOLOv5s6 + C2f + Bot-Transformer: this combination significantly improved focus areas, providing more detailed and accurate detection in complex image parts.
  • YOLOv5s6 + C2f + GeLU: enhanced fine detail detection and sensitivity to texture variations, leading to a better overall performance.
  • YOLOv5s6 + Bot-Transformer + GeLU: refined focus areas, resulting in greater accuracy and the better handling of complex scenarios, such as protrusion fibers.
  • YOLOv5s6 + C2f + Bot-Transformer + GeLU: a comprehensive performance improvement in the focus, accuracy, and sensitivity to detecting loop and protrusion fibers.
  • YOLOv5s6 + Hyperparameters: hyperparameter tuning improved focus areas and detection accuracy, reducing false positives in noisy backgrounds, though some defects were only detected in models with previous combinations.
  • YOLOv5s6 Improved Hairiness: the final model shows significantly enhanced detection, identifying seven hairiness defects compared to only one in the original model.

4.7. Quantitative Metrics Comparison

Based on all the mentioned combinations, a quantitative comparison of performance metrics—including mAP_0.5:0.95, mAP_0.5, F1-Score, Recall, Precision, and Accuracy—is presented in Table 8 and Table 9, which also show the percentage increase in performance metrics for all combinations and the final algorithm relative to the YOLOv5s6 Default model with augmentation.

4.7.1. Comparative Analysis

From the analysis of Table 8 and Table 9, the following specific comments are listed:
  • YOLOv5s6 + C2f: a slight improvement in mAP@0.5:0.95 but reductions in mAP@0.5, the F1-Score, Recall, and Precision. The increased accuracy suggests better generalization, but the overall improvement is insufficient.
  • YOLOv5s6 + Bot-Transformer: improved Recall and F1-Score, indicating better true positive recovery, but slight decreases in mAP@0.5 and Precision suggest false positives. The reduced accuracy indicates a need for optimization.
  • YOLOv5s6 + GeLU: enhanced mAP@0.5:0.95 and Recall but decreased Precision, improving sensitivity to variations but potentially increasing false positives.
  • YOLOv5s6 + C2f + Bot-Transformer: improved F1-Score and Precision, but the decreased Accuracy highlights the need to balance precision and recall.
  • YOLOv5s6 + C2f + GeLU: significantly improved Recall but decreased Precision; the increased Accuracy suggests better generalization.
  • YOLOv5s6 + Bot-Transformer + GeLU: enhanced Recall and F1-Score but decreased Precision, indicating a need for further optimization.
  • YOLOv5s6 + Bot-Transformer + GeLU + C2f: improvements in all metrics, especially Precision and Accuracy, suggesting a robust performance but still room for enhancing Recall.
  • YOLOv5s6 + Hyperparameters: a significant improvement across all metrics, particularly mAP@0.5:0.95 and Recall, with a slight reduction in Precision indicating room for a better balance.
  • YOLOv5s6 Improved Hairiness: the final model shows significant improvements in all metrics compared to the standard YOLOv5s6, validating the effectiveness of the module combinations and optimizations.

4.7.2. Conclusions

The combination of the C2f, Bot-Transformer, and GeLU modules, along with hyperparameter optimization, effectively demonstrates the improved performance of the YOLOv5s6 model in detecting hairiness in yarn. Each module contributed to different aspects of the performance, while hyperparameter optimization maximized these improvements. The final model, YOLOv5s6 Improved Hairiness, shows an enhanced capability in detecting and classifying loop and protruding fibers.

4.8. K-Fold Cross Validation

K-fold cross-validation is a robust technique for model evaluation and hyperparameter tuning, providing a more accurate assessment of the model’s performance on unseen data. However, it is computationally intensive, as it involves training and evaluating the model multiple times [65,66,67]. This technique helps identify and mitigate overfitting, ensuring good generalization to new data not seen during training [63,64]. The results of a k-fold cross-validation with k = 10 for the improved algorithm are presented in Table 10.
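A minimal sketch of how the 10 folds can be generated and iterated is given below, using scikit-learn's KFold; the directory layout and the fold-specific training step are hypothetical placeholders.

```python
# Sketch: generating the 10 cross-validation folds over the image set.
# 'dataset/images' and the per-fold training step are hypothetical.
from pathlib import Path
from sklearn.model_selection import KFold

images = sorted(Path('dataset/images').glob('*.jpg'))
kfold = KFold(n_splits=10, shuffle=True, random_state=0)

for fold, (train_idx, val_idx) in enumerate(kfold.split(images)):
    train_set = [images[i] for i in train_idx]
    val_set = [images[i] for i in val_idx]
    # Write the two lists to fold-specific dataset YAMLs, retrain
    # YOLOv5s6-Yarn Hairiness on train_set, evaluate on val_set, and
    # record mAP, precision, recall, and accuracy as in Table 10.
    print(f'fold {fold}: {len(train_set)} train / {len(val_set)} val images')
```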

4.8.1. Comparative Analysis

Based on the values presented in the k-fold cross-validation for loop and protruding fiber detection, the following specific comments are listed:
(1) mAP_0.5 and mAP_0.5:0.95
  • The average values range from approximately 0.43 to 0.79, indicating the model’s strong ability to accurately detect loop and protruding fibers in various scenarios.
(2) Precision
  • The average precision varies between approximately 0.69 and 0.74, showing that the model makes a high number of correct detections. However, there is room for improvement, especially regarding false positives.
(3) Recall
  • The average recall ranges from approximately 0.73 to 0.77, demonstrating the good recovery of loop and protruding fibers. This is consistent with the goal of minimizing false negatives.
(4) Accuracy
  • The model shows consistent accuracy levels across the ten folds, ranging from approximately 0.4800 to 0.5200, suggesting a stable performance across different data partitions. The average accuracy is about 0.5064, indicating that the model correctly classifies instances approximately 50.64% of the time.
(5) Variation in Results
  • Metric values vary slightly between folds, which is expected due to the nature of the method. This variation underscores the importance of k-fold cross-validation in assessing model robustness and generalization.

4.8.2. Overfitting Analysis

The metric values vary slightly between the different folds of the cross-validation, which is expected due to the nature of the method. The average and standard deviation of each metric across the 10 folds, as presented in Table 10, are as follows:
  • mAP_0.5: average: 0.775360; standard deviation: 0.014710.
  • mAP_0.5:0.95: average: 0.454210; standard deviation: 0.007980.
  • Precision: average: 0.714740; standard deviation: 0.015290.
  • Recall: average: 0.748560; standard deviation: 0.013340.
  • Accuracy: average: 0.505190; standard deviation: 0.009970.
From this analysis, the following specific comments are listed:
  • Consistency of Metrics: The average metrics for mAP_0.5, mAP_0.5:0.95, precision, recall, and accuracy are consistent across the folds, indicating a stable model performance with low standard deviations. This suggests good generalization without overfitting.
  • Comparison between Metrics: mAP_0.5:0.95 is lower than mAP_0.5, as expected. The balanced precision and recall indicate an effective trade-off between avoiding false positives and capturing true positives.
  • Overfitting Analysis: a significant difference between the training and validation metrics would indicate overfitting. The small variation between folds suggests that the model is not overfitting, showing a consistent performance across data subsets.
  • Conclusion: The k-fold cross-validation analysis shows no clear evidence of overfitting. The YOLOv5s6-Hairiness model generalizes well to the validation data, maintaining a good balance between precision and recall. In summary, the results obtained in the k-fold cross-validation are encouraging, with the approach showing good performance in the detection of loop and protruding fibers.

5. Conclusions and Future Work

Considering the complexity of yarn hairiness detection and characterization, a novel approach based on an optimized algorithm, YOLOv5s6-Yarn Hairiness, was proposed. First, the C2f function was introduced into the backbone structure before the “SPPF” block. This insertion allowed the model to capture important contextual and spatial information at different scale levels, enabling the detection network to extract higher-level features and reduce interference from non-essential feature information. Second, by adding the Bot-Transformer module in the Neck, which incorporates the self-attention mechanism, the network learned more complex and contextual relationships between different parts of the images. This led to richer and more discriminative feature representations, beneficial for detecting fine and subtle details and improving the detection of overlapping objects. Third, the GeLU activation function was adopted, providing a more complex non-linear representation than SiLU. This was advantageous for capturing more complex patterns and high-resolution image information in the input data. Finally, hyperparameters such as anchor_t and scale were adjusted to improve the performance on the dataset, as the input images had a very high resolution of 1280 × 1280 pixels; this required adapting the detection process for loop fibers and protruding fibers, which have smaller dimensions. Through the experiments, it was possible to conclude that the proposed YOLOv5s6-Yarn Hairiness algorithm improved the mAP@0.5:0.95 by 11.55%, the mAP@0.5 by 5.43%, and the recall by 7.36% compared to the standard YOLOv5s6 algorithm. Additionally, the algorithm demonstrated a 3.71% increase in the F1-score, a 1.11% increase in accuracy, and a minor improvement of 0.39% in precision.
These results are very promising and demonstrate a good performance. In summary, the proposed optimized method showed better object detection accuracy than the original YOLOv5, making it a more efficient and high-performing neural network for the detection of yarn hairiness. The enhanced YOLOv5s6-Yarn Hairiness structure exhibited a good performance in the detection of protruding fibers and, most importantly, showed a very good performance in the detection of loop fibers, which is a highly complex class due to its irregularity and spatial distribution. The complexity of loop fiber detection arises from several critical factors. First, their irregular shape, defined by loop-like structures with varying sizes and orientations, poses significant challenges for consistent detection. Second, their unpredictable arrangement along the surface of the yarn core adds another layer of difficulty. This non-uniform arrangement affects both loop fibers and protruding fibers, as their positioning on the yarn surface can obscure their visibility in images. Finally, loop fibers often blend into the yarn structure, overlapping and intertwining with other fibers, including protruding fibers, further complicating their detection.
As future work, there is potential to further improve the enhancement methods for detecting loop fibers, aiming to reduce noise and false detections in this class. It would be important to investigate the reasons for false negatives in more detail and to seek model improvements that reduce such errors in loop fiber detection. One of the next steps in this work will be to expand the dataset to include a greater diversity of yarn types, textures, and environmental conditions, which will further validate the generalization of the model. In addition, exploring more suitable and novel blocks to improve the network structure and the accuracy of loop fiber detection should be considered.
Although the focus of this work was on optimizing YOLOv5s6 due to its balance between speed and accuracy, it will be important in future work to compare the model in detail with other state-of-the-art models, such as EfficientDet and RetinaNet, to evaluate different trade-offs between accuracy and computational cost, enabling a more comprehensive understanding of the system’s performance and applicability in diverse industrial scenarios. Another avenue of future work is the integration of a two-step detection system, with the initial identification of “hairiness” followed by the classification of loop and protruding fibers. This approach can improve the accuracy and optimize computational resources for more complex detections.
As previously mentioned, detecting loop fibers is a complex task and incorrect detection can negatively impact the metrics and compromise the performance of the neural network, making it a crucial area for improvement.

Author Contributions

F.P., H.L., L.P., R.V., F.S., J.M., V.C.: conceptualization, methodology, writing—original draft, writing—review and editing. R.V., F.S., J.M., V.C.: funding acquisition, project administration, supervision, validation. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by FCT—Fundação para a Ciência e Tecnologia (Portugal), which partially supported this work through the R&D Units Project Scope: UIDB/05549/2020, UIDP/05549/2020, UIDB/04077/2020, and UIDB/00319/2020.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available from the University of Minho—Engineering School—but restrictions apply to the availability of these data, which were used under license for the current study and so are not publicly available. Data are, however, available from the authors upon reasonable request and with permission from the University of Minho—Engineering School.

Acknowledgments

The authors are grateful to Engineer Joaquim Jorge, from the Textile Engineering Department at the University of Minho, for all the support and availability provided during this project.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Araújo, M.; Melo, E.M.C. Manual de Engenharia Têxtil; Fundação Calouste Gulbenkian: Lisboa, Portugal, 1987; Volume II. [Google Scholar]
  2. Kakde, P.C.M.; Patil, C. Minimization of Defects in Knitted Fabric. Int. J. Text. Eng. Process. 2016, 2, 13–18. [Google Scholar]
  3. Lord, P.R. 11—Quality and quality control. In Handbook of Yarn Production; Lord, P.R., Ed.; Woodhead Publishing Series in Textiles; Woodhead Publishing: Sawston, UK, 2003; pp. 276–300. [Google Scholar] [CrossRef]
  4. Carvalho, V.H.; Cardoso, P.J.; Belsley, M.S.; Vasconcelos, R.M.; Soares, F.O. Yarn Hairiness Characterization Using Two Orthogonal Directions. IEEE Trans. Instrum. Meas. 2009, 58, 594–601. [Google Scholar] [CrossRef]
  5. Pinto, R.; Pereira, F.; Carvalho, V.; Soares, F.; Vasconcelos, R. Yarn linear mass determination using image processing: First insights. In Proceedings of the IECON 2019—45th Annual Conference of the IEEE Industrial Electronics Society, Lisboa, Portugal, 14–17 October 2019; pp. 198–203. [Google Scholar] [CrossRef]
  6. Xu, B.G. 1—Digital technology for yarn structure and appearance analysis. In Computer Technology for Textiles and Apparel; Hu, J., Ed.; Woodhead Publishing Series in Textiles; Woodhead Publishing: Sawston, UK, 2011; pp. 3–22. [Google Scholar] [CrossRef]
  7. Tyagi, G.K. 5—Yarn structure and properties from different spinning techniques. In Advances in Yarn Spinning Technology; Lawrence, C.A., Ed.; Woodhead Publishing Series in Textiles; Woodhead Publishing: Sawston, UK, 2010; pp. 119–154. [Google Scholar] [CrossRef]
  8. Wang, X.-H.; Wang, J.-Y.; Zhang, J.-L.; Liang, H.-W.; Kou, P.-M. Study on the detection of yarn hairiness morphology based on image processing technique. In Proceedings of the 2010 International Conference on Machine Learning and Cybernetics, Qingdao, China, 11–14 July 2010; pp. 2332–2336. [Google Scholar] [CrossRef]
  9. Wang, L.; Xu, B.; Gao, W. 3D measurement of yarn hairiness via multi-perspective images. In Proceedings of Optics, Photonics, and Digital Technologies for Imaging Applications V, SPIE Photonics Europe, Strasbourg, France, 22–26 April 2018; SPIE: Bellingham, WA, USA, 2018; pp. 292–309. [Google Scholar] [CrossRef]
  10. Sun, Y.; Li, Z.; Pan, R.; Zhou, J.; Gao, W. Measurement of long yarn hair based on hairiness segmentation and hairiness tracking. J. Text. Inst. 2017, 108, 1271–1279. [Google Scholar] [CrossRef]
  11. El Mogahzy, Y.E. 9—Structure and types of yarn for textile product design. In Engineering Textiles; El Mogahzy, Y.E., Ed.; Woodhead Publishing Series in Textiles; Woodhead Publishing: Sawston, UK, 2009; pp. 240–270. [Google Scholar] [CrossRef]
  12. Krupincová, G.; Meloun, M. Yarn hairiness versus quality of yarn. J. Text. Inst. 2013, 104, 1312–1319. [Google Scholar] [CrossRef]
  13. Kiron, M.I. Spin Finish in Textile. Textile Learner. Available online: https://textilelearner.net/spin-finish-in-textile/ (accessed on 23 July 2023).
  14. Busilienė, G.; Lekeckas, K.; Urbelis, V. Pilling Resistance of Knitted Fabrics. Mater. Sci. 2011, 17, 297–301. [Google Scholar] [CrossRef]
  15. Pereira, F.; Carvalho, V.; Soares, F.; Vasconcelos, R.; Machado, J. 6—Computer vision techniques for detecting yarn defects. In Applications of Computer Vision in Fashion and Textiles; Wong, W.K., Ed.; The Textile Institute Book Series; Woodhead Publishing: Sawston, UK, 2018; pp. 123–145. [Google Scholar] [CrossRef]
  16. Carvalho, V.; Soares, F.; Belsley, M.; Vasconcelos, R.M. Automatic yarn characterization system. In Proceedings of the 2008 IEEE SENSORS, Lecce, Italy, 26–29 October 2008; pp. 780–783. [Google Scholar] [CrossRef]
  17. Pereira, F.; Carvalho, V.; Vasconcelos, R.; Soares, F. A Review in the Use of Artificial Intelligence in Textile Industry. In Innovations in Mechatronics Engineering; Machado, J., Soares, F., Trojanowska, J., Yildirim, S., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 377–392. [Google Scholar] [CrossRef]
  18. Pereira, F.; Macedo, A.; Pinto, L.; Soares, F.; Vasconcelos, R.; Machado, J.; Carvalho, V. Intelligent Computer Vision System for Analysis and Characterization of Yarn Quality. Electronics 2023, 12, 236. [Google Scholar] [CrossRef]
  19. Pereira, F.; Oliveira, E.L.; Ferreira, G.G.; Sousa, F.; Caldas, P. Textile Yarn Winding and Unwinding System. In Innovations in Mechanical Engineering; Machado, J., Soares, F., Trojanowska, J., Ottaviano, E., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 347–358. [Google Scholar] [CrossRef]
  20. Caldas, P.; Sousa, F.; Pereira, F.; Lopes, H.; Machado, J. Automatic system for yarn quality analysis by image processing. J. Braz. Soc. Mech. Sci. Eng. 2022, 44, 565. [Google Scholar] [CrossRef]
  21. GitHub—Ultralytics/Yolov5: YOLOv5 in PyTorch > ONNX > CoreML > TFLite. Available online: https://github.com/ultralytics/yolov5 (accessed on 6 August 2023).
  22. Chen, S.; Tang, M.; Kan, J. Predicting Depth from Single RGB Images with Pyramidal Three-Streamed Networks. Sensors 2019, 19, 667. [Google Scholar] [CrossRef]
  23. Jiang, B.; Song, H.; He, D. Lameness detection of dairy cows based on a double normal background statistical model. Comput. Electron. Agric. 2019, 158, 140–149. [Google Scholar] [CrossRef]
  24. Li, Z.; Fan, B.; Xu, Y.; Sun, R. Improved YOLOv5 for Aerial Images Based on Attention Mechanism. IEEE Access 2023, 11, 96235–96241. [Google Scholar] [CrossRef]
  25. Tan, S.; Lu, G.; Jiang, Z.; Huang, L. Improved YOLOv5 Network Model and Application in Safety Helmet Detection. In Proceedings of the 2021 IEEE International Conference on Intelligence and Safety for Robotics (ISR), Tokoname, Japan, 4–6 March 2021; pp. 330–333. [Google Scholar] [CrossRef]
  26. Liu, Z.; Gao, X.; Wan, Y.; Wang, J.; Lyu, H. An Improved YOLOv5 Method for Small Object Detection in UAV Capture Scenes. IEEE Access 2023, 11, 14365–14374. [Google Scholar] [CrossRef]
  27. Guo, Y.; Zhang, M. Blood Cell Detection Method Based on Improved YOLOv5. IEEE Access 2023, 11, 67987–67995. [Google Scholar] [CrossRef]
  28. Li, S.; Li, Y.; Li, Y.; Li, M.; Xu, X. YOLO-FIRI: Improved YOLOv5 for Infrared Image Object Detection. IEEE Access 2021, 9, 141861–141875. [Google Scholar] [CrossRef]
  29. Li, Y.; Cheng, R.; Zhang, C.; Chen, M.; Ma, J.; Shi, X. Sign language letters recognition model based on improved YOLOv5. In Proceedings of the 2022 9th International Conference on Digital Home (ICDH), Guangzhou, China, 28–30 October 2022; pp. 188–193. [Google Scholar] [CrossRef]
  30. Pagare, S.; Kumar, R. Object Detection Algorithms Compression CNN, YOLO and SSD. Int. J. Comput. Appl. 2023, 185, 34–38. [Google Scholar] [CrossRef]
  31. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. Presented at the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. Available online: https://www.cv-foundation.org/openaccess/content_cvpr_2016/html/Redmon_You_Only_Look_CVPR_2016_paper.html (accessed on 6 July 2024).
  32. Redmon, J.; Farhadi, A. YOLO9000: Better, Faster, Stronger. Presented at the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7263–7271. Available online: https://openaccess.thecvf.com/content_cvpr_2017/html/Redmon_YOLO9000_Better_Faster_CVPR_2017_paper.html (accessed on 6 July 2024).
  33. Bochkovskiy, A.; Wang, C.-Y.; Liao, H.-Y.M. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020. [Google Scholar] [CrossRef]
  34. Gašparović, B.; Mauša, G.; Rukavina, J.; Lerga, J. Evaluating YOLOv5, YOLOv6, YOLOv7, and YOLOv8 in Underwater Environment: Is There Real Improvement? In Proceedings of the 2023 8th International Conference on Smart and Sustainable Technologies (SpliTech), Split/Bol, Croatia, 20–23 June 2023; pp. 1–4. [Google Scholar] [CrossRef]
  35. Wu, T.; Dong, Y. YOLO-SE: Improved YOLOv8 for Remote Sensing Object Detection and Recognition. Appl. Sci. 2023, 13, 12977. [Google Scholar] [CrossRef]
  36. Sun, J.; Jia, J.; Tang, C.-K.; Shum, H.-Y. Poisson matting. In ACM SIGGRAPH 2004 Papers, Proceedings of SIGGRAPH’04, Los Angeles, CA, USA, 8–12 August 2004; Association for Computing Machinery: New York, NY, USA, 2004; pp. 315–321. [Google Scholar] [CrossRef]
  37. El-Geiheini, A.; ElKateb, S.; Abd-Elhamied, M.R. Yarn Tensile Properties Modeling Using Artificial Intelligence. Alex. Eng. J. 2020, 59, 4435–4440. [Google Scholar] [CrossRef]
  38. Abd-Elhamied, M.R.; Hashima, W.A.; ElKateb, S.; Elhawary, I.; El-Geiheini, A. Prediction of Cotton Yarn’s Characteristics by Image Processing and ANN. Alex. Eng. J. 2022, 61, 3335–3340. [Google Scholar] [CrossRef]
  39. Li, Z.; Zhong, P.; Tang, X.; Chen, Y.; Su, S.; Zhai, T. A New Method to Evaluate Yarn Appearance Qualities Based on Machine Vision and Image Processing. IEEE Access 2020, 8, 30928–30937. [Google Scholar] [CrossRef]
  40. Deng, Z.; Yu, L.; Wang, L.; Ke, W. An algorithm for cross-fiber separation in yarn hairiness image processing. Vis. Comput. 2024, 40, 3591–3599. [Google Scholar] [CrossRef]
  41. Haleem, N.; Bustreo, M.; Del Bue, A. A computer vision based online quality control system for textile yarns. Comput. Ind. 2021, 133, 103550. [Google Scholar] [CrossRef]
  42. Lu, W.; Yang, M. Face Detection Based on Viola-Jones Algorithm Applying Composite Features. In Proceedings of the 2019 International Conference on Robots & Intelligent System (ICRIS), Haikou, China, 15–16 June 2019; pp. 82–85. [Google Scholar] [CrossRef]
  43. Moré, J.J. The Levenberg-Marquardt algorithm: Implementation and theory. In Numerical Analysis; Watson, G.A., Ed.; Springer: Berlin/Heidelberg, Germany, 1978; pp. 105–116. [Google Scholar] [CrossRef]
  44. Casas, E.; Ramos, L.; Bendek, E.; Rivas-Echeverría, F. Assessing the Effectiveness of YOLO Architectures for Smoke and Wildfire Detection. IEEE Access 2023, 11, 96554–96583. [Google Scholar] [CrossRef]
  45. Guo, P.; Meng, W.; Xu, M.; Li, V.C.; Bao, Y. Predicting Mechanical Properties of High-Performance Fiber-Reinforced Cementitious Composites by Integrating Micromechanics and Machine Learning. Materials 2021, 14, 3143. [Google Scholar] [CrossRef]
  46. Ghavami, N.; Hu, Y.; Gibson, E.; Bonmati, E.; Emberton, M.; Moore, C.M.; Barratt, D.C. Automatic segmentation of prostate MRI using convolutional neural networks: Investigating the impact of network architecture on the accuracy of volume measurement and MRI-ultrasound registration. Med. Image Anal. 2019, 58, 101558. [Google Scholar] [CrossRef] [PubMed]
  47. Niu, D.; Liang, Y.; Wang, H.; Wang, M.; Hong, W.-C. Icing Forecasting of Transmission Lines with a Modified Back Propagation Neural Network-Support Vector Machine-Extreme Learning Machine with Kernel (BPNN-SVM-KELM) Based on the Variance-Covariance Weight Determination Method. Energies 2017, 10, 1196. [Google Scholar] [CrossRef]
  48. Srinivas, A.; Lin, T.-Y.; Parmar, N.; Shlens, J.; Abbeel, P.; Vaswani, A. Bottleneck Transformers for Visual Recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 16519–16529. Available online: https://openaccess.thecvf.com/content/CVPR2021/html/Srinivas_Bottleneck_Transformers_for_Visual_Recognition_CVPR_2021_paper.html (accessed on 6 July 2024).
  49. Hu, H.; Zhu, Z. Sim-YOLOv5s: A method for detecting defects on the end face of lithium battery steel shells. Adv. Eng. Inform. 2023, 55, 101824. [Google Scholar] [CrossRef]
  50. Roy, A.M.; Bhaduri, J. DenseSPH-YOLOv5: An automated damage detection model based on DenseNet and Swin-Transformer prediction head-enabled YOLOv5 with attention mechanism. Adv. Eng. Inform. 2023, 56, 102007. [Google Scholar] [CrossRef]
  51. Hendrycks, D.; Gimpel, K. Gaussian Error Linear Units (GELUs). arXiv 2023. [Google Scholar] [CrossRef]
  52. Yu, G.; Zhou, X. An Improved YOLOv5 Crack Detection Method Combined with a Bottleneck Transformer. Mathematics 2023, 11, 2377. [Google Scholar] [CrossRef]
  53. Huang, Y.; Fan, J.; Hu, Y.; Guo, J.; Zhu, Y. TBi-YOLOv5: A surface defect detection model for crane wire with Bottleneck Transformer and small target detection layer. Proc. Inst. Mech. Eng. Part C J. Mech. Eng. Sci. 2024, 238, 2425–2438. [Google Scholar] [CrossRef]
  54. Liu, J.; Qiao, W.; Xiong, Z. OAB-YOLOv5: One-Anchor-Based YOLOv5 for Rotated Object Detection in Remote Sensing Images. J. Sens. 2022, 2022, 8515510. [Google Scholar] [CrossRef]
  55. Isa, I.S.; Rosli, M.S.A.; Yusof, U.K.; Maruzuki, M.I.F.; Sulaiman, S.N. Optimizing the Hyperparameter Tuning of YOLOv5 for Underwater Detection. IEEE Access 2022, 10, 52818–52831. [Google Scholar] [CrossRef]
  56. Van, H.-P.-T.; Hoang, V.-D. Insulator Detection in Intelligent Monitoring Based on Yolo Family and Customizing Hyperparameters. J. Tech. Educ. Sci. 2023, 18, 69–77. [Google Scholar] [CrossRef]
  57. Pereira, F.; Pinto, L.; Machado, J.; Soares, F.; Vasconcelos, R.; Carvalho, V. Yarn Hairiness—Loop & Protruding Fibers Dataset; Mendeley Data: London, UK, 2023. [Google Scholar] [CrossRef]
  58. Pereira, F.; Pinto, L.; Soares, F.; Vasconcelos, R.; Machado, J.; Carvalho, V. Online yarn hairiness—Loop & protruding fibers dataset. Data Brief. 2024, 54, 110355. [Google Scholar] [CrossRef] [PubMed]
  59. Roboflow: Computer Vision Tools for Developers and Enterprises. Available online: https://roboflow.com/ (accessed on 6 July 2024).
  60. Labeling with LabelMe: Step-by-Step Guide [Alternatives + Datasets]. Available online: https://www.v7labs.com/blog/labelme-guide/ (accessed on 6 July 2024).
  61. Mullen, J.F.; Tanner, F.R.; Sallee, P.A. Comparing the Effects of Annotation Type on Machine Learning Detection Performance. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA, 16–17 June 2019; pp. 855–861. [Google Scholar] [CrossRef]
  62. Lin, G.; Liu, K.; Xia, X.; Yan, R. An Efficient and Intelligent Detection Method for Fabric Defects based on Improved YOLOv5. Sensors 2023, 23, 97. [Google Scholar] [CrossRef]
  63. Li, L.; Spratling, M. Understanding and combating robust overfitting via input loss landscape analysis and regularization. Pattern Recognit. 2023, 136, 109229. [Google Scholar] [CrossRef]
  64. Li, H.; Rajbahadur, G.K.; Lin, D.; Bezemer, C.-P.; Jiang, Z.M. Keeping Deep Learning Models in Check: A History-Based Approach to Mitigate Overfitting. IEEE Access 2024, 12, 70676–70689. [Google Scholar] [CrossRef]
  65. Uddin, S.; Lu, H.; Rahman, A.; Gao, J. A novel approach for assessing fairness in deployed machine learning algorithms. Sci. Rep. 2024, 14, 17753. [Google Scholar] [CrossRef]
  66. Hassan, A.; Gulzar Ahmad, S.; Ullah Munir, E.; Ali Khan, I.; Ramzan, N. Predictive modelling and identification of key risk factors for stroke using machine learning. Sci. Rep. 2024, 14, 11498. [Google Scholar] [CrossRef]
  67. Aljalal, M.; Aldosari, S.A.; Molinas, M.; Alturki, F.A. Selecting EEG channels and features using multi-objective optimization for accurate MCI detection: Validation using leave-one-subject-out strategy. Sci. Rep. 2024, 14, 12483. [Google Scholar] [CrossRef]
Figure 1. Type of hairiness in yarn (loop fibers and protruding fibers) (cotton yarn with an average linear mass of 56.4 tex, with a magnification factor of 22×).
Figure 2. Problems in textile fabrics caused by yarn hairiness [13].
Figure 3. Diagram illustrating the progression of the research as described in the article.
Figure 4. Network structure of YOLOv5s6 ((A): the complete structure with backbone, neck, and head modules; (B,C): two distinct variations of CSP blocks (C3); (D): CBS—convolution, batch normalization, and SiLU activation function; (E): other blocks using CBS; (F): two distinct bottleneck blocks) [45,46,47].
Figure 5. Improved architecture of the YOLOv5s6-Yarn Hairiness algorithm ((A): the complete structure with backbone, neck, and head modules; (B): the CBG module in the optimized YOLOv5s6-Yarn Hairiness (C3); (C): the CBG module used in other blocks; (D): the CBG module used in two other types of bottleneck blocks; (E): two distinct types of CSP blocks (C3) [45,46,47]; (F): the C2f module; (G): the architecture of the Bot-transformer block, where (a) BottleneckTransformer×x means that x BottleneckTransformer blocks are stacked, each as shown in (b)) [48,49,50].
Figure 6. Multi-Head Self-Attention (MHSA) Layer used in the Bot-transformer Block [48,49,50].
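As a complement to Figure 6, the following is a simplified sketch of multi-head self-attention applied to a CNN feature map. It relies on PyTorch's built-in nn.MultiheadAttention and omits the 2D relative position encodings used in the original BoTNet MHSA [48], so it is an approximation for exposition only.

```python
# Simplified multi-head self-attention over a feature map: one token per
# spatial position, so every position attends to all others. The 2D
# relative position encodings of the original BoTNet MHSA are omitted.
import torch
import torch.nn as nn

class MHSA2d(nn.Module):
    def __init__(self, channels, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x):                    # x: (B, C, H, W)
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)   # (B, H*W, C): one token per pixel
        out, _ = self.attn(seq, seq, seq)    # global pairwise interactions
        return out.transpose(1, 2).reshape(b, c, h, w)

x = torch.randn(1, 256, 20, 20)
print(MHSA2d(256)(x).shape)  # torch.Size([1, 256, 20, 20])
```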
Figure 7. Prototype mechanical system with camera [18].
Figure 8. Annotations of a yarn image made with LabelMe (loop fibers in green and protruding fibers in red) [18,58].
Figure 9. Confusion Matrix for the proposed object detection model.
Figure 10. Performance metrics for the YOLOv5s6-Hairiness approach.
Figure 11. Test results of the proposed optimized YOLOv5s6-Yarn Hairiness (right image) and the default YOLOv5s6 (left image) at a confidence threshold of 0.2 (cotton yarn with an average linear mass of 56.4 tex, with a magnification factor of 22×).
Figure 12. Test results of the proposed optimized YOLOv5s6-Yarn Hairiness (right image) and the original YOLOv5s6 (left image) at a confidence threshold of 0.3 (cotton yarn with an average linear mass of 56.4 tex, with a magnification factor of 22×).
Figure 13. Test results of the proposed optimized YOLOv5s6-Yarn Hairiness (right image) and the original YOLOv5s6 (left image) at a confidence threshold of 0.5 (cotton yarn with an average linear mass of 56.4 tex, with a magnification factor of 22×).
Figure 14. Heatmaps for the default YOLOv5s6 model and for YOLOv5s6-Hairiness with all the combinations: YOLOv5s6 default (a); YOLOv5s6 + C2f (backbone) (b); YOLOv5s6 + Bot-transformer (c); YOLOv5s6 + GeLu (d); YOLOv5s6 + C2f + Bot-transformer (e); YOLOv5s6 + C2f + GeLu (f); YOLOv5s6 + Bot-transformer + GeLu (g); YOLOv5s6 + Bot-transformer + GeLu + C2f (h); YOLOv5s6 + Hyper-parameters (i); YOLOv5s6 Hairiness Improved (j).
Table 1. Features comparison between yarn quality analysis systems.
Feature | A [39] | B [41] | C [37] | D [38] | E [40]
Mechatronic prototype developed? | ----
Non-destructive prototype? | -----
Yarn winding and unwinding system? | -----
Image or video analysis in yarn? | Image | Image | Image | Image | Image
Use of Vision System (VS) or Artificial Intelligence (AI) to detect defects in textile fabric or yarn | VS | VS + AI | VS + AI | VS + AI | VS
Yarn twist orientation | -----
Yarn twist step | -----
Thick places | ---
Thin places | ---
Neps | --
Yarn Diameter | ----
Linear mass | ----
Volume | -----
Number of cables | -----
Number of loose fibers | -----
Mean deviation of mass U (%) | -----
Coefficient of variation mass CV (%) | ---
Table 2. Classification of the dataset.
Dataset | Classes | Number of Annotations
Training | Loop fibers | 4667
Training | Protruding fibers | 3035
Validation | Loop fibers | 1390
Validation | Protruding fibers | 895
Test | Loop fibers | 676
Test | Protruding fibers | 374
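A quick arithmetic check of the Table 2 annotation counts confirms an approximate 70/20/10 split across training, validation, and test:

```python
# Per-split totals and proportions implied by the Table 2 counts.
counts = {
    "Training":   {"loop": 4667, "protruding": 3035},
    "Validation": {"loop": 1390, "protruding": 895},
    "Test":       {"loop": 676,  "protruding": 374},
}
total = sum(sum(split.values()) for split in counts.values())
for name, split in counts.items():
    n = sum(split.values())
    print(f"{name}: {n} annotations ({100 * n / total:.1f}%)")
# Training: 7702 (69.8%), Validation: 2285 (20.7%), Test: 1050 (9.5%)
```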
Table 3. The development environment and software tools.
Hardware, Operating System (OS), and Specific Environment | Specification
OS | Ubuntu 22.04.2 LTS
CPU | Two Intel(R) Xeon(R) CPUs @ 2.00 GHz
GPU | NVIDIA Tesla T4
RAM memory | 16 GB
Framework | PyTorch 1.13.1
CUDA | 12.0
cuDNN | 8700
Python version | 3.7.9
Table 4. Parameter settings of the training configuration.
Parameter | Specification
Image size | 1280 × 1280 pixels
Optimizer | Stochastic gradient descent (SGD)
Learning rate | 0.01
Batch size | 16
Epochs | 100
Training time | 2 h 21 m 39 s
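Under the assumption that training used the public Ultralytics YOLOv5 repository [21], the Table 4 settings map onto a train.py invocation along the following lines; the dataset and hyperparameter file names are placeholders:

```python
# Illustrative launch of YOLOv5 training with the Table 4 configuration.
# SGD with a 0.01 initial learning rate is the repository default; the
# dataset YAML and hyperparameter file names below are assumptions.
import subprocess

subprocess.run([
    "python", "train.py",
    "--img", "1280",                 # 1280 x 1280 input resolution
    "--batch-size", "16",
    "--epochs", "100",
    "--weights", "yolov5s6.pt",      # P6 model pretrained at 1280
    "--data", "yarn_hairiness.yaml", # assumed dataset config
    "--hyp", "hyp.yarn.yaml",        # e.g., adjusted anchor_t and scale
], check=True)
```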
Table 5. Performance metrics comparison between the proposed optimized YOLOv5s6-Hairiness and other models with data augmentation.
Algorithm/Metrics (with AUG) | mAP_0.5:0.95 | mAP_0.5 | F1-Score | Recall | Precision | Accuracy
Default YOLOv5s6 | 0.3394 | 0.6605 | 0.6446 | 0.6536 | 0.6358 | 0.4615
Proposed YOLOv5s6-Yarn Hairiness | 0.3786 | 0.6964 | 0.6685 | 0.7017 | 0.6383 | 0.4667
YOLOv8 | 0.3630 | 0.6480 | 0.6255 | 0.6260 | 0.6250 | 0.4333
YOLOv7 | 0.3271 | 0.6114 | 0.6331 | 0.6841 | 0.5848 | 0.4233
YOLOv5n | 0.3312 | 0.6484 | 0.6315 | 0.6465 | 0.6171 | 0.4415
Fast RCNN | 0.2410 | 0.5220 | 0.3195 | 0.4740 | 0.2410 | 0.4325
Table 6. Performance metrics and percentage increase in the detection of yarn hairiness using the default YOLOv5s6 algorithm and the proposed optimized YOLOv5s6-Hairiness algorithm, both with augmentation.
Algorithm | mAP@0.5:0.95 | mAP@0.5 | F1-Score | Recall | Precision | Accuracy
Default YOLOv5s6 | 0.3394 | 0.6605 | 0.6445 | 0.6536 | 0.6358 | 0.4615
Proposed YOLOv5s6-Yarn Hairiness | 0.3786 | 0.6964 | 0.6685 | 0.7017 | 0.6383 | 0.4666
Metrics Increase (%) | +11.55 | +5.43 | +3.71 | +7.36 | +0.39 | +1.11
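The "Metrics Increase (%)" row is the relative improvement of the proposed model over the baseline, computed as 100 × (proposed − baseline)/baseline; a short check using the table values:

```python
# Relative improvement of the proposed model over the YOLOv5s6 baseline.
baseline = {"mAP_0.5:0.95": 0.3394, "mAP_0.5": 0.6605, "recall": 0.6536}
proposed = {"mAP_0.5:0.95": 0.3786, "mAP_0.5": 0.6964, "recall": 0.7017}
for metric in baseline:
    gain = 100 * (proposed[metric] - baseline[metric]) / baseline[metric]
    print(f"{metric}: +{gain:.2f}%")
# Prints +11.55%, +5.44%, +7.36%; Table 6 reports +5.43 because the
# four-decimal table inputs are themselves rounded.
```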
Table 7. System performance metrics.
Metric | Results
Model latency per frame | 35 ms
Throughput | 28.6 FPS
GPU utilization | 85%
GPU memory required | 8 GB
RAM usage | 6 GB
General hardware requirements | Dedicated GPU with 16 GB VRAM
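Note that the throughput in Table 7 follows directly from the per-frame latency:

```python
# Throughput implied by the Table 7 latency: 1000 ms / 35 ms per frame.
latency_ms = 35
print(f"{1000 / latency_ms:.1f} FPS")  # 28.6 FPS
```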
Table 8. Quantitative comparison of performance metrics.
Algorithm | mAP@0.5:0.95 | mAP@0.5 | F1-Score | Recall | Precision | Accuracy
Default YOLOv5s6 | 0.3394 | 0.6605 | 0.6445 | 0.6536 | 0.6358 | 0.4615
YOLOv5s6 + C2f | 0.3434 | 0.6427 | 0.6411 | 0.6514 | 0.6312 | 0.4622
YOLOv5s6 + Bot-transformer | 0.3395 | 0.6487 | 0.6531 | 0.6768 | 0.6310 | 0.4579
YOLOv5s6 + GeLu | 0.3481 | 0.6528 | 0.6450 | 0.6611 | 0.6297 | 0.4581
YOLOv5s6 + C2f + Bot-transformer | 0.3439 | 0.6521 | 0.6529 | 0.6529 | 0.6529 | 0.4561
YOLOv5s6 + C2f + GeLu | 0.3441 | 0.6586 | 0.6493 | 0.6784 | 0.6226 | 0.4657
YOLOv5s6 + Bot-transformer + GeLu | 0.3413 | 0.6548 | 0.6486 | 0.6880 | 0.6135 | 0.4675
YOLOv5s6 + Bot-transformer + GeLu + C2f | 0.3476 | 0.6741 | 0.6511 | 0.6603 | 0.6421 | 0.4654
YOLOv5s6 + Hyper-parameters | 0.3702 | 0.6773 | 0.6616 | 0.6930 | 0.6329 | 0.4686
Proposed YOLOv5s6-Yarn Hairiness | 0.3786 | 0.6964 | 0.6685 | 0.7017 | 0.6383 | 0.4666
Table 9. Metrics increases in percentage.
Algorithm/Combination | mAP_0.5:0.95 | mAP_0.5 | F1-Score | Recall | Precision | Accuracy
YOLOv5s6 default | N/A | N/A | N/A | N/A | N/A | N/A
YOLOv5s6 + C2f | 1.18 | −2.70 | −0.53 | −0.34 | −0.72 | 0.15
YOLOv5s6 + Bot-transformer | 0.03 | −1.79 | 1.32 | 3.55 | −0.76 | −0.78
YOLOv5s6 + GeLu | 2.56 | −1.17 | 0.07 | 1.15 | −0.96 | −0.73
YOLOv5s6 + C2f + Bot-transformer | 1.33 | −1.27 | 1.29 | −0.11 | 2.69 | −1.17
YOLOv5s6 + C2f + GeLu | 1.39 | −0.29 | 0.73 | 3.79 | −2.08 | 0.91
YOLOv5s6 + Bot-transformer + GeLu | 0.56 | −0.86 | 0.63 | 5.26 | −3.51 | 1.30
YOLOv5s6 + Bot-transformer + GeLu + C2f | 2.42 | 2.06 | 1.01 | 1.03 | 0.99 | 0.85
YOLOv5s6 + Hyper-parameters | 9.08 | 2.54 | 2.64 | 6.03 | −0.46 | 1.54
YOLOv5s6 Hairiness Improved | 11.55 | 5.435 | 3.71 | 7.36 | 0.39 | 1.11
Table 10. K-fold cross-validation with k = 10 for the optimized YOLOv5s6-Yarn Hairiness with augmentation.
K-Fold 10 Cross-Validation (YOLOv5s6-Hairiness) | mAP_0.5 | mAP_0.5:0.95 | Precision | Recall | Accuracy
kfold-10 | 0.7860 | 0.4593 | 0.7040 | 0.7663 | 0.5050
kfold-9 | 0.7794 | 0.4541 | 0.6981 | 0.7496 | 0.5067
kfold-8 | 0.7901 | 0.4589 | 0.7094 | 0.7655 | 0.5067
kfold-7 | 0.7909 | 0.4575 | 0.7327 | 0.7426 | 0.5200
kfold-6 | 0.7939 | 0.4568 | 0.7439 | 0.7264 | 0.5050
kfold-5 | 0.7881 | 0.4477 | 0.7099 | 0.7528 | 0.5100
kfold-4 | 0.7872 | 0.4599 | 0.7200 | 0.7503 | 0.5084
kfold-3 | 0.7507 | 0.4330 | 0.6954 | 0.7289 | 0.4800
kfold-2 | 0.7862 | 0.4581 | 0.7143 | 0.7570 | 0.5084
kfold-1 | 0.7911 | 0.4568 | 0.7197 | 0.7362 | 0.5017
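A common way to summarize Table 10 is the mean and standard deviation of each metric across the ten folds, for example for mAP_0.5:

```python
# Mean and standard deviation of mAP_0.5 across the ten folds of Table 10.
from statistics import mean, stdev

map50 = [0.7860, 0.7794, 0.7901, 0.7909, 0.7939,
         0.7881, 0.7872, 0.7507, 0.7862, 0.7911]  # folds 10 down to 1
print(f"mAP_0.5: {mean(map50):.4f} ± {stdev(map50):.4f}")
# mAP_0.5: 0.7844 ± 0.0125
```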