
FIGURE 3. Workflow of STEP 2: fibril patch selection. (a) Classification process for patches without fibers. (b) Classification process for patches containing fibers.

Patches containing fibers: We separate fiber patches that contain fibrils from those that contain only fibers using a CNN, ResNet-50. The ResNet-50 model was fine-tuned on a training dataset of 5600 image patches, comprising 2800 patches with fibrils (class 1) and 2800 patches without fibrils (class 0). The model achieved an average accuracy of 94.30% on the test set. These results confirm that CNNs are well suited to image classification, which is inherently less complex than detecting small objects such as fibrils; object detection requires both classification and localization, which makes fibril detection the more challenging task. For simplicity, in subsequent discussions we refer to patches containing fibrils (class 1) as fibril patches. The fibril patches are further processed to remove fiber and fine regions in preparation for the fibril segmentation step. The presence of fibers and fines alongside fibrils complicates thresholding-based fibril segmentation because of the high contrast between fibers and fibrils: when fibers appear in a patch together with fibrils, fibril pixels are misclassified as background. Fiber and fine regions are therefore removed with a mask operator, using the fiber segmentation results (both Figure 2(b) and Figure 2(c)) as mask images. However, shading pixels along the edges of fibers and fines often remain after masking and can be misclassified as fibrils. To address this, a morphological dilation with a disk-shaped structuring element of 1-pixel radius is applied to the mask images before masking. This step effectively eliminates the shading pixels without inadvertently removing fibrils.
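The following is a minimal, hypothetical Python sketch of this step, assuming a fine-tuned ResNet-50 checkpoint (here called resnet50_fibril.pt), grayscale patches stored as 2-D NumPy arrays, and binary fiber/fine masks taken from the segmentation results of Figure 2(b) and 2(c). Function names, preprocessing, and the background fill value are illustrative assumptions, not the authors' exact implementation.

```python
# Hypothetical sketch of STEP 2 for patches that contain fibers.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models, transforms
from skimage.morphology import dilation, disk

def load_fibril_classifier(ckpt_path="resnet50_fibril.pt", device="cpu"):
    """Load a ResNet-50 fine-tuned for 2 classes (0: no fibrils, 1: fibrils)."""
    model = models.resnet50(weights=None)
    model.fc = nn.Linear(model.fc.in_features, 2)
    model.load_state_dict(torch.load(ckpt_path, map_location=device))
    return model.eval().to(device)

_preprocess = transforms.Compose([
    transforms.ToTensor(),                           # HxW uint8 -> 1xHxW float in [0, 1]
    transforms.Lambda(lambda t: t.repeat(3, 1, 1)),  # replicate gray channel for ResNet-50
    transforms.Resize((224, 224), antialias=True),   # assumed input size
])

def is_fibril_patch(model, patch, device="cpu"):
    """Return True if the CNN assigns the patch to class 1 (contains fibrils)."""
    x = _preprocess(patch).unsqueeze(0).to(device)
    with torch.no_grad():
        return model(x).argmax(dim=1).item() == 1

def remove_fibers_and_fines(patch, fiber_mask, fine_mask, background=255):
    """Blank out fiber/fine pixels before thresholding-based fibril segmentation.

    The combined mask is dilated with a 1-pixel-radius disk so that shading
    pixels along fiber edges are also removed and not later mistaken for fibrils.
    """
    combined = np.logical_or(fiber_mask, fine_mask)
    combined = dilation(combined, disk(1))
    cleaned = patch.copy()
    cleaned[combined] = background   # assumed bright background value
    return cleaned
```

A typical usage would be to run is_fibril_patch on every fiber-containing patch and apply remove_fibers_and_fines only to those returning True, so that the subsequent thresholding step sees fibrils against a clean background.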

Patches without fibers: Distinguishing fibrils from the background and from noise is challenging due to their low contrast and non-uniform morphology. To address this, a combination of statistical analysis and CNNs is employed. First, patches with a clear background are filtered out using the standard deviation of intensity differences ($SD_{\text{differences}}$), computed as the standard deviation of the intensity differences between neighboring pixels in both the vertical and horizontal directions within an image patch. The $SD_{\text{differences}}$ is calculated as follows:

$$
SD_{\text{differences}} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\Delta I_i - \mu_{\Delta I}\right)^{2}} \tag{1}
$$

where:
• $N$: the total number of neighboring pixel pairs in the patch (both vertical and horizontal).
• $\Delta I_i$: the intensity difference between two neighboring pixels $i$:
  – for vertical neighbors: $\Delta I_i = |I(x, y) - I(x, y+1)|$
  – for horizontal neighbors: $\Delta I_i = |I(x, y) - I(x+1, y)|$
• $\mu_{\Delta I}$: the mean of all intensity differences ($\Delta I_i$) in the patch, calculated as:

$$
\mu_{\Delta I} = \frac{1}{N}\sum_{i=1}^{N}\Delta I_i \tag{2}
$$
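As a concrete illustration, the short Python sketch below computes $SD_{\text{differences}}$ for an 8-bit grayscale patch stored as a 2-D NumPy array. The function name and the pooling of vertical and horizontal differences into a single array are assumptions consistent with Eqs. (1) and (2); the cutoff used to discard clear-background patches is not specified in this excerpt, so the value shown is purely illustrative.

```python
import numpy as np

def sd_differences(patch):
    """Standard deviation of neighboring-pixel intensity differences, per Eqs. (1)-(2)."""
    patch = patch.astype(np.float64)
    # Vertical neighbors: |I(x, y) - I(x, y+1)| (adjacent rows)
    vertical = np.abs(patch[:-1, :] - patch[1:, :]).ravel()
    # Horizontal neighbors: |I(x, y) - I(x+1, y)| (adjacent columns)
    horizontal = np.abs(patch[:, :-1] - patch[:, 1:]).ravel()
    diffs = np.concatenate([vertical, horizontal])  # all N neighbor pairs
    return diffs.std()  # sqrt(mean((dI_i - mu_dI)^2)); default ddof=0 matches 1/N in Eq. (1)

# Illustrative filtering of clear-background patches (threshold value is hypothetical):
# if sd_differences(patch) < 2.0:
#     pass  # treat as clear background and skip further processing
```

Note that NumPy's std defaults to the population formula (division by $N$), which matches the $1/N$ normalization in Eq. (1).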

This metric reflects the variability or homogeneity of pixel intensities within the patch. A low $SD_{\text{differences}}$ value
