eISSN: 2084-9869
ISSN: 1233-9687
Polish Journal of Pathology
2/2022
vol. 73
 
Original paper

A novel pre-processing approach based on colour space assessment for digestive neuroendocrine tumour grading in immunohistochemical tissue images

Hana Rmili 1, Aymen Mouelhi 2, Basel Solaiman 3, Raoudha Doghri 4, Salam Labidi 1

1. Research Laboratory of Biophysics and Medical Technologies, The Higher Institute of Medical Technologies of Tunis, University of Tunis El Manar, Tunis, Tunisia
2. Laboratory of Signal Image and Energy Mastery, National Higher School of Engineers of Tunis, Tunis University, Tunis, Tunisia
3. IMT Atlantique, LaTIM UMR 1101, UBL, Brest, France
4. Pathology Anatomy and Cytology Service, Salah Azaiez Institute, University of Tunis El Manar, Tunis, Tunisia
Pol J Pathol 2022; 73 (2): 134-158
Online publish date: 2022/09/28

Introduction

Neuroendocrine tumours (NETs) are a heterogeneous group of rare malignancies whose incidence is increasing. These tumours can develop in any part of the body and are defined by their secretory properties. Neuroendocrine tumours arising in the digestive system are called digestive neuroendocrine tumours and represent 64% of cases; a further 28% develop in the bronchopulmonary system, and 8% occur in other organs. These malignancies commonly appear between the ages of 40 and 60 years [1]. The rarity and heterogeneity of the disease explain why the number of randomized studies and the level of evidence are low.
Tumour biopsy is the first step in diagnosing a digestive NET. The diagnosis is made after pathologists evaluate the immunohistochemical (IHC) tissue image. The histological evaluation ends with tumour grading.
According to the 2017 World Health Organization (WHO) classification, digestive neuroendocrine neoplasms are classified as grade 1 (G1) NETs, G2 NETs, G3 NETs, neuroendocrine carcinomas, or mixed neuroendocrine non-neuroendocrine neoplasms (Table 1). Histological grading is based on both differentiation and the proliferation index [2].
The Ki-67 labelling index is measured in 500 cells in the areas with the highest nuclear labelling rate, i.e. in hotspots. Its value is associated with tumour grade and patient survival: the higher the proliferation index, the higher the grade and the lower the survival rate. The techniques commonly used for histological grading are eyeballing, eye-screening evaluation, and manual counting on printed images. These techniques are inefficient, subjective, and time-consuming; they show significant inter-observer variability and therefore poor reliability and reproducibility. Low contrast and the variable morphology of stained cells lead to over- or under-detection of cell nuclei, and overlapping or touching nuclei are a major problem, as illustrated in Figure 1. Efficient computer-aided diagnosis (CAD) systems could remove the above limitations of traditional methods for IHC assessment, and pathologists have successfully applied such systems in histological analysis [3–5]. Most of these CAD systems determine the percentage of positive cells in 3 principal steps. The first is image quality enhancement, which reduces the variation in staining information. The second step is IHC tissue image segmentation. The third step is the separation of overlapping nuclei to avoid quantification errors.
The pre-processing technique is a preliminary task for reaching both high accuracy and relevant image segmentation. Several methods have been published to deal with contrast issues or colour inhomogeneity in microscopic images. Al-Lahham et al. [3] estimated the proliferation index from Ki-67 images using an automated system. The original additive colour mixing (R: red, G: green, B: blue – RGB) images were converted to the L*a*b* colour space by customizing colour modification and colour-space transformation, which resulted in excellent decoupling of intensity and colour. Ghane et al. [6] proposed a novel automatic segmentation of white blood cells (WBCs). For cell detection, the authors performed a colour adjustment based on a conversion from the RGB to the CMYK colour representation, in which the contrast of the WBCs is better in the Y component; in addition, the L*a*b* colour space representation was selected for nuclei segmentation. Rahman et al. [7] proposed a semi-automated detection and classification scheme for oral squamous cell carcinoma. Owing to the non-uniform illumination of tissue slides, colour channelling was conducted by converting images from the RGB into the HSI (hue, saturation, intensity) and CMYK colour spaces; the C channel was chosen for further processing because it produced the best result. A novel segmentation and quantification approach for IHC stained slides was proposed by Roszkowiak et al. [4], based on a conversion from the RGB to the HSV colour space; this method separates the data containing colour information from the data containing luminance intensity information. Colour space conversion has also been chosen by qualitative assessment [5].
The medical diagnostic process aims to identify histological structures and explore their different morphological appearances in terms of colour intensity, shape variation, and density change. To analyse these imperfections, microscopic images should be segmented after pre-processing. Much effort has been devoted to developing automated algorithms for nuclear segmentation. Intensity thresholding is commonly used to determine the immune positivity of tissue sections in the region of interest; threshold-based techniques are extensively employed for their simplicity and low computational complexity [8]. Region-based methods have good noise immunity, but their results vary with the choice of seeds, and the approach is time and memory consuming [9, 10]. Contour-, edge-, and region-based segmentation methods are constrained by convergence, overlapping structures, and contour initialization [11].
Other techniques such as graph cuts [12], Markov random fields [13], and geometric models [14, 15] have been used to identify stained cells. Furthermore, watershed-based algorithms are widely used in the segmentation of histopathological images [16]. Despite their widespread use, watershed algorithms suffer from over-segmentation due to the numerous regional minima produced by noise [17]. Many strategies have been proposed to overcome this issue, including marker-controlled watersheds [18, 19], the enhanced 3D watershed algorithm [20], and contour-estimation-based methods [21, 22]. Owing to its capability to deal with overlapping nuclei, the watershed algorithm is considered the basic method for nuclei separation. In [18], the authors detected nuclei seed markers using a modified super-pixel segmentation approach, in which each nucleus seed was grown to include more information and enhance segmentation performance. In [23], an effective algorithm based on concave points and ellipse fitting was proposed, where the contour is divided into separate segments through concave points, and ellipse fitting processes the different contour segments into separate cells. Other techniques comprise deep learning and convolutional neural networks [24]; however, these approaches require large datasets and are computationally expensive to train.
Our work focuses on a novel IHC image pre-processing approach based on colour space evaluation. Both qualitative and quantitative criteria justify the choice of the colour component. Modified versions of standard algorithms are proposed to cope with the complex nature of microscopic images. An adaptive local threshold approach based on a modified Laplacian filter was adopted to minimize implementation complexity, highlight the edges of nuclei, and intensify details in tumour slides. In addition, we propose an improved watershed algorithm to separate overlapping cell nuclei; this algorithm is based on a concave vertex graph and yields good results without losing any geometrical features of the cells.
This paper is structured as follows: in section 1, we introduce the problem, review the related works, and summarize our main contribution. In section 2, the proposed scheme for digestive neuroendocrine tumour segmentation is discussed in detail. Section 3 is devoted to presenting experimental results and to discussing them. Finally, in section 4, conclusions and future directions are formulated.

Material and methods

The proposed nuclei segmentation scheme

As illustrated in Figure 2, the overall flowchart of the proposed work is carried out according to the hierarchical knowledge levels. The image processing is conducted at the pixel level, based on converting the original image into distinct colour spaces, considering the experts’ information. In this part, it is worth noticing that the choice of the colour space described in publications was arbitrary. Most researchers used some colour space without presenting arguments that would justify their choices. Therefore, we evaluate the contribution of each of those colour representation spaces. We achieve that goal by computing the contrast and the error retention rate (ERR) for each colour representation space.
Afterward, we proceed to the primitive visual level, at which both detection and segmentation of cancer nuclei are effectuated, as shown in Figure 2. This step is carried out by detecting the concave points and selecting the optimal path that separates the overlapping nuclei. At the object level, we aim to separate the adjacent nuclei and eliminate non-tumour cells. Here, morphological information obtained from the experts is applied.
Finally, the scene level is dedicated to the evaluation, interpretation, and comparison of the proposed automated segmentation approach with the expert’s segmentation ground truth. At that step, we use several assessment metrics. The main steps of the proposed scheme are described in the following subsections.

Dataset description

To prove the efficacy of the proposed approach, 2 datasets were analysed. The first dataset (Dataset 1) includes NET tissues collected from 15 patients treated at the Salah Azaiez Institute of Oncology, Tunisia, between 2017 and 2019. These stained biopsy images were acquired with an OLYMPUS DP21 microscope at magnifications of ×10, ×20, and ×40, with a resolution of 1600 × 1200 pixels. All grades of malignancy were studied. In total, a database of 70 NET images was analysed. An experienced pathologist carefully selected the slides to represent the diversity of the pathologies. Immunohistochemical tissue images were paraffin-embedded tumour slides stained with the biomarker Ki-67, a nuclear proliferation-related protein.
Both visual assessment (i.e. eyeballing) and counting were undertaken by 2 experts. The pathologists manually analysed the particularities of the images, in which 2 major colours were identified. Our study used a specific type of IHC staining to determine tumour grading, applied to the tissue for clear visualization and differentiation of cell nuclei. Ki-67-positive nuclei were stained with diaminobenzidine (DAB), which appears as a granular brown stain with remarkable intensity variation, whereas Ki-67-negative cells appear blue, stained by haematoxylin.
Other benign cells with a specific elliptical form and small size were also identified (i.e. stromal cells and lymphocytes). Examples from our collected dataset show distinct colour and brightness variations in different high-power fields (Fig. 1).
Our approach was also tested on the publicly available Dataset 2 [18], which comprises H-DAB-stained tissue microarray (TMA) slides stained for the biomarker p53. These slides were scanned with a Hamamatsu scanner and came from various cancer specimens. Samples were randomly captured from 23 whole scanned TMA slides at ×40 magnification; they contain some forms of irregular staining captured from different cores and a high number of overlapping nuclei (Fig. 3). This dataset, named DataSeg, comprises 52 images of 200 × 200 pixels and was used to assess nucleus segmentation. It contains 1265 manually labelled nuclei, the outlined contour of each of which was confirmed by a pathologist.

Pre-processing approach

Colour is the main valuable information that an IHC image holds and can inform us about the characteristics of these images. Hence, it is essential to analyse this information and study its variations under different conditions. Unfortunately, many issues arise during the acquisition of histological tissue slides, such as variations in lighting conditions and staining heterogeneity [25]. The collected slides included various intensities and diverse compactness.
A pre-processing step was applied to select the relevant colour space representation and eliminate the staining difference. Moreover, only one colour component was chosen instead of 3 to reduce the quantity of data to be processed and limit the computational complexity [4]. Thus, conversion to several colour models, such as RGB, HSI [5], CMYK [7], and CIELab [6, 26], was necessary.
These colour spaces provide coherent characteristics of the varied information contained in IHC images. The choice of the appropriate channel should be based on both quantitative and qualitative metrics, which leads to optimal results for further processing. As shown in Table 2, numerous studies relied only on subjective visual aspects, leading to inaccuracy. Therefore, such images should be carefully pre-processed to evaluate the contribution of each channel component. In our study, colour space conversion was conducted as the first step; then, a pertinence evaluation of each colour space was applied to select the specified channel for further processing. Finally, contrast enhancement was applied before starting higher-level processing tasks (compare Fig. 4).

Colour space conversion

In addition to the most widely used colour space (RGB), several other colour representation spaces, such as XYZ, HSI, Lab, Luv, and CMYK, differ in their colour data representation. The main purpose of this step is to convert the stained IHC images into distinct colour spaces to study the variations in the appearance of cells across the slide. A detailed mathematical transformation from one colour space to another is given in Appendix A [32]. Other colour space systems (YIQ, I1I2I3, etc.) exist, but we have cited only the main spaces used in medical image processing.
The diversity between the different components of each colour space is shown in Figure 5. It should be noted that the representations do not all appear the same way with regard to contrast disparity and colour non-uniformity. Furthermore, the choice of colour space can be an essential decision-making element determining performance, because each space possesses principal advantages related to its representation. Several colour spaces have been designed to address problems encountered in various applications.
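As a concrete illustration of one such conversion, the saturation component of the HSI model can be computed directly from the RGB channels using the standard formula S = 1 − 3·min(R, G, B)/(R + G + B). The sketch below is a minimal NumPy implementation; the function name and test pixels are illustrative, not taken from the original study.

```python
import numpy as np

def hsi_saturation(rgb):
    """Saturation channel of the HSI model: S = 1 - 3*min(R,G,B)/(R+G+B)."""
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=-1)
    min_ch = rgb.min(axis=-1)
    safe_total = np.where(total > 0, total, 1.0)  # avoid division by zero
    return np.where(total > 0, 1.0 - 3.0 * min_ch / safe_total, 0.0)

# A strongly stained (saturated) pixel versus a grey background pixel.
patch = np.array([[[200, 10, 10], [100, 100, 100]]], dtype=np.uint8)
sat = hsi_saturation(patch)
```

Achromatic pixels (R = G = B) map to zero saturation, which is why stained nuclei stand out against unstained background in this channel.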

Colour space evaluation and selection

Error retention rate

After converting the original image to various colour spaces, it is crucial to evaluate the impact of each conversion. First, we assess the qualitative criterion used by the pathologist. This criterion is estimated on 50 patches for each colour representation system. The goal is to identify the maximum number of stained nuclei and compare it with the number labelled by the expert pathologist. In some channels, image details deteriorate, which affects the interpretation phase; in other channels, some details are enhanced by the conversion process. For this reason, a visual criterion called the "retention rate" (RR) is proposed, defined as the number of objects correctly preserved in the converted image that perfectly match the ground truth; the ground truth value is counted by a pathologist.
To further clarify that process, we consider the patch example shown in Figure 6. Our expert detected 11 objects of interest (i.e. positive nuclei marked in red). This value varies depending on the colour space. For example, channel G of the RGB colour space enables the identification of 15 objects. This number is greater than the reference value given by the expert.
In contrast to this finding, channel K in CMYK space enables the identification of 4 out of 11 objects only. This denotes information loss. It is worth noticing that some colour spaces produce over- or under-detection of nuclei. To quantify that difference, we calculated the absolute value of ERR for the objects of interest from both the RR and the ground truth reference value. Table 3 summarizes our findings.
The reference value of RR found by the pathologist was equal to 826. Next, we computed the number of detected objects (RR) in the studied colour components. These values varied substantially: we observed over-detection in certain colour channels (e.g. the R, L, and X components) and under-detection in others (e.g. the C, v*, and K components). Furthermore, we calculated the percentage ERR, which quantifies the difference between the reference value mentioned above and the number of nuclei detected automatically in each colour space representation (RR). According to Table 3, the lowest ERR value was obtained for the saturation component S of the HSI colour space.
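The ERR computation described above reduces to a simple relative difference between the retained-object count and the expert reference. The sketch below assumes ERR is expressed as a percentage of the reference count; the RR values used here are hypothetical.

```python
def error_retention_rate(rr, reference):
    """Absolute difference between the retained-object count (RR) in a
    converted channel and the expert reference, as a percentage."""
    return abs(rr - reference) / reference * 100.0

reference = 826  # expert count reported in Table 3
err_over = error_retention_rate(840, reference)   # hypothetical over-detection
err_under = error_retention_rate(810, reference)  # hypothetical under-detection
```

Because the absolute value is taken, over- and under-detection are penalized symmetrically, and a channel that matches the expert count exactly yields ERR = 0.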

Contrast computing

The use of the subjective, qualitative evaluation of the colour spaces is insufficient. It is necessary to move to an objective, quantitative assessment. The latter is based on computing the overall contrast for each colour system. We calculate the contrast value using the grey-level co-occurrence matrix (GLCM) to guarantee a satisfactory result in the object’s extraction.
The grey-level co-occurrence matrix is created from a greyscale image; both the intensity distribution and the relative location of pixels in the image are used to determine texture features. It was initially proposed by Haralick et al. [33], who extracted 14 different texture features from the GLCM. The grey-level co-occurrence matrix G is an L × L matrix, where L indicates the maximum number of grey levels in the image. The entry G(u, v) represents the probability that the grey level "u" co-occurs with the grey level "v" at a distance "d" along a predefined angle. This study extracted the GLCM contrast at the distance d = 1 between neighbouring pixels, taking into consideration the 4 angles of 0°, 45°, 90°, and 135°. Finally, the GLCM contrast was computed according to equation 1: Contrast = Σu,v (u − v)² G(u, v).
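The GLCM contrast of equation 1 can be illustrated with a small, self-contained NumPy sketch. The `glcm_contrast` function below is an illustrative implementation (not the authors' code) that builds the normalized co-occurrence matrix for one pixel displacement and averages the contrast over the 4 directions named above.

```python
import numpy as np

def glcm_contrast(img, levels, offset):
    """Contrast sum_{u,v} (u - v)^2 * p(u, v) of the grey-level
    co-occurrence matrix for one pixel displacement `offset`."""
    dr, dc = offset
    rows, cols = img.shape
    glcm = np.zeros((levels, levels), dtype=np.float64)
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                glcm[img[r, c], img[r2, c2]] += 1.0
    glcm /= glcm.sum()  # turn counts into co-occurrence probabilities
    u, v = np.indices((levels, levels))
    return float(((u - v) ** 2 * glcm).sum())

# Average over the 4 directions used in the paper (d = 1; 0°, 45°, 90°, 135°).
offsets = [(0, 1), (-1, 1), (-1, 0), (-1, -1)]
img = np.array([[0, 1], [0, 1]], dtype=int)
contrast = float(np.mean([glcm_contrast(img, 2, o) for o in offsets]))
```

On this toy image, horizontal neighbours always differ (contrast 1) while vertical neighbours are always equal (contrast 0), so the directional average reflects the anisotropy of the texture.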
The intensity contrast between a pixel and its neighbour is determined over the whole image through the contrast computing value and tested on distinct colour space representations, as shown in Table 3.
Subsequently, the contrast of various colour spaces was determined to assess the descriptive texture characteristics within each space. Finally, the contrast values were normalized for comparison.
The minimum contrast value was observed in the L*u*v* colour space, and the maximum value was found for the HSI colour space (Table 3). The HSI space was therefore chosen as the most suitable colour space representation for our segmentation method, owing to its intense colour region transitions and its highest normalized contrast value.

Pertinence degree

To simplify the global evaluation, we propose combining the quantitative and the qualitative criteria into a single parameter called the pertinence degree (PD). Using the PD values, we can easily define the contribution of each channel and thus select the colour space that highlights stained cells and preserves the desired tissue objects. The 2 criteria are combined linearly as follows: PD = α · C + (1 − α) · (1 − ERR), where C is the normalized contrast and α is a linear weighting coefficient.
The determination of α is performed by studying the impact of the variation of its value on the PD value (this is shown in the experimental results section [section 3.1]). Through this formula, we combine the 2 evaluation criteria of contrast and ERR into a single criterion, PD: for each colour space, we linearly couple its contrast value and its object detection error ERR using the linear coefficient α, whose value depends on its PD variation. If the α value is higher than 0.5 and tends to 1, we prioritize the quantitative criterion of contrast over the error value. If this coefficient is less than 0.5 and tends to 0, we privilege the qualitative criterion of the error complement. If the α value is equal to 0.5, we give equal weight to the 2 criteria.
According to Table 3, the highest PD value, equal to 0.997, is obtained for the "S" colour channel. This value corresponds to the highest contrast value (C = 1) and the lowest object detection error (ERR = 0.6%). In conclusion of the pre-processing step, the choice of the colour representation system is based on the highest PD value, which lies within the range (0, 1) and tends to 1 when the colour space is relevant and well adapted to a stain-variability environment.
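As described above, the PD linearly couples the contrast value with the complement of the detection error through a weighting coefficient. The sketch below encodes that combination under the assumed form PD = α · C + (1 − α) · (1 − ERR), which reproduces the reported value of 0.997 for the S channel (C = 1, ERR = 0.6%); the function name is illustrative.

```python
def pertinence_degree(contrast, err, alpha=0.5):
    """PD = alpha * C + (1 - alpha) * (1 - ERR): normalized contrast
    combined with the complement of the object-detection error."""
    return alpha * contrast + (1.0 - alpha) * (1.0 - err)

# Values reported for the saturation channel: C = 1, ERR = 0.6% (0.006).
pd_s = pertinence_degree(1.0, 0.006)
```

With α = 0.5, both criteria carry equal weight, which matches the balanced setting retained in the parameter-configuration experiments.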

Contrast enhancement

To increase the contrast of IHC images and to enhance the visibility of the morphological cells in microscopic images, several contrast enhancement techniques have been proposed in the literature [34, 35]. In this paper, the colour adjustment method based on the colour transform proposed in [36] is applied to adjust the image contrast by expanding the dynamic range of intensity values it contains. New maximum and minimum intensity values are required to execute the process [37]. The approach is a linear scaling function applied to the image pixel values, i.e. the contrast stretching commonly used in much research [36]: p(x, y) = (q(x, y) − fmin) / (fmax − fmin) × 255,
where p(x, y) is the new luminance value for pixel (x, y), q(x, y) is the luminance level from the processed luminance image, fmax is the maximum luminance level in the input image, and fmin is the minimum luminance level in the input image. Contrast stretching preserves image brightness with a minimal loss of image information, as shown in Figure 7; it maintains only the nuclei by eliminating the background.
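The linear scaling described here maps the observed range [fmin, fmax] onto a new dynamic range. A minimal NumPy sketch follows, assuming (for illustration) that the output is stretched to [0, 255]:

```python
import numpy as np

def stretch_contrast(q, new_min=0.0, new_max=255.0):
    """Linear contrast stretching: map the luminance range [fmin, fmax]
    of the input onto [new_min, new_max]."""
    q = q.astype(np.float64)
    fmin, fmax = q.min(), q.max()
    if fmax == fmin:                      # flat image: nothing to stretch
        return np.full_like(q, new_min)
    return (q - fmin) / (fmax - fmin) * (new_max - new_min) + new_min

img = np.array([[50, 100], [150, 200]], dtype=np.uint8)
stretched = stretch_contrast(img)
```

Because the mapping is linear, the ordering of intensities is preserved, which is why the stretch enhances contrast without altering the relative brightness of nuclei versus background.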

Nuclear segmentation of the immunohistochemical tissue images

Adaptive local thresholding and morphological processing

Given the variability in colour and intensity of the IHC stained tissue, selecting a single threshold that fits the whole image remains challenging. For this reason, adaptive local thresholding is needed, owing to its capacity to reduce the effects of unrepresentative pixel values. An adaptive threshold is chosen for each pixel depending on the intensity distribution in its local neighbourhood. This step is achieved by subtracting the median value of the neighbourhood from each pixel's intensity. The size of the neighbourhood is determined by the average area of the cell nuclei, which depends on the magnification factor; accordingly, values of 22 and 44 were empirically determined for the ×20 and ×40 IHC stained images, respectively.
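The median-based comparison above can be sketched with SciPy's median filter. The window size and the assumption that nuclei are darker than their local neighbourhood in the selected channel are illustrative; the sign of the comparison would be flipped for the opposite stain polarity.

```python
import numpy as np
from scipy import ndimage as ndi

def adaptive_median_threshold(gray, window):
    """Binarize by comparing each pixel with the median of its local
    neighbourhood (window of 22 or 44 pixels depending on magnification)."""
    gray = gray.astype(np.float64)
    local_median = ndi.median_filter(gray, size=window)
    # Nuclei are assumed darker than their surroundings in this channel.
    return gray < local_median

# Synthetic patch: a dark 3x3 "nucleus" on a bright background.
patch = np.full((10, 10), 100.0)
patch[4:7, 4:7] = 20.0
mask = adaptive_median_threshold(patch, window=5)
```

Because the threshold follows the local median, a slow background gradient across the slide does not shift the binarization, which is the point of thresholding adaptively rather than globally.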
To improve the results of cancer cell segmentation, a modified version of the Laplacian filter is used to extract the nuclei regions in images with a defined threshold, as demonstrated in our previous work [38]. The proposed modification aims to obtain uniform regions from the binary images of the stained IHC tissue, whereas the standard Laplacian filter captures only the inner contours and intensity variation details of cancer cell tissue.
This study applies morphological operations to refine the segmentation by controlling the variability of nuclei shape. In this context, dilation is first used to recover missing pixels, particularly at the borders, and to extend the nuclei regions; closing is then employed to improve the smoothness of the nuclei borders. These 2 morphological operations use a flat disk-shaped structuring element. Finally, hole filling is applied to the nuclei regions.
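A minimal SciPy sketch of this morphological refinement follows, assuming a small disk radius for illustration (the actual structuring-element size used in the study is not specified here).

```python
import numpy as np
from scipy import ndimage as ndi

def disk(radius):
    """Flat disk-shaped structuring element."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return x * x + y * y <= radius * radius

def refine_mask(mask, radius=1):
    """Dilation to recover border pixels, closing to smooth nuclei
    boundaries, then hole filling inside each nucleus."""
    se = disk(radius)
    mask = ndi.binary_dilation(mask, structure=se)
    mask = ndi.binary_closing(mask, structure=se)
    return ndi.binary_fill_holes(mask)

# A square "nucleus" with a one-pixel hole in the middle.
nucleus = np.zeros((9, 9), dtype=bool)
nucleus[2:7, 2:7] = True
nucleus[4, 4] = False
refined = refine_mask(nucleus)
```

The order of the operations matters: dilating first reconnects fragmented border pixels so that the subsequent closing and hole filling act on a single coherent region per nucleus.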
Figure 8 shows an example of IHC stained tissue at magnification ×40, illustrating the main steps of our proposed segmentation based on the previously selected colour space (Fig. 8C). The impact of unrepresentative pixels is minimized by applying the adaptive local thresholding (Fig. 8D). Then, using the modified Laplacian filter (Fig. 8E), we obtain a binary image of stained nuclei that contains uniform, well-delineated regions. Morphological techniques, i.e. dilation, closing, and hole filling, are applied to the image to overcome irregular nuclear structures, as shown in Figure 8F.

Separation of the overlapping nuclei

Besides staining inhomogeneity and illumination variations, overlapping nuclei make NET tissue slides challenging. Several approaches have been proposed to overcome this issue in image analysis. Clustering-based separation methods yield low precision and produce "incorrect" areas [39], while contour estimation approaches create contours that do not match the desired cell shape.
Moreover, watershed-based techniques are used to separate touching structures; however, these approaches are limited by over- or under-segmentation, caused by histological noise that generates a high number of regional minima [40, 41]. For these reasons, various enhancements have been suggested, such as the marker-controlled watershed or the region-merging watershed.
To further improve the accuracy of nuclei segmentation, we propose an enhanced watershed method [42] based on a concave vertex graph. First, the regions with overlapping nuclei are extracted, using high-concavity points of the cell contours to localize candidate curves. Then, the watershed method is applied to the hybrid distance-transformed image. A concave vertex graph is generated from the separating edges and concave points. Afterward, the shortest path in the graph is computed to identify the ideal separation curves. The segmentation result is determined by both the outer boundary of the clustered nuclei and the inner edges. Figure 9 illustrates the separation process for complex samples of cell nuclei.
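For comparison, the conventional marker-controlled watershed baseline that the proposed method improves upon can be sketched with scikit-image. This is the plain distance-transform variant, not the concave-vertex-graph refinement described in the paper, and all parameter values below are illustrative.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

def split_touching_nuclei(mask, min_distance=8):
    """Marker-controlled watershed on the distance transform: one marker
    per distance-transform peak, then flooding constrained to the mask."""
    distance = ndi.distance_transform_edt(mask)
    coords = peak_local_max(distance, min_distance=min_distance, labels=mask)
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    return watershed(-distance, markers, mask=mask)

# Two overlapping discs standing in for a pair of touching nuclei.
yy, xx = np.ogrid[:50, :60]
mask = ((yy - 25) ** 2 + (xx - 20) ** 2 <= 100) | \
       ((yy - 25) ** 2 + (xx - 38) ** 2 <= 100)
labels = split_touching_nuclei(mask)
```

The cut produced this way follows the ridge of the inverted distance transform; the concave-vertex-graph step in the paper instead selects the shortest path between contour concavities, which better preserves the geometry of each nucleus.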

Removal of normal-appearing cells

The elimination of lymphocytes and stromal cells should be performed to refine the segmentation results and provide accurate quantification of positive nuclei. This step was achieved by removing normal-appearing cells that matched a predefined shape criterion. The morphological criterion was based on the histological analysis: according to the expert interpretation, stromal cells are identified by their elliptical form, whereas lymphocytes are recognized by their small size compared with the stained cells.
The stromal cell shape was characterized by the ratio between the minor and major axes of the fitted ellipse, while the size of lymphocytes was quantified by the cell area, i.e. the number of pixels in the analysed nuclei region.
Following the pathologist's guidance, cells with a ratio lower than the decision threshold and an area lower than the average area of all selected nuclei were removed. These decision thresholds, described in detail in [37], were determined empirically and found to depend on the magnifying factor. An example of the segmentation improvement due to the elimination of normal-appearing cell nuclei is shown in Figure 9 (column 3).
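A sketch of this shape-based filtering follows, using second-order image moments to estimate the ellipse axis ratio of each connected region. The threshold value and the conjunctive combination of the two criteria mirror the description above, but both are illustrative; the study's actual thresholds are given in [37].

```python
import numpy as np
from scipy import ndimage as ndi

def remove_normal_cells(mask, ratio_thr=0.5):
    """Remove regions whose ellipse minor/major axis ratio is below
    `ratio_thr` and whose area is below the mean region area."""
    lab, n = ndi.label(mask)
    areas = ndi.sum(mask, lab, index=np.arange(1, n + 1))
    mean_area = areas.mean()
    keep = np.zeros(n + 1, dtype=bool)
    for i in range(1, n + 1):
        ys, xs = np.nonzero(lab == i)
        # Eigenvalues of the coordinate covariance ~ squared ellipse axes.
        ev = np.linalg.eigvalsh(np.cov(np.vstack([ys, xs])))  # ascending
        ratio = np.sqrt(max(ev[0], 0.0) / ev[1]) if ev[1] > 0 else 1.0
        keep[i] = not (ratio < ratio_thr and areas[i - 1] < mean_area)
    return keep[lab] & mask

# One round blob (tumour-like) and one thin line (stromal-like).
yy, xx = np.ogrid[:30, :40]
blob = (yy - 10) ** 2 + (xx - 10) ** 2 <= 25
line = np.zeros((30, 40), dtype=bool)
line[25, 5:14] = True
cleaned = remove_normal_cells(blob | line)
```

The eigenvalue ratio is a moment-based stand-in for fitting an explicit ellipse: an elongated region has one dominant eigenvalue, so its minor/major ratio approaches 0, while a round nucleus stays near 1.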

Experimental results and discussion

The fundamental challenges faced in this study were variations in colour intensity in histological images and alterations in the lighting conditions. Therefore, a novel pre-processing method is proposed in our research. The main contribution of this approach is to evaluate colour spaces based on both qualitative and quantitative values.
We demonstrated that the saturation component of the HSI colour space yielded the best metrics value in terms of the normalized contrast. Moreover, this colour channel was characterized by the lowest ERR. The pertinence degree reached the value of 0.997. Thus, the PD is a novel criterion that helps select the optimal colour space adapted to the various stain heterogeneities existing in the databases.
Immunohistochemical segmentation based on the selected colour channel can help pathologists during image analysis. To prove the relevance of our segmentation approach, we used different sources of histological images: 2 labelled datasets were used to obtain the experimental findings. Dataset 1 comprised 70 histological images stained with Ki-67, acquired at various magnifications. Dataset 2 was composed of 52 H-DAB-stained TMA images. Our algorithms were validated on Datasets 1 and 2, with the results summarized in Figures 10 and 11, respectively.

Parameter configuration

In this section, the influence of varying the linear coefficient α on the PD is evaluated. Different values of α between 0.3 and 0.7 were chosen to assess their impact on the PD values for each colour space (Fig. 12). This experiment shows only a slight PD variation, owing to the high correlation between the contrast and ERR values. As a result, we used a balanced fusion of the qualitative and quantitative criteria (i.e. the linear coefficient α = 0.5) to select the best channel for further processing. Notably, the saturation component of the HSI colour space has the highest PD value for all tested α coefficients compared with all other colour space channels.

Performance evaluation

To show the effectiveness of the proposed strategy for IHC cancer grading, the segmentation results were evaluated using both object- and pixel-wise metrics. The quantitative assessment was performed by comparing the number of detected nuclei with that manually scored by the experienced pathologist, using the most common object-level criteria for object detection: recall (sensitivity or true positive rate), precision (positive predictive value), and the F1-score (Appendix B). Precision and recall values close to 1 indicate good segmentation performance. The F1-score, the harmonic mean of precision and recall, reaches its best value at 1 and its worst at 0. The performance of the computer-assisted Ki-67 evaluation compared with the expert's assessment applied to the overall database is detailed in Table 4.
Both the quantitative performance evaluation based on counting the positive nuclei and the qualitative performance by applying the pixel-wise metrics were conducted by comparing segmentation results in binary images with the corresponding ground truth [41]. The pixel-wise criterion, introduced by Cui et al. [43], comprises the missing detection (MD) rate, the false detection rate (FDR), the under-segmentation rate, and the over-segmentation rate (Appendix B).
In this approach [43], a false positive (FP) (Appendix B) can be caused by a false detection (FD), for example when lymphocytes are recognized as positive nuclei, or by over-segmentation (OS), in which one ground-truth nucleus is segmented into several nuclei. Similarly, false negatives comprise 2 errors: missing detection (MD), in which positive nuclei are detected as lymphocytes or otherwise missed, and under-segmentation, in which several ground-truth nuclei are recognized as a single nucleus.
Additionally, these metrics let us evaluate the segmentation quality in terms of correctly detected nuclei and assess the nuclei-splitting process. This qualitative performance assessment is summed up in Figure 13. Pixel-level criteria measure how accurately the segmentation algorithm predicts the shape and size of the identified nuclei; the most commonly used one is the Dice similarity coefficient (DSC), shown in Table 4 (Appendix B).
The DSC lies in the range [0, 1]: 1 indicates that the segmented image is identical to the ground truth, while values tending to 0 indicate a highly significant difference between the 2 images.
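A self-contained sketch of the DSC between a binary segmentation mask and its ground truth, here represented as flat 0/1 lists (the masks below are hypothetical toy data):

```python
def dice(seg, gt):
    """DSC = 2|A ∩ B| / (|A| + |B|); 1.0 means identical masks."""
    inter = sum(s and g for s, g in zip(seg, gt))
    total = sum(seg) + sum(gt)
    return 2 * inter / total if total else 1.0

seg = [1, 1, 1, 0, 0, 1]   # predicted foreground pixels
gt  = [1, 1, 0, 0, 1, 1]   # ground-truth foreground pixels
print(dice(seg, gt))       # 2*3 / (4+4) = 0.75
```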

Experimental results and comparative study

The performance evaluation of the proposed segmentation approach was a mandatory step to demonstrate the efficiency of our work. The quantitative metrics in Table 4 show the high nuclei-counting precision of the proposed scheme, compared to the ground-truth segmentation, at a reasonable total processing time. The F1-score reached 0.986 for Dataset 1 and 0.989 for Dataset 2; both values are close to the result obtained by the pathologist.
The pixel-level metrics reported in Figure 13 confirm the segmentation quality, owing to the high performance of the separation of overlapping nuclei. As Figure 13 shows, our approach achieved a good compromise between the under- and over-segmentation errors. However, the FDR in Dataset 1 is higher than in Dataset 2, which reflects a lower capacity to discriminate between positive nuclei and other particles in that dataset. As these metric values tend to 0, the nuclei detection and separation errors decrease; these errors are negatively correlated with both recall and precision. These 4 criteria can help pathologists choose an automatic segmentation system suited to a particular purpose.
The efficiency of the proposed approach can be shown by comparing its performance with results published elsewhere. Table 5 shows that our method outperforms 2 methods widely used in research. The H-minima technique applies the H-minima transform to the distance transform of the binary image, followed by the watershed algorithm. Marker-controlled watershed segmentation is based on the gradient magnitude function; it then requires a combination of morphological operations and local-maxima identification to compute the foreground markers. Our approach achieved better performance in terms of both the F1-score and the Dice coefficient.
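The H-minima transform used by the first comparison method suppresses all regional minima shallower than a depth h, so that the subsequent watershed does not over-segment on noise. A minimal 1-D sketch (toy signal, not the authors' implementation) via morphological reconstruction by erosion, i.e. iterating g = max(f, erosion(g)) starting from g = f + h:

```python
def h_minima(f, h):
    """H-minima transform of a 1-D signal f: fill minima shallower than h."""
    g = [v + h for v in f]  # marker: image lifted by h
    while True:
        # 3-sample grayscale erosion (min filter) with border replication
        eroded = [min(g[max(i - 1, 0):i + 2]) for i in range(len(g))]
        nxt = [max(fv, ev) for fv, ev in zip(f, eroded)]
        if nxt == g:        # stability reached: reconstruction complete
            return g
        g = nxt

signal = [5, 2, 5, 1, 5, 4, 5]   # minima of depth 3, 4, and 1
print(h_minima(signal, h=2))     # the depth-1 minimum (value 4) is filled
```

Only the two deep minima survive (raised by h); the shallow one is removed, so a watershed seeded on the result produces fewer spurious basins. In 2-D, library routines such as scikit-image's `morphology.h_minima` implement the same idea.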

Conclusions

Both staining heterogeneity and the irregularity of tissue structures degrade IHC images, which are characterized by a granular brown stain with intensity variations when the Ki-67 biomarker is studied. Furthermore, the acquired images are affected by overlapping cell nuclei, which introduces errors into the quantitative measurements. This work therefore required an accurate and efficient IHC quantifier for high-throughput decision-making. The proposed segmentation method, with its novel pre-processing approach, enabled us to choose the proper colour space, which improved the segmentation results. The selected colour channel was then used to highlight positive cells so that they could easily be distinguished from the other structures present in the image. Basing our approach on the component with the highest PD value increased the accuracy, as the experimental metrics confirmed.
The immunohistochemical image analysis was tested on 2 databases with different magnification factors and a large number of overlapping nuclei. The study of digestive NETs achieved satisfactory results on both datasets, with an overall DSC of 0.979, higher than the values obtained by the other algorithms used for the performance comparison.
The proposed automated segmentation process was evaluated both quantitatively, by counting the positive nuclei with the object-level criteria, and qualitatively, by assessing the segmentation quality with the pixel-wise metrics. With better separation of touching cells, the extraction of an accurate number of positive cancer cell nuclei gave promising results and enabled precise grading of digestive NETs.
This work can be extended in future studies; for example, the Ki-67 proliferation index could be determined and used for cancer grading according to the WHO classification.
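A sketch of this suggested extension: the Ki-67 proliferation index is the percentage of Ki-67-positive tumour nuclei, and the WHO classification of digestive NETs commonly thresholds grades at < 3% (G1), 3-20% (G2), and > 20% (G3). The nucleus counts below are hypothetical, and the thresholds should be verified against the current WHO edition before any clinical use.

```python
def ki67_index(positive, total):
    """Ki-67 proliferation index as a percentage of positive nuclei."""
    return 100.0 * positive / total

def who_grade(index):
    """Map a Ki-67 index (%) to a WHO grade for digestive NETs."""
    if index < 3:
        return "G1"
    if index <= 20:
        return "G2"
    return "G3"

idx = ki67_index(positive=45, total=500)  # 9.0%
print(idx, who_grade(idx))                # 9.0 G2
```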
The authors declare no conflict of interest.

References

1. Bellizzi AM. Immunohistochemistry in the diagnosis and classification of neuroendocrine neoplasms: what can brown do for you? Hum Pathol 2020; 96: 8‑33.
2. Sobecki M, Mrouj K, Camasses A, et al. The cell proliferation antigen Ki-67 organises heterochromatin. Elife 2016; 5: e13722.
3. Al-Lahham HZ, Alomari RS, Hiary H, et al. Automating proliferation rate estimation from Ki-67 histology images. Med Imaging 2012; 8315: 669‑675.
4. Roszkowiak L, Korzynska A, Siemion K, et al. System for quantitative evaluation of DAB&H-stained breast cancer biopsy digital images (CHISEL). Sci Rep 2021; 11: 9291.
5. Gomolka RS, Korzynska A, Siemion K, et al. Automatic method for assessment of proliferation index in digital images of DLBCL tissue section. Biocybern Biomed Eng 2019; 39: 30‑37.
6. Ghane N, Vard A, Talebi A, et al. Segmentation of white blood cells from microscopic images using a novel combination of k-means clustering and modified watershed algorithm. J Med Signals Sens 2017; 7: 92‑101.
7. Rahman TY, Mahanta LB, Choudhury H, et al. Study of morphological and textural features for classification of oral squamous cell carcinoma by traditional machine learning techniques. Cancer Rep 2020; 3: e1293.
8. Kim YJ, Romeike BFM, Uszkoreit J, et al. Automated nuclear segmentation in the determination of the Ki-67 labeling index in meningiomas. Clin Neuropathol 2006; 25: 67‑73.
9. Adams R, Bischof L. Seeded region growing. IEEE Trans Pattern Anal Mach Intell 1994; 16: 641‑647.
10. Gonzalez RC, Woods RE. Digital image processing (2nd ed.). Prentice Hall 2002.
11. Hameed KAS, Banumathi A, Ulaganathan G. Performance evaluation of maximal separation techniques in immunohistochemical scoring of tissue images. Micron 2015; 79: 29‑35.
12. Boykov Y, Funka-Lea G. Graph Cuts and Efficient N-D Image Segmentation. Int J Comput Vis 2006; 70: 109‑131.
13. Luck L, Carlson KD, Bovik AC, Richards-Kortum RR. An image model and segmentation algorithm for reflectance confocal images of in vivo cervical tissue. IEEE Trans Image Process 2005; 14: 1265‑1276.
14. Masmoudi H, Hewitt SM, Petrick N, Myers KJ, Gavrielides MA. Automated quantitative assessment of HER-2/neu immunohistochemical expression in breast cancer. IEEE Trans Med Imaging 2009; 28: 916‑925.
15. Naik S, Doyle S, Agner S, Madabhushi A, Feldman M, Tomaszewski J. Automated gland and nuclei segmentation for grading of prostate and breast cancer histopathology. IEEE Inter Symposium Biomed Imaging 2008; 284‑287.
16. Veta M, van Diest PJ, Kornegoor R, Huisman A, Viergever MA, Pluim JPW. Automatic nuclei segmentation in H&E stained breast cancer histopathology images. PLoS One 2013; 8: e70221.
17. Vincent L, Soille P. Watersheds in digital spaces: an efficient algorithm based on immersion simulations. IEEE Trans Pattern Anal Mach Intell 1991; 13: 583‑598.
18. Shu J, Liu J, Zhang Y, et al. Marker controlled superpixel nuclei segmentation and automatic counting on immunohistochemistry staining images. Bioinformatics 2020; 36: 3225‑3233.
19. Veta M, Huisman A, Viergever MA, van Diest PJ, Pluim JPW. Marker-controlled watershed segmentation of nuclei in H&E stained breast cancer biopsy images. IEEE International Symposium on Biomedical Imaging 2011; 618‑621.
20. Lin G, Adiga U, Olson K, Guzowski JF, Barnes CA, Roysam B. A hybrid 3D watershed algorithm incorporating gradient cues and object models for automatic segmentation of nuclei in confocal image stacks. Cytometry A 2003; 56: 23‑36.
21. Al-Jaboriy SS, Sjarif NNA, Chuprat S, Abduallah WM. Acute lymphoblastic leukemia segmentation using local pixel information. Pattern Recognit Lett 2019; 125: 85‑90.
22. Laosai J, Chamnongthai K. Classification of acute leukemia using medical-knowledge-based morphology and CD marker. Biomed Signal Process Control 2018; 44: 127‑137.
23. Bai X, Sun C, Zhou F. Splitting touching cells based on concave points and ellipse fitting. Pattern Recognit 2009; 42: 2434‑2446.
24. Aresta G, Araújo T, Kwok S, et al. BACH: Grand challenge on breast cancer histology images. Med Image Anal 2019; 56: 122‑139.
25. Rmili H, Solaiman B, Mouelhi A, Doghri R, Labidi S. Nuclei segmentation approach for digestive neuroendocrine tumors analysis using optimized color space conversion. 5th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP) 2020; 1‑6.
26. Dundar MM, Badve S, Bilgin G, et al. Computerized classification of intraductal breast lesions using histopathological images. IEEE Trans Biomed Eng 2011; 58: 1977‑1984.
27. Roszkowiak L, Zak J, Siemion K, Pijanowska D, Korzynska A. Nuclei detection with local threshold processing in DAB&H stained breast cancer biopsy images. Computer Vision and Graphics, Cham 2020; 164‑175.
28. Di Ruberto C, Puztu L. White blood cells identification and counting from microscopic blood image. Int J Med Health Biomed Pharm Eng 2013; 7: 15‑22.
29. Rogalsky JE, Ioshii SO, de Oliveira LF. Automatic ER and PR scoring in Immunohistochemistry H-DAB breast cancer images. Anais do Simpósio Brasileiro de Computação Aplicada à Saúde (SBCAS) 2021; 313‑322.
30. Huang DC, Hung KD, Chan YK. A computer assisted method for leukocyte nucleus segmentation and recognition in blood smear images. J Syst Softw 2012; 85: 2104‑2118.
31. Singhal V, Singh P. Local binary pattern for automatic detection of acute lymphoblastic leukemia. 2014 Twentieth National Conference on Communications (NCC) 2014; 1‑5.
32. Lezoray O. Segmentation d’images par morphologie mathématique et classification de données par réseaux de neurones: application à la classification de cellules en cytologie des séreuses. Université de Caen Basse-Normandie 2000.
33. Haralick RM, Shanmugam K, Dinstein I. Textural features for image classification. IEEE Trans Syst Man Cybern 1973; SMC-3: 610‑621.
34. Alsubaie N, Trahearn N, Raza SEA, Snead D, Rajpoot NM. Stain deconvolution using statistical analysis of multi-resolution stain colour representation. PLoS One 2017; 12: e0169875.
35. Roy S, Kumar Jain A, Lal S, Kini J. A study about color normalization methods for histopathology images. Micron 2018; 114: 42‑61.
36. Dzulkifli FA. Identification of suitable contrast enhancement technique for improving the quality of astrocytoma histopathological images. ELCVIA Electron Lett Comput Vis Image Anal 2021; 20: 84-98.
37. Sayadi MM, Fnaiech F. A novel morphological segmentation method for evaluating estrogen receptors’ status in breast tissue images. 1st International Conference on Advanced Technologies for Signal and Image Processing (ATSIP) 2014; 177‑182.
38. Mouelhi A, Rmili H, Ben Ali J, Sayadi M, Doghri R, Mrad K. Fast unsupervised nuclear segmentation and classification scheme for automatic Allred cancer scoring in immunohistochemical breast tissue images. Comput Methods Programs Biomed 2018; 165: 37‑51.
39. Mohapatra S, Patra D, Satpathy S. An ensemble classifier system for early diagnosis of acute lymphoblastic leukemia in blood microscopic images. Neural Comput Appl 2014; 24: 1887‑1904.
40. Fatonah N, Tjandrasa H, Fatichah C. Identification of acute lymphoblastic leukemia subtypes in touching cells based on enhanced edge detection. Int J Intell Eng Syst 2020; 13: 204‑215.
41. Roszkowiak L, Korzynska A, Pijanowska D, Bosch R, Lejeune M, Lopez C. Clustered nuclei splitting based on recurrent distance transform in digital pathology images. EURASIP J Image Video Process 2020; 2020: 26.
42. Mouelhi A, Sayadi M, Fnaiech F, Mrad K, Romdhane KB. A new automatic image analysis method for assessing estrogen receptors’ status in breast tissue specimens. Comput Biol Med 2013; 43: 2263‑2277.
43. Cui Y, Zhang G, Liu Z, Xiong Z, Hu J. A deep learning algorithm for one-step contour aware nuclei segmentation of histopathology images. Med Biol Eng Comput 2019; 57: 2027‑2043.
Copyright: © 2022 Polish Association of Pathologists and the Polish Branch of the International Academy of Pathology This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) License (http://creativecommons.org/licenses/by-nc-sa/4.0/), allowing third parties to copy and redistribute the material in any medium or format and to remix, transform, and build upon the material, provided the original work is properly cited and states its license.