The segmentation problem of an image of N pixels is formulated in [8] as a partition of the N pixels among M classes, such that the assignment of the pixels minimizes a criterion function. The SCHNN classifier structure consists of a grid of N × M neurons with each row representing a pixel and each column representing a cluster. The network classifies the image of N pixels of P features among M classes, in a way that the assignment of the pixels minimizes the following criterion function:

*E* = Σ_{k=1}^{N} Σ_{l=1}^{M} *R*_{kl}^{n} *V*_{kl} + *c*(*t*) Σ_{k=1}^{N} Σ_{l=1}^{M} *N*_{kl} *V*_{kl} (1)

where *R*_{kl} is the Mahalanobis distance measure between the *k*^{th} pixel and the centroid of class *l*; *R*_{kl} is also equivalent to the error committed when pixel *k* is assigned to class *l*. The index *n* in (1) is the power, or weight, of the considered error in the energy function of the segmentation problem, and *V*_{kl} is the output of the *kl*^{th} neuron. *N*_{kl} is an N × M vector of independent high-frequency white noise sources used to avoid trapping the network in early local minima. The term *c*(*t*) is a parameter controlling the magnitude of the noise, selected so that it reaches zero as the network converges. The minimization is achieved by using SCHNN and by solving the motion equations satisfying:

d*U*_{kl}/d*t* = −*μ*(*t*) (*R*_{kl}^{n} + *c*(*t*) *N*_{kl}) (2)

where *U*_{kl} is the input of the *kl*^{th} neuron, and *μ*(*t*) is a scalar positive function of time used as a heuristically motivated stopping criterion of SCHNN, defined as in [6] by:

*β*(*t*) = *t*(*T*_{s} − *t*) (4)

where *t* is the iteration step, and *T*_{s} is the pre-specified convergence time of the network, which has been found to be 120 iterations [6]. The network classifies the feature space, without a teacher, based on the compactness of each cluster, calculated using the Mahalanobis distance measure between the *k*^{th} pixel and the centroid of class *l*, given by:

*R*_{kl} = (*X*_{k} − *C*_{l})^{T} Σ_{l}^{−1} (*X*_{k} − *C*_{l}) (5)

where *X*_{k} is the P-dimensional feature vector of the *k*^{th} pixel (here P = 3, corresponding to the RGB color space components), *C*_{l} is the P-dimensional centroid vector of class *l*, and Σ_{l} is the covariance matrix of class *l*. The segmentation algorithm is described as follows [8].
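As a minimal NumPy sketch of this distance measure (assuming the squared form; the function name and the use of `np.linalg.inv` are our choices):

```python
import numpy as np

def mahalanobis_sq(x_k, c_l, sigma_l):
    """Squared Mahalanobis distance R_kl between the feature vector x_k of
    pixel k (here P = 3 RGB components) and the centroid c_l of class l."""
    d = np.asarray(x_k, dtype=float) - np.asarray(c_l, dtype=float)
    return float(d @ np.linalg.inv(sigma_l) @ d)
```

With Σ_{l} equal to the identity matrix the measure reduces to the squared Euclidean distance.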

**Step 1** Initialize the input of the neurons to random values.

**Step 2** Apply the following input-output relation, establishing the assignment of each pixel to one and only one class:

*V*_{kl} = 1 if *U*_{kl} = max{*U*_{k1}, …, *U*_{kM}}, and *V*_{kl} = 0 otherwise.

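A sketch of this winner-take-all rule (the function name is ours, and ties are assumed broken in favor of the first maximal input):

```python
import numpy as np

def winner_take_all(U):
    """Step 2: for every pixel (row of U) set V_kl = 1 only for the class
    whose input U_kl is maximal, and 0 elsewhere."""
    V = np.zeros_like(U)
    V[np.arange(U.shape[0]), np.argmax(U, axis=1)] = 1.0
    return V
```

Each row of the returned matrix sums to one, so each pixel belongs to exactly one class.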
**Step 3** Compute the centroid *C*_{l} and the covariance matrix Σ_{l} of each class *l* as follows:

*C*_{l} = (1/*n*_{l}) Σ_{k=1}^{N} *V*_{kl} *X*_{k} (6)

Σ_{l} = Σ_{k=1}^{N} *V*_{kl} (*X*_{k} − *C*_{l})(*X*_{k} − *C*_{l})^{T} (7)

where *n*_{l} is the number of pixels in class *l*; the covariance matrix is then normalized by dividing each of its elements by *n*_{l}.
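Step 3 can be sketched as follows (the helper name is ours; the division of the covariance by the class population follows the normalization just described):

```python
import numpy as np

def class_statistics(X, V, l):
    """Centroid C_l and normalized covariance Sigma_l of class l, from the
    N x P feature matrix X and the binary output matrix V of Step 2."""
    members = X[V[:, l] == 1]          # pixels currently assigned to class l
    n_l = len(members)
    C_l = members.mean(axis=0)
    D = members - C_l
    Sigma_l = (D.T @ D) / n_l          # each element divided by n_l
    return C_l, Sigma_l, n_l
```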

**Step 4** Update the inputs of each neuron by solving the set of differential equations in (2) using Euler's approximation:

*U*_{kl}(*t* + 1) = *U*_{kl}(*t*) − *μ*(*t*) (*R*_{kl}^{n} + *c*(*t*) *N*_{kl})

**Step 5** If *t* < *T*_{s}, repeat from **Step 2**; otherwise terminate.
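The five steps can be assembled into an end-to-end sketch. The *μ*(*t*) and *c*(*t*) schedules below are illustrative placeholders (the text only requires *μ*(*t*) > 0 and *c*(*t*) reaching zero at convergence), and the small diagonal term added to each covariance is a numerical safeguard of ours:

```python
import numpy as np

def schnn_segment(X, M, T_s=120, n=1, seed=0):
    """Sketch of Steps 1-5 for an N x P feature matrix X and M classes."""
    rng = np.random.default_rng(seed)
    N, P = X.shape
    U = rng.random((N, M))                       # Step 1: random inputs
    for t in range(1, T_s + 1):
        V = np.zeros_like(U)                     # Step 2: winner-take-all
        V[np.arange(N), np.argmax(U, axis=1)] = 1.0
        R = np.empty((N, M))
        for l in range(M):                       # Step 3: class statistics
            members = X[V[:, l] == 1]
            if len(members) == 0:                # re-seed an emptied class
                members = X[rng.integers(0, N, size=1)]
            C_l = members.mean(axis=0)
            D = members - C_l
            Sigma_l = D.T @ D / len(members) + 1e-6 * np.eye(P)
            diff = X - C_l
            R[:, l] = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(Sigma_l), diff)
        mu = t * (T_s - t) / T_s ** 2            # illustrative step size
        c_t = 1.0 - t / T_s                      # noise magnitude, zero at T_s
        noise = rng.standard_normal((N, M))
        U = U - mu * (R ** n + c_t * noise)      # Step 4: Euler update
    return np.argmax(U, axis=1)                  # Step 5: final labels
```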

For this study, a total of 20 liver tissue sections were provided by the pathological division of the National Cancer Center in Tokyo. These sections were taken using needle biopsy, stained with hematoxylin, and then magnified with an optical microscope. Figure 1 shows a true RGB color image of liver tissue of 768 × 512 pixels. We have used the above-described SCHNN classifier with the image components in the RGB color space. The number of classes is fixed to five based on medical information. These classes are the contour of the image, the cells' nuclei, the cytoplasm, the fibrous tissues, and the class of both blood sinus and fat cells.

Figure 2 shows the curves of the SCHNN energy function during the segmentation of the sample shown in Figure 1, with *T*_{s} values between 30 and 120 iterations. Similar curves were obtained for the rest of the images of the dataset. As illustrated in Figure 2, the curve corresponding to *T*_{s} = 120 iterations gives the optimal solution, the same as with the MRI data in [6].
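The influence of *T*_{s} enters through the schedule of eq. (4), which can be sketched in one line:

```python
def beta(t, T_s=120):
    """Eq. (4): beta(t) = t * (T_s - t); zero at t = 0 and at t = T_s,
    maximal halfway through, so updates fade out as convergence nears."""
    return t * (T_s - t)
```

For *T*_{s} = 120, `beta(0)` and `beta(120)` are both zero, while `beta(60)` attains the maximum of 3600.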

In order to study the effect of the weight of the Mahalanobis distance *R*_{kl} in the cost function (1), we have made a simple modification to the above algorithm as follows:

**Step 1** Use the same random initialization N × M matrix as the input of the neurons when minimizing the energy function (1) with different error weights *n*.

This condition is added to the algorithm to ensure that differences in the generated results are not due to the random initialization field.

**Step 2** through **Step 5** remain the same.
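With NumPy, the modified Step 1 amounts to regenerating the initial N × M matrix from a fixed seed (the seed value and function name are arbitrary choices of ours):

```python
import numpy as np

def initial_inputs(N, M, seed=42):
    """Modified Step 1: the same random N x M input matrix is regenerated for
    every run, so comparisons across error weights n share one random field."""
    return np.random.default_rng(seed).random((N, M))
```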