# Modification of the mean-square error principle to double the convergence speed of a special case of Hopfield neural network used to segment pathological liver color images

Rachid Sammouda^{1} and Mohamed Sammouda^{2}

*BMC Medical Informatics and Decision Making* **4**:22

**DOI: **10.1186/1472-6947-4-22

© Sammouda and Sammouda; licensee BioMed Central Ltd. 2004

**Received: **14 March 2004

**Accepted: **12 December 2004

**Published: **12 December 2004

## Abstract

### Background

This paper analyzes the effect of the mean-square error principle on the optimization process using a Special Case of Hopfield Neural Network (SCHNN).

### Methods

The segmentation of multidimensional medical and colour images can be formulated as the minimization of an energy function composed of two terms: the sum of squared errors, and a noise term used to prevent the network from becoming stuck in early local minima of the energy landscape.

### Results

Here, we show that raising the error to a power higher than two (a weighted rather than simply squared error) leads the SCHNN classifier to reach, more quickly, a local minimum closer to the global minimum, while still guaranteeing acceptable segmentation results.

### Conclusions

The proposed segmentation method is used to segment 20 pathological liver colour images, and is shown to be efficient and effective enough to be implemented for clinical use.

## Background

Segmentation is an important step in most applications that use medical image data. For example, segmentation is a prerequisite for quantification of morphological disease manifestations and for radiation treatment planning [1, 2], for construction of anatomical models [3], for definitions of flight paths in virtual endoscopies [4], for content-based retrieval by structure [5], and for volume visualization of individual objects [2].

A number of algorithms based on approaches such as histogram analysis, region growing, edge detection and pixel classification have been proposed in the medical image segmentation literature. In recent years, Artificial Neural Networks (ANNs) have been proposed as an attractive alternative solution to a number of pattern recognition problems. In our previous work [6], we explored the potential of a Special Case of Hopfield Neural Network (SCHNN) in segmenting cerebral images obtained using the Magnetic Resonance Imaging (MRI) technique.

The Hopfield network for optimization applications consists of many interconnected neuron elements. The network minimizes an energy function of the form:

$$E = -\frac{1}{2}\sum_{k=1}^{N}\sum_{l=1}^{N} T_{kl}\,V_{k} V_{l} - \sum_{k=1}^{N} I_{k} V_{k} \quad\quad (1)$$

where *N* is the number of neurons, *V*_{k} is the output of the *k*-th neuron, *I*_{k} is the bias term, and *T*_{kl} is the interconnection weight between the *k*-th and *l*-th neurons. The energy function used in the segmentation problem is slightly different from the one defined by Hopfield; the arguments are given in [7].
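As an illustration, the energy above can be evaluated directly from a network state. The following is a minimal numpy sketch (the function name, shapes, and example values are our own, not from the paper):

```python
import numpy as np

def hopfield_energy(V, T, I):
    """Hopfield energy E = -1/2 * sum_kl T_kl V_k V_l - sum_k I_k V_k.

    V: outputs of the N neurons, shape (N,)
    T: symmetric interconnection weights, shape (N, N)
    I: bias terms, shape (N,)
    """
    return -0.5 * (V @ T @ V) - I @ V

# Tiny two-neuron example (illustrative values only).
V = np.array([1.0, 0.0])
T = np.array([[0.0, 2.0], [2.0, 0.0]])
I = np.array([1.0, 1.0])
E = hopfield_energy(V, T, I)   # -0.5*0 - 1 = -1.0
```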

The results obtained in [6] were superior to those obtained using the Boltzmann Machine (BM) and the conventional ISODATA clustering technique. In [8], we also showed that SCHNN is able to produce a crisp segmentation of pathological liver colour images. However, while attempting to improve the segmentation process, we found that the SCHNN segmentation results depend strongly on some parameters of the energy function formulating the classification problem. A summary of this study follows.

## Methods

The segmentation problem of an image of N pixels is formulated in [8] as a partition of the N pixels among M classes, such that the assignment of the pixels minimizes a criterion function. The SCHNN classifier structure consists of a grid of N × M neurons, with each row representing a pixel and each column representing a cluster. The network classifies the image of N pixels of P features among M classes in a way that the assignment of the pixels minimizes the following criterion function:

$$E = \frac{1}{2}\sum_{k=1}^{N}\sum_{l=1}^{M} R_{kl}^{\,n}\, V_{kl}^{2} + c(t)\sum_{k=1}^{N}\sum_{l=1}^{M} N_{kl}\, V_{kl} \quad\quad (2)$$

where *R*_{kl} is the Mahalanobis distance measure between the *k*-th pixel and the centroid of class *l*; *R*_{kl} is also equivalent to the error committed when pixel *k* is assigned to class *l*. The index *n* in *R*_{kl}^{n} is the power, or weight, of the considered error in the energy function of the segmentation problem, and *V*_{kl} is the output of the *kl*-th neuron. *N*_{kl} is an N × M vector of independent high-frequency white noise sources used to prevent the network from being trapped in early local minima. The term *c*(*t*) is a parameter controlling the magnitude of the noise, selected so that it reaches zero as the network converges. The minimization is achieved by using SCHNN and by solving the motion equations satisfying:

$$\frac{dU_{kl}}{dt} = -\mu(t)\,\frac{\partial E}{\partial V_{kl}} \quad\quad (3)$$

where *U*_{kl} is the input of the *kl*-th neuron, and *μ*(*t*) is a scalar positive function of time, used as a heuristically motivated stopping criterion of SCHNN, defined as in [6] by:

$$\beta(t) = t\,(T_{s} - t) \quad\quad (4)$$

where *t* is the iteration step and *T*_{s} is the pre-specified convergence time of the network, which has been found to be 120 iterations [6]. The network classifies the feature space, without a teacher, based on the compactness of each cluster, calculated using the Mahalanobis distance measure between the *k*-th pixel and the centroid of class *l*, given by:

$$R_{kl} = (X_{k} - \bar{X}_{l})^{T}\, \Sigma_{l}^{-1}\, (X_{k} - \bar{X}_{l}) \quad\quad (5)$$

where *X*_{k} is the P-dimensional feature vector of the *k*-th pixel (here P = 3, for the RGB colour space components), $\bar{X}_{l}$ is the P-dimensional centroid vector of class *l*, and Σ_{l} is the covariance matrix of class *l*. The segmentation algorithm is described as follows [8].
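The Mahalanobis distance measure *R*_{kl} defined above can be computed for all pixels of one class at once. The numpy sketch below uses the squared form; the function name, the vectorized layout, and the assumption that Σ_{l} is invertible are ours:

```python
import numpy as np

def mahalanobis_sq(X, centroid, cov):
    """R_kl = (X_k - Xbar_l)^T Sigma_l^{-1} (X_k - Xbar_l) for every pixel k.

    X: (N, P) pixel feature vectors (here P = 3 for RGB)
    centroid: (P,) centroid of class l;  cov: (P, P) covariance of class l
    """
    d = X - centroid                     # deviations from the class centroid
    inv = np.linalg.inv(cov)             # assumes cov is non-singular
    return np.einsum('kp,pq,kq->k', d, inv, d)

# One pixel at (3, 4, 0), identity covariance: distance = 3^2 + 4^2 = 25.
r = mahalanobis_sq(np.array([[3.0, 4.0, 0.0]]), np.zeros(3), np.eye(3))
```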

**Step 1** Initialize the input of the neurons to random values.

**Step 2** Apply the following input-output relation, establishing the assignment of each pixel to one and only one class:

$$V_{kl} = \begin{cases} 1 & \text{if } U_{kl} = \max\{U_{km},\ m = 1, \ldots, M\} \\ 0 & \text{otherwise} \end{cases}$$

**Step 3** Compute the centroid $\bar{X}_{l}$ and the covariance matrix Σ_{l} of each class *l* as follows:

$$\bar{X}_{l} = \frac{1}{n_{l}}\sum_{k=1}^{N} V_{kl}\, X_{k}, \qquad \Sigma_{l} = \frac{1}{n_{l}}\sum_{k=1}^{N} V_{kl}\,(X_{k} - \bar{X}_{l})(X_{k} - \bar{X}_{l})^{T}$$

where *n*_{l} is the number of pixels in class *l*; each element of the covariance matrix is then normalized as described in [8].
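The per-class statistics of Step 3 can be sketched as follows (the function name and array layout are ours; the additional normalization of the covariance follows [8] and is omitted here):

```python
import numpy as np

def class_statistics(X, V, l):
    """Centroid Xbar_l and covariance Sigma_l of class l (Step 3).

    X: (N, P) pixel features;  V: (N, M) binary neuron outputs from Step 2.
    """
    members = V[:, l] == 1.0             # pixels currently assigned to class l
    n_l = members.sum()
    centroid = X[members].mean(axis=0)
    d = X[members] - centroid
    cov = d.T @ d / n_l                  # sample covariance over the class
    return centroid, cov

# Three pixels all assigned to class 0.
X = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.0]])
V = np.array([[1.0], [1.0], [1.0]])
centroid, cov = class_statistics(X, V, 0)
```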

**Step 4** Update the inputs of each neuron by solving the set of differential equations in (3) using Euler's approximation:

$$U_{kl}(t + 1) = U_{kl}(t) + \frac{dU_{kl}}{dt}\,\delta t$$
**Step 5** If *t* < *T*_{s}, repeat from **Step 2**; otherwise terminate.
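Steps 1 through 5 can be sketched end-to-end in numpy. This is an illustrative reading of the algorithm, not a verified implementation: the noise schedule *c*(*t*), the rescaling of *β*(*t*), and the handling of near-empty classes are our assumptions, since the paper specifies them only qualitatively:

```python
import numpy as np

def schnn_segment(X, M, Ts=120, n=2, noise0=0.1, seed=0):
    """Sketch of the SCHNN segmentation loop (Steps 1-5).

    X : (N, P) pixel feature vectors (P = 3 for RGB).
    M : number of classes.  Ts : pre-specified convergence time.
    n : power (weight) of the error in the energy function.
    """
    rng = np.random.default_rng(seed)
    N, P = X.shape
    U = rng.random((N, M))                    # Step 1: random neuron inputs
    V = np.zeros((N, M))
    for t in range(1, Ts + 1):
        # Step 2: winner-take-all -- each pixel joins exactly one class.
        V[:] = 0.0
        V[np.arange(N), U.argmax(axis=1)] = 1.0
        # Step 3: centroids, covariances, and Mahalanobis errors R_kl.
        R = np.full((N, M), 1e6)              # large error for empty classes
        for l in range(M):
            members = V[:, l] == 1.0
            if members.sum() <= P:            # too few pixels for a covariance
                continue
            centroid = X[members].mean(axis=0)
            d_l = X[members] - centroid
            cov = d_l.T @ d_l / members.sum() + 1e-6 * np.eye(P)  # regularized
            d = X - centroid
            R[:, l] = np.einsum('kp,pq,kq->k', d, np.linalg.inv(cov), d)
        # Step 4: Euler update, dU/dt = -mu(t) * (R^n V + c(t) * noise).
        mu = t * (Ts - t) / Ts**2             # beta(t) = t(Ts - t), rescaled
        c = noise0 * (1.0 - t / Ts)           # noise magnitude decays to zero
        U -= mu * (R**n * V + c * rng.standard_normal((N, M)))
    return U.argmax(axis=1)                   # Step 5 done: final crisp labels

# Two well-separated synthetic RGB-like clusters.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (40, 3)), rng.normal(5.0, 0.1, (40, 3))])
labels = schnn_segment(X, M=2, Ts=30, n=2)
```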

Figure 2 shows the convergence behaviour of the network for *T*_{s} values between 30 and 120 iterations; similar curves were obtained for the rest of the images of the dataset. As illustrated in Figure 2, the curve corresponding to *T*_{s} = 120 iterations gives the optimal solution, just as with the MRI data [6].

In order to study the effect of the weight of the Mahalanobis distance *R*_{kl} in the cost function (2), we made a simple modification to the above algorithm as follows:

**Step 1** Use the same random N × M initialization matrix as input of the neurons when minimizing the energy function (2) with different error weights *n*.

This condition is added to the algorithm to ensure that the random field does not affect the generated results.

**Step 2** through **Step 5** remain the same.
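The modified Step 1 amounts to freezing the random field across runs. A minimal sketch (the function name and the seed value are arbitrary choices of ours):

```python
import numpy as np

def make_initial_inputs(N, M, seed=42):
    """Modified Step 1: one fixed random N x M input matrix, reused for every
    run of the minimization so that results obtained with different error
    weights n are not influenced by different random fields."""
    rng = np.random.default_rng(seed)     # fixed seed -> reproducible field
    return rng.random((N, M))

# The same matrix is produced for every value of n being compared.
U0 = make_initial_inputs(100, 4)
U1 = make_initial_inputs(100, 4)
```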

## Results

Figure 3 shows the convergence of SCHNN for different values of *n* in equation (2). As aforementioned, the pre-specified convergence time of SCHNN is fixed to *T*_{s} = 120 iterations. However, we can clearly see from Figure 3 that, with a higher value of *n* in equation (2), the same convergence point, or a position close to it, is reached in half the time needed with *n* = 2 and *T*_{s} = 120 iterations. This raises the following question: what is the relation between the variable *n* in (2) and the pre-specified convergence time *T*_{s}?

Figure 4 shows the error at convergence for different values of *n*, used to determine the value of *n* that corresponds to the optimum solution with *T*_{s} = 120 iterations. From Figure 4, it can be seen that *n* = 6 gives the optimum solution with *T*_{s} = 120. Figures similar to Figure 3 and Figure 4 were obtained with the rest of the images in the dataset.

## Discussion

### Analysis of the pre-specified convergence time effect

The pre-specified convergence time *T*_{s} interacts with the error weight *n* in equation (2): used together, they determine the local optimum reached in the energy landscape of SCHNN. Figure 5 shows the curves linking the convergence values of SCHNN to the value of *n* in equation (2), obtained with *T*_{s} values of 120, 60, and 30. We found that the curves corresponding to *T*_{s} = 120 and *T*_{s} = 60 intersect at their optimum solutions, obtained with *n* = 6, and that the two curves are similar when *n* is in the range 5–10. However, the curve corresponding to *T*_{s} = 30 shows a higher error at convergence for all values of *n*.

### Analysis of the SCHNN random initialization effect

The random initialization has no effect on the solution for the error weight *n* in (2) when it takes the value of six, where SCHNN gives optimum and acceptable results that agree with the pathologists' points of view. However, with other values of *n*, the random initialization may affect the solution of the problem, or, in other words, the error of SCHNN at convergence, as shown in Figure 6.

## Conclusions

We analyzed the effect of the mean-square error principle in formulating the segmentation problem of multidimensional medical images. We have shown, empirically, that raising the error in the energy function to an integer power of six helped SCHNN converge twice as fast to the same optimal solution obtained with the mean-square error algorithm. This result is promising for making our segmentation method useful in a Computer Aided Diagnosis (CAD) system for liver cancer and similar applications. In our future work, we will study in depth the effect of the random initialization on the segmentation result and on the SCHNN classifier.


## Declarations

### Acknowledgements

The authors thank the Research Center at the University of Sharjah for supporting this work. They also thank Dr. Maher Moussa of the English Department at the University of Sharjah for editing this paper.


## References

- Chaney E, Pizer S: Defining anatomical structures from medical images. Semin Radiat Oncol. 1992, 2: 215-225.
- Tracton G, Chaney E, Rosenman J, Pizer S: Mask: Combining 2-D and 3-D segmentation methods to enhance functionality. SPIE Conf Medical Imaging, Bellingham, WA. 1994, 98-109.
- Brinkley JF: A flexible, generic model for anatomic shape: Application to interactive two-dimensional medical image segmentation and matching. Comput Biomed Res. 1993, 26 (2): 121-142. 10.1006/cbmr.1993.1008.
- Lorensen W, Ferenc A, Kikinis R: The exploration of cross-sectional data with a virtual endoscope. Interactive Technology and the New Paradigm for Health Care, Japan. 1995, IOS Press, 221-230.
- Orphanoudakis SC, Chronaki C, Kostomanolakis S: I/Sup 2/C: A system for the indexing, storage, and retrieval of medical images by content. Med Inform. 1994, 19: 109-122.
- Sammouda R, Niki N, Nishitani H: A comparison of Hopfield Neural Network and Boltzmann Machine in segmenting MR images of the brain. IEEE Transactions on Nuclear Science. 1996, 43 (6): 3361-3368. 10.1109/23.552753.
- Amartur SC, Piraino D, Takefuji Y: Optimization neural networks for the segmentation of magnetic resonance images. IEEE Transactions on Medical Imaging. 1992, 11 (2).
- Sammouda M, Sammouda R, Niki N: Liver cancer detection system based on the analysis of digitized color images of tissue samples obtained using needle biopsy. International Journal of Information Visualization. 2002, Palgrave Press, 1 (2): 130-138. 10.1057/palgrave.ivs.9500012.
### Pre-publication history

The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6947/4/22/prepub

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.