# Context-sensitive autoassociative memories as expert systems in medical diagnosis

- Andrés Pomi^{1}
- Fernando Olivera^{2}

*BMC Medical Informatics and Decision Making* 2006, **6**:39

**DOI: **10.1186/1472-6947-6-39

© Pomi and Olivera; licensee BioMed Central Ltd. 2006

**Received: **15 May 2006

**Accepted: **22 November 2006

**Published: **22 November 2006

## Abstract

### Background

The complexity of contemporary medical practice has driven the development of different decision-support aids based on artificial intelligence and neural networks. Distributed associative memories are neural network models that fit naturally with the view of cognition emerging from current neuroscience.

### Methods

We present the context-dependent autoassociative memory model. The sets of diseases and symptoms are mapped onto a pair of orthonormal vector bases. A matrix memory stores the associations between the signs and symptoms and their corresponding diseases. A minimal numerical example is presented to show how to instruct the memory and how the system works. To provide a quick appreciation of the validity of the model and its potential clinical relevance, we implemented an application with real data. A memory was trained with published data on neonates with suspected late-onset sepsis in a neonatal intensive care unit (NICU). A set of personal clinical observations was used as a test set to evaluate the capacity of the model to discriminate between septic and non-septic neonates on the basis of clinical and laboratory findings.

### Results

We show here that matrix memory models with associations modulated by context can perform automatic medical diagnosis. The sequential availability of new information over time makes the system progress through a narrowing process that reduces the range of diagnostic possibilities. At each step the system provides a probabilistic map of the diagnoses possible at that moment. The system can incorporate clinical experience, thereby building a representative database of historical data that captures geo-demographical differences between patient populations. The trained model succeeded in diagnosing late-onset sepsis within the test set of infants in the NICU: sensitivity 100%; specificity 80%; percentage of true positives 91%; percentage of true negatives 100%; accuracy (true positives plus true negatives over the totality of patients) 93.3%; and Cohen's kappa index 0.84.

### Conclusion

Context-dependent associative memories can operate as medical expert systems. The model is presented in a simple, tutorial way to encourage straightforward implementations by medical groups. An application with real data, presented as a primary evaluation of the validity and potential of the model in medical diagnosis, shows that the model is a highly promising alternative in the development of accurate diagnostic tools.

## Background

The extreme complexity of contemporary medical knowledge, together with the intrinsic fallibility of human reasoning, has led to sustained efforts to develop clinical decision support systems, with the hope that bedside expert systems could overcome the limitations inherent to human cognition [1]. Although the foundational hopes have not been fulfilled [2], the persistent and increasing need for reliable automated diagnostic tools, and the important benefit to society brought by any success in this area, make every advance valuable.

To further the research on computer-aided diagnosis begun in the 1960s, neural network models [3] have been added to the pioneering work on artificial-intelligence systems. The advent of artificial neural networks with the ability to identify multidimensional relationships in clinical data might improve the diagnostic power of the classical approaches. A large proportion of the neural network architectures applied to clinical diagnosis rest on multilayer feed-forward networks instructed with backpropagation, followed by self-organizing maps and ART models [4, 5]. Although they perform with significant accuracy, this performance has nevertheless remained insufficient to dispel the common fear that they are "black boxes" whose functioning cannot be well understood and, consequently, whose recommendations cannot be trusted [6].

Associative memory models, an early class of neural models [7] that fit well with the view of cognition emerging from today's brain-neuroimaging techniques [8, 9], are inspired by the capacity of human cognition to build semantic nets [10]. Their known ability to support symbolic calculus [11] makes them a possible link between connectionist models and classical artificial-intelligence developments.

This work has three main objectives: a) to point out that associative memory models can act as expert systems in medical diagnosis; b) to show in a simple and straightforward way how to instruct a minimal expert system with associative memories; and c) to encourage the implementation of this methodology at large scale by medical groups.

Therefore, in this paper we address – in a tutorial approach – the building of associative-memory-based expert systems for the medical diagnosis domain. We favour comprehensibility, and the possibility of a straightforward implementation by medical groups, over the mathematical details of the model.

## Methods

### Context-dependent autoassociative memories with overlapping contexts

Associative memories are neural network models developed to capture some of the known characteristics of human memories [12, 13]. These memories associate arbitrary pairs of patterns of neuronal activity mapped onto real vectors. The set of associated pairs is stored superimposed and distributed throughout the coefficients of a matrix. These matrix memory models are content-addressable and fault-tolerant, and are well known to share with humans the ability of generalization and universalization [14].

In an attempt to overcome a serious limitation of these classical models – their inability to evoke different associations depending on the context accompanying the same key stimulus – Mizraji [15] developed an extension of the model that performs adaptive associations. Context-dependent associations are based on a kind of second-order sigma-pi neuron [16], and have shown interesting versatility when incorporated in modules used to implement chains of goal-directed associations [17], disambiguation of complex stimuli [18], logical reasoning [19, 20], and multiple-criteria classification [21].

A context-dependent associative memory M acting as a basic expert system is a matrix

$\text{M} = \sum_{i=1}^{k} \text{d}_i \left( \text{d}_i \otimes \sum_{j(i)} \text{s}_j \right)^{\text{T}} \qquad (1)$

where d_{i} are column vectors mapping k different diseases (the set {d} is chosen orthonormal), and s_{j(i)} are column vectors mapping signs or symptoms accompanying the i-th disease (also an orthonormal set). The sets of symptoms corresponding to each disease can overlap.

The Kronecker product (⊗) between two matrices A and B is another matrix defined by

A ⊗ B = a(i, j)·B (2)

denoting that each scalar coefficient of matrix A, a(i, j), is multiplied by the entire matrix B. Hence, if A is n×m dimensional and B is k×l dimensional, the resultant matrix will have the dimension nk×ml.
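As an illustration (a sketch in NumPy — our choice of tooling, not the paper's), `np.kron` implements exactly this product, and the dimension rule can be checked directly:

```python
import numpy as np

# A is 2x3 and B is 2x2; each coefficient a(i, j) multiplies the whole
# matrix B, so A ⊗ B has dimension (2·2) x (3·2) = 4 x 6.
A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[0, 1],
              [1, 0]])
C = np.kron(A, B)

print(C.shape)   # (4, 6)
print(C[:2, :2]) # top-left block is a(0, 0)·B
```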

Note that if d are n-dimensional and s are m-dimensional vectors, the memory is a rectangular n×nm matrix. Also, the memory M can be viewed as resulting from the Kronecker-product (⊗) *enlargement* of each element of an n×n square autoassociative matrix d_{i}d_{i}^{T} by a row vector representing the sum of the corresponding signs and symptoms:

$\text{M} = \sum_{i=1}^{k} \text{d}_i \text{d}_i^{\text{T}} \otimes \sum_{j(i)} \text{s}_j^{\text{T}} \qquad (3)$
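The equivalence between the two forms of the memory — equation (1) and the Kronecker-enlargement form of equation (3) — can be verified numerically. In this sketch the summed symptom vector is a random stand-in and the dimensions (n = 3, m = 4) are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 4
d1 = np.eye(n)[:, [0]]            # one orthonormal disease vector, shape (3, 1)
s_sum = rng.normal(size=(m, 1))   # summed symptom vector for that disease

M1 = d1 @ np.kron(d1, s_sum).T    # equation (1):  d (d ⊗ Σs)^T
M3 = np.kron(d1 @ d1.T, s_sum.T)  # equation (3):  (d d^T) ⊗ (Σs)^T

print(M1.shape)                   # (3, 12): an n x nm rectangular memory
print(np.allclose(M1, M3))        # True
```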

By feeding the context-sensitive autoassociative module M with signs or symptoms, the system retrieves the set of possible diseases associated with such set of symptoms, or a single diagnosis if the criteria suffice.

At resting conditions the system is grounded in an indifferent state g. If each disease was instructed only once, in the mathematics of the model this amounts to priming the memory with a linear combination in which every disease has an equal weight:

$\text{M}\left(\text{g} \otimes \text{I}_{m \times m}\right) = \sum_{i} \left< \text{d}_i, \text{g} \right> \text{d}_i \left( \sum_{j(i)} \text{s}_j \right)^{\text{T}} = \sum_{i} \text{d}_i \left( \sum_{j(i)} \text{s}_j \right)^{\text{T}} \qquad (4)$

where $\text{g} = \sum_{i} \text{d}_i$ and I is the m×m identity matrix (m being the dimension of the symptom vectors, so that g ⊗ I matches the nm columns of M). From (4) it is evident that, after the priming, the context-dependent memory becomes a classical memory associating symptoms with diseases. If a sufficient set of concurrent signs and symptoms is presented to the waiting memory (σ = ∑s), after iteration, a final diagnosis results.

If the sets of symptoms {s_{j(i)}} corresponding to each disease were disjoint, then any single symptom s_{j(i)} would be pathognomonic and sufficient to univocally diagnose d_{i}. Otherwise, the output will be a linear combination of possible diseases, each one weighted according to the scalar product between the set of actual symptoms (σ) and the set of symptoms corresponding to each different disease: $\sum_{i} \left< \sum_{j(i)} \text{s}_j , \sigma \right> \text{d}_i$. See Figure 1 and its legend. Forcing the sum of scalar products to unity, this output provides a probabilistic map of the possible diseases associated with the clinical presentation.

### NUMERICAL EXAMPLE

#### How to instruct the memory

$\begin{array}{cc}Diseases& \begin{array}{l}Signs\&\\ symptoms\end{array}\\ \left[\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right]& 0.5*\left[\begin{array}{cccc}1& 1& 1& 1\\ 1& -1& 1& -1\\ 1& 1& -1& -1\\ 1& -1& -1& 1\end{array}\right]\\ \begin{array}{ccc}{d}_{1}& {d}_{2}& {d}_{3}\end{array}& \begin{array}{cccc}{s}_{1}& {s}_{2}& {s}_{3}& {s}_{4}\end{array}\end{array}$

According to the table and equation (1), we instruct the memory by adding a matrix for each disease. For the first disease we have d_{1}d_{1}^{T} ⊗ (s_{1} + s_{3} + s_{4})^{T}:

$\begin{array}{l}\left[\begin{array}{ccc}1& 0& 0\\ 0& 0& 0\\ 0& 0& 0\end{array}\right]\otimes [\begin{array}{cccc}1.5& 0.5& -0.5& 0.5\end{array}]=\hfill \\ =\left[\begin{array}{cccccccccccc}1.5& 0.5& -0.5& 0.5& 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\end{array}\right]\hfill \end{array}$

In the same way we will have two other matrices for the other diseases. The sum of the three matrices constitutes the memory M.

$\text{M}=\left[\begin{array}{cccccccccccc}1.5& 0.5& -0.5& 0.5& 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 1& 0& 0& -1& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 1.5& -0.5& 0.5& 0.5\end{array}\right]$
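The construction of M can be scripted directly from equation (3). This is a minimal NumPy sketch; the association d_{1} ↔ {s_{1}, s_{3}, s_{4}} is stated in the text, while the associations for the other two diseases (d_{2} ↔ {s_{2}, s_{3}}, d_{3} ↔ {s_{1}, s_{2}, s_{4}}) are read off the rows of the matrix M above:

```python
import numpy as np

D = np.eye(3)                           # disease vectors d1, d2, d3 (columns)
S = 0.5 * np.array([[ 1,  1,  1,  1],
                    [ 1, -1,  1, -1],
                    [ 1,  1, -1, -1],
                    [ 1, -1, -1,  1]])  # symptom vectors s1..s4 (columns)

# disease index -> indices of its instructed symptoms
assoc = {0: [0, 2, 3], 1: [1, 2], 2: [0, 1, 3]}

M = np.zeros((3, 12))
for i, js in assoc.items():
    s_sum = S[:, js].sum(axis=1)[None, :]            # row vector (Σ s_j)^T
    M += np.kron(np.outer(D[:, i], D[:, i]), s_sum)  # one term of equation (3)

print(M[0, :4])   # [ 1.5  0.5 -0.5  0.5]
```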

#### How the system works

See also Figure 1 and its legend.

**Time step 1**

Initial state of the system: the indifferent vector g = d_{1} + d_{2} + d_{3}, with g^{T} = [1 1 1]

A first clinical datum (s_{3}) arrives: s_{3}^{T} = [0.5 0.5 -0.5 -0.5]

Preprocessing of input vectors is performed: h = g ⊗ s_{3}

h^{T} = [0.5 0.5 -0.5 -0.5 0.5 0.5 -0.5 -0.5 0.5 0.5 -0.5 -0.5]

Resulting associated output: Mh (a linear combination of possible diagnoses)

$\text{output}(1)=\left[\begin{array}{c}1\\ 1\\ 0\end{array}\right]$

Resulting probabilistic map (each coefficient of the output vector is divided by the sum of them all):

$\text{prob}(1)=\left[\begin{array}{c}0.5\\ 0.5\\ 0\end{array}\right]$

**Time step 2**

A new symptom (s_{2}) arrives: s_{2}^{T} = [0.5 -0.5 0.5 -0.5]

Preprocessing of input vectors is performed: h = output(1) ⊗ s_{2}

h^{T} = [0.5 -0.5 0.5 -0.5 0.5 -0.5 0.5 -0.5 0 0 0 0]

Resulting associated output (Mh):

$\text{output}(2)=\left[\begin{array}{c}0\\ 1\\ 0\end{array}\right]$

Resulting probabilistic map:

$\text{prob}(2)=\left[\begin{array}{c}0\\ 1\\ 0\end{array}\right]$

#### Final result

The system has arrived at a unique final diagnosis, corresponding to disease 2.
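The two time steps above can be reproduced with a few lines (a sketch reusing the memory M and the symptom basis of the example):

```python
import numpy as np

# memory M from "How to instruct the memory", copied verbatim
M = np.array([[1.5, 0.5, -0.5, 0.5, 0, 0, 0,  0, 0,    0,   0,   0],
              [0,   0,    0,   0,   1, 0, 0, -1, 0,    0,   0,   0],
              [0,   0,    0,   0,   0, 0, 0,  0, 1.5, -0.5, 0.5, 0.5]])
S = 0.5 * np.array([[ 1,  1,  1,  1],
                    [ 1, -1,  1, -1],
                    [ 1,  1, -1, -1],
                    [ 1, -1, -1,  1]])   # columns s1..s4

state = np.ones(3)                       # indifferent vector g = d1 + d2 + d3
for j in (2, 1):                         # s3 arrives first, then s2
    state = M @ np.kron(state, S[:, j])  # preprocessing + association
    print(state, state / state.sum())    # output and probabilistic map
# step 1: output [1. 1. 0.] -> probabilities [0.5 0.5 0. ]
# step 2: output [0. 1. 0.] -> disease 2 is the final diagnosis
```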

### REAL DATA APPLICATION – diagnosing late-onset neonatal sepsis

Late-onset sepsis (invasive infection occurring in neonates after 3 days of age) is an important and severe problem in infants hospitalized in neonatal intensive care units (NICUs) [22]. The clinical signs of infection in the newborn are variable, and the earliest manifestations are often subtle and nonspecific. In the presence of a clinical suspicion of sepsis, an early and accurate diagnostic algorithm would be of outstanding value, but none is yet available [23]. In a recent retrospective study that included 47 neonates with a clinical diagnosis of suspected sepsis, Martell and collaborators [24] assessed a group of clinical and laboratory variables – surgical history, metabolic acidosis, hepatomegaly, abnormal white blood cell (WBC) count, hyperglycemia and thrombocytopenia – determining their sensitivity, specificity, likelihood ratio and post-test probability. Sepsis was defined as a positive result on one or more blood cultures in a neonate with a clinical diagnosis of suspected sepsis. A prevalence of 34% was found for their NICU.

We instructed a context-dependent autoassociative memory according to equation (3) with the data published in [24], in order to evaluate its capacity to recognize patients with or without sepsis. As a test set, we used 15 cases of suspected neonatal sepsis coming from the same NICU (personal observations of one of us, AP). From equation (3) it is clear that the different clinical presentations of the individual cases are added up and summarized in the vector ($\sum _{\text{j}(\text{i})}{\text{s}}_{\text{j}}^{\text{T}}$) representing the characteristic signs of each illness condition. We trained the memory instructing two terms d_{i}, corresponding to the two final diagnoses of confirmed sepsis and absence of sepsis.

M = [septic] [septic]^{T} ⊗ [attributes_septic]^{T} + [healthy] [healthy]^{T} ⊗ [attributes_healthy]^{T}

The column vectors used for the septic and healthy conditions were [1 0]^{T} and [0 1]^{T} respectively.

For each sign, the expected numbers of true positives (TP), false positives (FP), true negatives (TN) and false negatives (FN) among the E septic and NE non-septic patients (N = E + NE) were estimated from the published sensitivity, specificity and likelihood ratio (LR):

TP = sensitivity × E

FP = (sensitivity/LR) × NE

TN = specificity × NE

FN = N - (TP+FP+TN).

[attributes_septic]^{T} = [0.0604 0.4225 0.0604 0.4225 0.1509 0.3320 0.0604 0.0604 0.3621 0.0604 0.4225 0.0604 0.4225]

[attributes_healthy]^{T} = [0.0142 0.4248 0.0142 0.4248 0.0566 0.3823 0.0354 0.0177 0.3859 0.0283 0.4106 0.0283 0.4106].

The memory M summarizes the accumulated experience with suspected late-onset sepsis in this particular NICU, through the clinical presentations of the neonates hospitalized over one year.

Each finding was coded with a column vector of an orthonormal basis of dimension 13, pairing the presence and the absence of each sign; the absence of acidosis, for instance, was coded with the vector [0 0 0 1 0 0 0 0 0 0 0 0 0]^{T}. For each patient of the test set we added the vectors corresponding to the confirmed presence or absence of each sign. These 15 vectors representing the clinical presentations of the neonates with the diagnosis of suspected sepsis are shown in Figure 5.

i) The vector with the clinical presentation is presented to the memory M. The output, [result_vector], is a linear combination of the vectors septic [1 0]^{T} and healthy [0 1]^{T}:

[result_vector] = M * ([indifferent_vector] ⊗ [clinical presentation])

The [indifferent_vector] is the sum of septic and healthy vectors: [1 1]^{T}.

ii) A diagnosis results from the evaluation of the coefficients of the two-dimensional [result_vector]. If the first coefficient is greater than the second, the case is classified as sepsis. If the second coefficient is larger, the patient is classified as non-septic.
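Steps i) and ii) can be sketched as follows. The attribute vectors are those published above; the example clinical presentation, however, is hypothetical (it is not one of the 15 real test cases), and the helper name `classify` is ours:

```python
import numpy as np

# attribute vectors published in the text (13 clinical/laboratory features)
a_septic = np.array([0.0604, 0.4225, 0.0604, 0.4225, 0.1509, 0.3320, 0.0604,
                     0.0604, 0.3621, 0.0604, 0.4225, 0.0604, 0.4225])
a_healthy = np.array([0.0142, 0.4248, 0.0142, 0.4248, 0.0566, 0.3823, 0.0354,
                      0.0177, 0.3859, 0.0283, 0.4106, 0.0283, 0.4106])

septic, healthy = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# M = [septic][septic]^T ⊗ [attributes_septic]^T
#   + [healthy][healthy]^T ⊗ [attributes_healthy]^T
M = (np.kron(np.outer(septic, septic), a_septic[None, :]) +
     np.kron(np.outer(healthy, healthy), a_healthy[None, :]))

def classify(presentation):
    """presentation: 13-dim 0/1 vector of confirmed present/absent signs."""
    result = M @ np.kron(septic + healthy, presentation)  # indifferent vector
    return "S" if result[0] > result[1] else "NS"

# a hypothetical clinical presentation, for illustration only
print(classify(np.array([0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1.0])))
```

Note that, because the disease vectors are orthonormal, the two coefficients of [result_vector] reduce to the scalar products between the clinical presentation and each attribute vector.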

## Results

### A context-dependent memory model acting as a minimal expert system

In this work we show a minimal context-dependent memory nucleus able to support diagnostic abilities. Our expert system consists of an autoassociative memory with overlapping contexts and a feedback loop that reinjects the output into the memory at the next time step (Figure 1).

A memory M acting as a basic expert system is a matrix (equation 3)

$\text{M} = \sum_{i=1}^{k} \text{d}_i \text{d}_i^{\text{T}} \otimes \sum_{j(i)} \text{s}_j^{\text{T}},$

where the d_{i} are column vectors mapping k different diseases (the set {d} is chosen to be orthonormal), s_{j(i)} are column vectors mapping signs and symptoms accompanying the i-th disease (also an orthonormal set), and ⊗ is the Kronecker product [25] (see **Methods**). Note that if d are n-dimensional vectors (n ≥ k) and s are m-dimensional, then d_{i}d_{i}^{T} are square symmetric matrices, and the memory M is a rectangular matrix of dimensions n×nm.

### The instruction of the expert

The cognitive functioning shown by this kind of neural network model is based on the establishment of context-dependent associations. The instruction of the expert therefore consists in the instruction of the memory that stores these associations.

Each disease is instructed to the memory together with its characteristic signs and symptoms (these can include the results of laboratory exams, imaging studies, etc). For this to be done, the first step is to code each disease to be instructed with a different orthonormal vector. The same must be done with the set of signs, symptoms and paraclinical results that could accompany that set of diseases, also coding them with different column vectors of any orthonormal basis of adequate dimension.

Once the signs and symptoms corresponding to each disease have been identified and expressed as orthogonal vectors, the construction of the memory can commence. According to equation (1) this instruction consists in the superposition (the addition) of different rectangular matrices, each one corresponding to a different disease.

The instruction of the memory can be developed along two different paths.

a. *Learning from the textbook*. In this case, the expert is instructed according to up-to-date academic knowledge of each disease. A first disease is taken, coded by the column vector d_{i}, and the outer product of this vector with itself is computed (a square matrix is constructed that contains this autoassociation). At the same time, all the signs and symptoms characteristic of this disease are identified and the vectors coding them are added up ($\sum _{\text{j}(\text{i})}{\text{s}}_{\text{j}}$). Finally, the Kronecker product between the square matrix and the transpose of the vector sum is performed. An analogous procedure is carried out for each pathology. Each new resulting rectangular matrix of dimension n×nm is added to the previous ones already stored in memory M (a minimal numerical example is presented in section **Methods**, How to instruct the memory).

b. *Learning by experience*. This is a case-based way of instructing the memory. It allows the expert to progressively capture the prevalence of the different diseases in a community. Once the textbook instruction is finished, the memory is fed with the actual clinical findings of each particular patient assisted by the physician, attributing this particular constellation of signs and symptoms to the corresponding final diagnosis. The matrices resulting from new patients are progressively added to the memory. This type of representation implies two essential differences from the *learning-from-the-textbook* memory. Pathologies are not equally weighted in the memory: their representations depend on the frequency of presentation of cases in the population. In addition, for each disease the different symptoms are not equally weighted either: those corresponding to the more frequent clinical presentations will be strengthened.
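The two instruction paths can be contrasted in a short sketch. The `assoc` dictionary and the case-list format are our assumptions about how the input data would be organized; both constructors accumulate terms of equation (3):

```python
import numpy as np

def textbook_memory(assoc, D, S):
    """One equally weighted term of equation (3) per disease."""
    n, m = D.shape[0], S.shape[0]
    M = np.zeros((n, n * m))
    for i, js in assoc.items():                 # disease -> its symptom indices
        s_sum = S[:, js].sum(axis=1)[None, :]
        M += np.kron(np.outer(D[:, i], D[:, i]), s_sum)
    return M

def experience_memory(cases, D, S):
    """One term per patient: frequent diseases and frequent clinical
    presentations accumulate larger weights in the same memory."""
    n, m = D.shape[0], S.shape[0]
    M = np.zeros((n, n * m))
    for diagnosis, findings in cases:           # (disease index, sign indices)
        s_sum = S[:, findings].sum(axis=1)[None, :]
        M += np.kron(np.outer(D[:, diagnosis], D[:, diagnosis]), s_sum)
    return M

# two identical cases of disease 0 weigh it twice as heavily
D, S = np.eye(2), np.eye(3)
M = experience_memory([(0, [0, 1]), (0, [0, 1])], D, S)
```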

### Medical queries

Once the training phase is finalized, the system is ready to be used. The presentation of a first sign or symptom initiates a medical query. The availability of each new clinical or laboratory finding makes the expert advance one more step in its diagnostic decision. Even when many new signs and symptoms are available, they must be presented to the expert one at a time in order to obtain a progressive narrowing of the set of possible diagnoses. At each step, the new data are entered into the memory along with the set of possible diagnoses up to that moment. Finally, if the whole set of signs and symptoms available is sufficient, the system will arrive at a unique diagnosis.

We now follow the system in operation. The starting point is the arrival of the first clinical datum. The vector corresponding to this symptom is multiplied, by means of the Kronecker product, by the vector that represents the set of possible diagnoses (at the starting point, an indifferent vector). If the memory was instructed with equally weighted pathologies, the indifferent vector is the sum of all the disease vectors stored in the memory. If, on the contrary, the memory was instructed on the basis of individual cases, the indifferent vector is the corresponding linear combination of the stored disease vectors (the weight of each disease matches its frequency of presentation). The resulting column vector is then multiplied by the memory matrix. The output vector contains either a univocal diagnosis (if the clinical data are sufficient) or a linear combination of vectors corresponding to several diseases. If a unique diagnosis was not arrived at, when a new sign or symptom becomes available its corresponding vector enters the memory after taking its Kronecker product with the output vector of the previous step. The process is repeated, and stops when a final diagnosis is reached or when no new clinical data are available (see the continuation of the numerical example in section **Methods**, How the system works).

Even if at a certain state a final diagnosis has not been reached, the outcome of the system nevertheless represents a probabilistic map of the possible diagnoses, each one with its respective probability in agreement with the data available so far. To obtain such a map directly, it is convenient to choose as disease vectors the columns of an identity matrix of suitable dimension. In that case, the positions of the non-zero coefficients of each output vector mark the different possible diagnoses. Normalizing this output vector so that its components sum to one, the value of each non-zero coefficient represents the probability of each of those diagnoses. Otherwise, these probabilities can be obtained by multiplying the output vector by the orthonormal matrix that codifies the diseases.
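The whole query loop, including the normalization into a probabilistic map, can be condensed into one function. This is a sketch under our assumptions: the function name is ours, and the stop on a null output is a guard for contradictory data:

```python
import numpy as np

def query(M, g, symptoms):
    """Iterate the memory over findings presented one at a time.

    M        -- memory built from equation (3), shape (n, n*m)
    g        -- indifferent vector (weighted sum of the disease vectors)
    symptoms -- sequence of m-dimensional symptom vectors
    Returns the probabilistic map over diseases after the last finding.
    """
    state = np.asarray(g, dtype=float)
    for s in symptoms:
        out = M @ np.kron(state, s)
        if not np.any(out):          # null vector: contradictory data, stop
            break
        state = out
    return state / state.sum()       # normalize so coefficients sum to one

# with the memory of the numerical example, s3 followed by s2 yields disease 2
M = np.array([[1.5, 0.5, -0.5, 0.5, 0, 0, 0,  0, 0,    0,   0,   0],
              [0,   0,    0,   0,   1, 0, 0, -1, 0,    0,   0,   0],
              [0,   0,    0,   0,   0, 0, 0,  0, 1.5, -0.5, 0.5, 0.5]])
S = 0.5 * np.array([[1, 1, 1, 1], [1, -1, 1, -1],
                    [1, 1, -1, -1], [1, -1, -1, 1]])
print(query(M, np.ones(3), [S[:, 2], S[:, 1]]))   # [0. 1. 0.]
```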

### A reduced model for the diagnosis of late-onset neonatal sepsis

The system described in section **Methods** classified the patients of the test-set (N = 15) as follows (S = sepsis; NS = non-septic):

$\begin{array}{ccccccccccccccc}1& 2& 3& 4& 5& 6& 7& 8& 9& 10& 11& 12& 13& 14& 15\\ S& NS& NS& S& S& S& S& S& NS& NS& S& S& S& S& S\end{array}$

## Discussion and conclusions

We have shown here that context-dependent associative memories can act as medical decision support systems. The system requires the prior coding of a set of diseases and their corresponding semiologic findings onto separate bases of orthogonal vectors. The model presented in this communication is only a minimal module able to evaluate the probabilities of different diagnoses when a set of signs and symptoms is presented to it.

This expert system based on an associative memory shares with programs using artificial intelligence a great capacity to quickly narrow the number of diagnostic possibilities [1]. Also, it is able to cope with variations in the way that a disease can present itself.

Beginning with a textbook-instructed memory, the system evolves by accommodating (superimposing in the memory) new manifestations of disease gathered over time. This process of continued network education based on empirical evidence leads to databases representative of the different patient populations, with their own geo-demographical characteristics.

This model can easily be improved in various directions. The functioning of the system described up to now can be considered a passive phase (in the sense that it consists of an automatic evaluation of the available information). By adding another module to the system, consisting of a simple memory that associates diseases with the set of their findings, the expert can enhance its diagnostic performance. When two or three diagnostic hypotheses remain after the passive phase of diagnosis refinement, this new module can be fed with the vectors mapping each of these diseases to elicit its associated set of clinical findings. The set of absent features supporting one or the other disease determines what information must be sought next.

Another important expansion of the expert allows giving up the strong assumption that all the findings correspond to a unique disease. Our context-dependent memory stops and gives a null vector when contradictory data are provided. To prevent such behaviour, a module akin to a novelty filter could be interposed within the recursion with the following property: if a vector with only zero coefficients arrives, this module associates the whole set of diseases, avoiding setting aside relevant diagnoses and concurrent pathologies. However, this theme needs further investigation: as for almost every expert system [26], the clustering of findings and their attribution either to only one disease or to several disorders is a major challenge.

The primary implementation of a reduced version of the model, with the aim of classifying septic and non-septic neonates, showed the highly satisfactory capacity of the model to be applied to real data. We conclude that the context-sensitive associative memory model is a promising alternative in the development of accurate diagnostic tools. We expect that its easy implementation will stimulate medical informatics groups to develop this expert system at full scale.

## Declarations

### Acknowledgements

We thank Dr. Eduardo Mizraji for useful comments and Dr. Julio A. Hernández for revision and improvement of the manuscript.

## References

- Szolovits P, Patil RS, Schwartz WB: Artificial Intelligence in medical diagnosis. Annals of Internal Medicine. 1988, 108: 80-87.View ArticlePubMedGoogle Scholar
- Schwartz WB, Patil RS, Szolovits P: Artificial Intelligence in medicine: Where do we stand?. New England Journal of Medicine. 1987, 316: 685-688.View ArticlePubMedGoogle Scholar
- Arbib MA, Ed: The Handbook of Brain Theory and Neural Networks. 1995, Cambridge, MA: MIT PressGoogle Scholar
- Cross SS, Harrison RF, Lee Kennedy R: Introduction to neural networks. The Lancet. 1995, 346: 1075-1079. 10.1016/S0140-6736(95)91746-2.View ArticleGoogle Scholar
- Lisboa PJG: A review of evidence of health benefit from artificial neural network in health intervention. Neural Networks. 2002, 15: 11-39. 10.1016/S0893-6080(01)00111-3.View ArticlePubMedGoogle Scholar
- Baxt WG: Application of artificial neural networks to clinical medicine. The Lancet. 1995, 346: 1135-1138. 10.1016/S0140-6736(95)91804-3.View ArticleGoogle Scholar
- Kohonen T: Associative Memory: A System-Theoretical Approach. 1977, New York: Springer-VerlagView ArticleGoogle Scholar
- Friston KJ: Imaging neuroscience: Principles or maps?. Proc Natl Acad Sci USA. 1998, 95: 796-802. 10.1073/pnas.95.3.796.View ArticlePubMedPubMed CentralGoogle Scholar
- McIntosh AR: Towards a network theory of cognition. Neural Networks. 2000, 13: 861-870. 10.1016/S0893-6080(00)00059-9.View ArticlePubMedGoogle Scholar
- Pomi A, Mizraji E: Semantic graphs and associative memories. Physical Review E. 2004, 70: 066136-10.1103/PhysRevE.70.066136.View ArticleGoogle Scholar
- Mizraji E: Vector logics: the matrix-vector representation of logical calculus. Fuzzy Sets and Systems. 1992, 50: 179-185. 10.1016/0165-0114(92)90216-Q.View ArticleGoogle Scholar
- Anderson JA, Cooper L, Nass MM, Freiberger W, Grenander U: Some properties of a neural model for memory. AAAS Symposium on Theoretical Biology and Biomathematics. 1972, Milton, WA. Leon N Cooper Publications, [http://www.physics.brown.edu/physics/researchpages/Ibns/Cooper%20Pubs/040_SomePropertiesNeural_72.pdf]Google Scholar
- Cooper LN: Memories and memory: a physicist's approach to the brain. International J Modern Physics A. 2000, 15: 4069-4082. [http://journals.wspc.com.sg/ijmpa/15/1526/S0217751X0000272X.html]Google Scholar
- Cooper LN: A Possible Organization of Animal Memory and Learning. Proceedings of the Nobel Symposium on Collective Properties of Physical Systems. Edited by: Lundquist B & S. 1973, New York: Academic PressGoogle Scholar
- Mizraji E: Context-dependent associations in linear distributed memories. Bulletin Math Biol. 1989, 51: 195-205.View ArticleGoogle Scholar
- Valle-Lisboa JC, Reali F, Anastasía H, Mizraji E: Elman topology with sigma-pi units: An application to the modelling of verbal hallucinations in schizophrenia. Neural Networks. 2005, 18: 863-877. 10.1016/j.neunet.2005.03.009.View ArticlePubMedGoogle Scholar
- Mizraji E, Pomi A, Alvarez F: Multiplicative contexts in associative memories. BioSystems. 1994, 32: 145-161. 10.1016/0303-2647(94)90038-8.View ArticlePubMedGoogle Scholar
- Pomi-Brea A, Mizraji E: Memories in context. BioSystems. 1999, 50: 173-188. 10.1016/S0303-2647(99)00005-2.View ArticlePubMedGoogle Scholar
- Mizraji E, Lin J: A dynamical approach to logical decisions. Complexity. 1997, 2: 56-63. 10.1002/(SICI)1099-0526(199701/02)2:3<56::AID-CPLX12>3.0.CO;2-S.View ArticleGoogle Scholar
- Mizraji E, Lin J: Fuzzy decisions in modular neural networks. Int J Bifurcation and Chaos. 2001, 11: 155-167. 10.1142/S0218127401002043.View ArticleGoogle Scholar
- Pomi A, Mizraji E: A cognitive architecture that solves a problem stated by Minsky. IEEE on Systems, Man and Cybernetics B (Cybernetics). 2001, 31: 729-734. 10.1109/3477.956034.View ArticleGoogle Scholar
- Stoll BJ, Hansen N, Fanaroff AA, Wright LL, Carlo WA, Ehrenkranz RA, Lemons JA, Donovan EF, Stark AR, Tyson JE, Oh W, Bauer CR, Korones SB, Shankaran S, Laptook AR, Stevenson DK, Papile L-A, Poole WK: Late-Onset Sepsis in Very Low Birth Weight Neonates: The Experience of the NICHD Neonatal Research Network. Pediatrics. 2002, 110: 285-291. 10.1542/peds.110.2.285.View ArticlePubMedGoogle Scholar
- Rubin LG, Sánchez PJ, Siegel J, Levine G, Saiman L, Jarvis WR: Evaluation and Treatment of Neonates with Suspected Late-Onset Sepsis: A Survey of Neonatologists' Practices. Pediatrics. 2002, 110 (4): e42-10.1542/peds.110.4.e42.View ArticlePubMedGoogle Scholar
- Perotti E, Cazales C, Martell M: Estrategias para el diagnóstico de sepsis neonatal tardía. Rev Med Uruguay. 2005, 21: 314-320. [http://www.rmu.org.uy/revista/2005v4/art11.pdf]Google Scholar
- Van Loan CF: The ubiquitous Kronecker product. Journal of Computational and Applied Mathematics. 2000, 123: 85-100. 10.1016/S0377-0427(00)00393-9.View ArticleGoogle Scholar
- Szolovits P, Pauker SG: Categorical and probabilistic reasoning in medicine revisited. Artificial Intelligence. 1993, 59: 167-180. 10.1016/0004-3702(93)90183-C.View ArticleGoogle Scholar
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6947/6/39/prepub

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.