
A survey on computer aided diagnosis for ocular diseases

  • Zhuo Zhang1,2,
  • Ruchir Srivastava1,
  • Huiying Liu1,
  • Xiangyu Chen1,
  • Lixin Duan1,
  • Damon Wing Kee Wong1,
  • Chee Keong Kwoh2,
  • Tien Yin Wong3 and
  • Jiang Liu1
Contributed equally
BMC Medical Informatics and Decision Making 2014, 14:80

DOI: 10.1186/1472-6947-14-80

Received: 6 December 2013

Accepted: 12 August 2014

Published: 31 August 2014

Abstract

Background

Computer Aided Diagnosis (CAD), which can automate the detection process for ocular diseases, has attracted extensive attention from clinicians and researchers alike. It not only alleviates the burden on clinicians by providing an objective opinion with valuable insights, but also offers early detection and easy access for patients.

Method

We review ocular CAD methodologies for various data types. For each data type, we investigate the databases and the algorithms to detect different ocular diseases. Their advantages and shortcomings are analyzed and discussed.

Result

We have studied three types of data (i.e., clinical, genetic and imaging) that have been commonly used in existing methods for CAD. The recent developments in methods used in CAD of ocular diseases (such as Diabetic Retinopathy, Glaucoma, Age-related Macular Degeneration and Pathological Myopia) are investigated and summarized comprehensively.

Conclusion

While CAD for ocular diseases has shown considerable progress over the past years, fully automatic CAD systems that embed clinical knowledge and integrate heterogeneous data sources remain clinically important and hold great potential for future breakthroughs.

Keywords

Computer Aided Diagnosis (CAD); Ocular diseases; Review; Clinical data; Ocular imaging; Genetic information

Background

Patients with ocular diseases are often unaware of the asymptomatic progression of their disease [1] until a later stage, when treatment is less effective in preventing vision impairment [2]. Though regular eye screening enables early detection and timely intervention for such diseases, it puts a significant strain on limited clinical resources. Computer Aided Diagnosis (CAD) systems, which automate the process of ocular disease detection, are urgently needed to alleviate the burden on clinicians.

Owing to the fast pace of technological advancements in both hardware and software, many CAD systems have been developed for the diagnosis of ocular diseases over the past years, though most of them are still undergoing evaluation or clinical validation. For example, Fujita et al. [3] discussed an emerging CAD system using retinal fundus images for the detection of glaucoma, diabetic retinopathy (DR) and hypertensive retinopathy. Their project has entered the final stage of development, and commercialized CAD systems are expected to appear upon its completion.

Though such fully automated systems are not yet on the market, semi-automated and manual computer systems incorporating these CAD systems are relatively widely used, with several clinical publications already reporting on their usage. Examples of the development of such systems include IVAN [4] from University of Wisconsin and more recently SIVA from National University of Singapore [5] for semi-automated vascular analysis. Software packages allowing for processing of data garnered from these systems also exist: ADRES 3.0 by Perumalsamy et al. [6] is used for the grading of DR and has been commercialised and deployed for use in diabetic centres and general physician clinics in India; the Singapore Eye Research Institute has also been running clinical trials for the diagnosis of several ocular diseases (e.g., pathological myopia (PM), DR and age related macular degeneration (AMD)) using a uniform set of ophthalmic image reading and analysis protocols [7].

This survey covers three types of data for CAD systems: clinical data, image based data and genetic data. Clinical data refers to a patient’s demographic information (e.g., age, race etc.) and data acquired from clinical laboratory tests or exams, e.g. intra-ocular pressure (IOP), but excludes data acquired from digital imaging or genomic tests (Section “Result: CAD of ocular diseases based on clinical data”). Image based data refers to images captured using an imaging device for observing the pathology in the affected part of the eye (details are in Section “Imaging modalities”). Genetic information refers to any data obtained from an individual’s DNA, genes or proteins (Section “Result: predicting ocular diseases based on genetic information”). These definitions are specific to this paper and may vary depending on context. Of the three data types, CAD systems using clinical data have already been widely studied in the clinical field [8–10]. As far as CAD using genetic information is concerned, recent advancements in genotyping technology have made individual genetic information more commonly available, but it is still unfeasible to utilise genetic information for CAD systems on a large scale at present. Perhaps with time, genetic information will find its rightful place in medicine by supplementing phenotypic clinical data with validated genetic interpretations [11]. We cover genetic data as a possible input to future CAD systems. A considerable part of the survey is focused on the usage of image based data in CAD systems, as images are by far the most important type of data in ocular disease diagnosis.

There have been surveys on retinal imaging in the area of ocular research [12, 13]. However, a broader literature survey on using CAD for ocular disease diagnosis is lacking. This has motivated us to write a systematic review of recently developed methods for CAD in ocular research.

Methods

In this work, we review research and development on automatic ocular disease diagnosis in the light of three data types, viz. clinical, image and genetic. For each data type, we investigate the algorithms and available databases developed for different ocular diseases. The associated publications were retrieved from two literature databases, PubMed and IEEEXplore. For works which use images as data, we summarize the statistics of image-based studies conducted on various ocular diseases, in order to understand the major image modalities used for CAD applications and the trends in research areas. We also examine biomedical databases to extract known genetic information regarding ocular diseases.

The results of the review are presented in three sections: Sections “Result: CAD of ocular diseases based on clinical data” and “Result: CAD of ocular diseases based on imaging” describe the CAD of ocular diseases based on clinical data and ocular imaging respectively. Section “Result: predicting ocular diseases based on genetic information” concerns studies relating genomic informatics to disease prediction. Furthermore, in Section “Discussion” we discuss the observed trends in the field and the possibility of CAD systems based on integrated data sources.

Result: CAD of ocular diseases based on clinical data

One of the pioneering research works on Clinical Decision Support Systems (CDSS), CASNET [14] (causal-associational network), was developed in the late 1970s to assist in the diagnosis of glaucoma. Clinical data used in CASNET covered symptoms reported by the patient, e.g., ‘ocular pain’, ‘decreased visual acuity’ and various eye examination results, e.g. visual acuity, IOP, anterior chamber depth, angle closure, pupil abnormality and corneal edema [15]. CASNET used a descriptive model of the disease process for logical interpretation of clinical findings for glaucoma. The model representing pathophysiological mechanisms had the form of a semantic net with weighted links. It represented early medical expert systems, providing a framework describing the knowledge of expert consultants and simulating various aspects of the cognitive process of clinicians.

In 2002, Chan et al. [16] reported the first implementation of Support Vector Machines (SVM) in glaucoma diagnosis. Clinical data used in the research was the output from Standard Automated Perimetry (SAP), a common computerized visual field test. The authors compared the performance of a number of machine learning algorithms with SAP output. The machine learning algorithms studied included multilayer perceptron (MLP), SVM, Linear and Quadratic Discriminant Analysis (LDA and QDA), Parzen window, mixture of Gaussian (MOG), and mixture of generalized Gaussian (MGG). It was observed that machine-learning-type classifiers showed improved performance over the best indexes from SAP. The authors also discussed the advantage of using feature selection to further improve the classification accuracy with a potential to reduce testing time by diminishing the number of visual field location measurements.
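The classifier comparison above can be illustrated with a minimal sketch (not the authors' implementation): an SVM trained on per-location visual-field sensitivities, as would come from SAP. The data here are synthetic stand-ins; a real study would use measured sensitivities with clinician-confirmed labels.

```python
# Hedged sketch of SVM-based glaucoma classification from visual-field data.
# Synthetic data only: glaucomatous eyes are simulated with lowered, more
# variable sensitivity values (dB) across the test locations.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_locations = 52          # test locations in a typical 24-2 visual field
n_patients = 200

healthy = rng.normal(30, 2, size=(n_patients // 2, n_locations))
glaucoma = rng.normal(26, 4, size=(n_patients // 2, n_locations))
X = np.vstack([healthy, glaucoma])
y = np.array([0] * (n_patients // 2) + [1] * (n_patients // 2))

# Standardize features, then fit an RBF-kernel SVM; evaluate with 5-fold CV.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print("mean CV accuracy: %.2f" % scores.mean())
```

Feature selection, as discussed in the study, would correspond here to dropping uninformative visual-field locations before training.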

In 2011, Bizios et al. [17] investigated data fusion methods and simple combinations of parameters obtained from SAP and measurements of Retinal Nerve Fibre Layer Thickness (RNFLT) from Optical Coherence Tomography (OCT), for the diagnosis of glaucoma using Artificial Neural Networks. The results showed that the diagnostic accuracy from the fused SAP and OCT data was higher than that from either of the two alone. This was the first reported study using fused data for glaucoma diagnosis.

A recent study [18] investigated the relationship between central corneal thickness (CCT), Heidelberg Retina Tomography II (HRT II) structural measurements and IOP using an innovative non-linear multivariable regression method, in order to define risk factors for future glaucoma development.

Two recent works on ocular disease diagnosis based on clinical data deserve mention here. Liu et al. [19] developed an automatic glaucoma diagnosis and screening architecture, AGLAIA-MII (automatic glaucoma diagnosis through medical imaging informatics), which combined subjects’ personal data, imaging information from Digital Fundus Photographs (DFPs), and patients’ genome information for glaucoma diagnosis. Features from each data source were extracted automatically and then passed to a multiple kernel learning (MKL) framework to generate a final diagnosis. In another work, Zhang et al. [20] proposed a computer-aided diagnosis framework for Pathological Myopia (PM) based on biomedical and image informatics. The heterogeneous data sources comprised fundus images, demographic/clinical data and genetic data. Their system combined these potentially complementary pieces of information to enhance understanding of the disease, providing a holistic appreciation of the multiple risk factors as well as improving diagnostic outcomes. A data-driven approach was proposed to exploit the growth of heterogeneous data sources to improve assessment outcomes.

Other less prevalent diseases which are detected using clinical data are briefly explained in the following:

Trachoma:

Most people with trachoma in its initial stages display no signs or symptoms. Clinically, trachoma can be diagnosed using magnifiers and a flashlight (physical examination) or by culturing a bacterial sample from the eye in a laboratory [21].

Onchocerciasis:

Onchocerciasis is the second leading cause of infectious blindness worldwide. Also called ‘river blindness’, it is a skin and eye disease caused by a parasitic worm and spread by blackflies that breed in fast-flowing water. The two common diagnostic techniques are skin biopsies and serological assays [22].

Clinical databases

There are a number of large scale or population-based eye studies conducted in various countries. For example,

  •  Blue Mountains Eye Study (Australia) [23]

  •  Singapore Malay Eye Study [24]

  •  Singapore Indian Eye Study [25]

  •  Singapore Chinese Eye Study [26]

Many research works on various ocular diseases have been published based on the data collected in these eye studies. However, the data is not publicly available to the research community.

Result: CAD of ocular diseases based on imaging

In ophthalmology, ocular imaging has developed rapidly over the past 100 years and plays a critical role in clinical care and ocular disease management [27]. Large-scale systematic research and development of CAD from radiology and medical images began in the early 1980s. The first report on retinal image analysis was published in 1973, focusing on vessel segmentation [28]. In 1984, Baudoin et al. [29] described an image analysis method for detecting lesions related to DR.

Over the past 20 years, developments in image processing relevant to ophthalmology have paved the way for the development of automated diagnostic systems for many diseases such as DR [30], AMD [31], glaucoma [32] and cataract [33]. These diagnostic systems offer the potential to be used in large-scale screening programs, with significant resource savings, as well as freedom from observer bias and fatigue. This section briefly mentions such CAD systems based on ocular imaging. Details are mentioned in Appendix B Details on methods for disease detection. The imaging modalities used by these systems are first introduced below.

Imaging modalities

Figure 1 shows the anatomy of the eye. The visible parts of the eye include the transparent cornea, the sclera, the iris and the pupil. A ray of light passes through the cornea and anterior chamber, followed by the pupil, the lens and the vitreous, before finally focusing on the retina [12].
https://static-content.springer.com/image/art%3A10.1186%2F1472-6947-14-80/MediaObjects/12911_2013_Article_843_Fig1_HTML.jpg
Figure 1

Ocular Anatomy and various image modalities. An illustration of the parts of the eye and the imaging modalities associated with them.

Various medical imaging devices have been developed to capture the different parts of the eye. These imaging modalities are developed based on various technologies and the captured images are used to observe various pathological signs. Table 1 lists the anatomical structure(s) and the associated disease(s) each imaging modality is able to observe.
Table 1

Imaging modalities and diseases observed

| Imaging modality | Technology | Targets | Diseases observed |
| --- | --- | --- | --- |
| Retina Fundus | 2D; images considerably larger areas of the fundus than can be seen at one time with handheld ophthalmoscopes | Interior surface of the eye (retina, optic disc, macula, posterior pole) | DR, glaucoma, AMD |
| OCT | 3D; high resolution cross-sectional imaging | Cornea thickness, retinal nerve fibre layer tissue, macular thickness | Glaucoma, macular degeneration and edema |
| Heidelberg Retina Tomography (HRT) | 2D; confocal scanning laser ophthalmoscope | Retina | Glaucoma |
| Slit Lamp | 2D; high-intensity light source; stereoscopic magnified view of the eye structures | Eyelid, sclera, conjunctiva, iris, lens, cornea | Cataract |
| RetCam | 2D; wide angle imaging | Anterior segment, anterior chamber | Anterior segment lesions, Retinopathy of Prematurity |
| Scanning laser polarimetry (SLP) | High resolution cross-sectional imaging | Thickness of RNFL | Glaucoma |
Though the eye fundus has been observed since 1850 with the invention of the ophthalmoscope by the German physician Hermann von Helmholtz [34], it was not until the mid-1920s that the Carl Zeiss Company made available the first commercial fundus camera. In the late 1950s fundus photography became ubiquitous in the practice of ophthalmology for general fundus examination and as a means for recording, storing, and indexing images of a patient with relatively simple and affordable equipment [13]. In recent years, other important imaging modalities, such as fluorescent angiography, stereo fundus photography and confocal laser ophthalmoscopy, have appeared to enhance diagnostic and observational capabilities in ophthalmology [35].

Major image modalities used for CAD applications and other research trends are shown in Figure 2. These statistics, obtained by searching the IEEEXplore publication database, demonstrate the trends in research areas and the major imaging modalities for ocular research. Figure 2(a) shows the number of publications related to various ocular imaging modalities, while Figure 2(b) shows the number of publications on CAD for ocular diseases using retinal images. The keywords associated with the search are mentioned in the legend of the corresponding figures. It is observed from Figure 2(a) that of all the imaging modalities, DFP has attracted the most interest. This observation is further substantiated by a distribution of the works surveyed in this paper (Table 2), wherein the works are arranged according to the disease and the associated imaging modality. Note that imaging modalities or diseases with very few associated works have not been included.
https://static-content.springer.com/image/art%3A10.1186%2F1472-6947-14-80/MediaObjects/12911_2013_Article_843_Fig2_HTML.jpg
Figure 2

Publication trends for ocular disease detection. (a) Number of publications each year for different ocular imaging modality (b) Number of publications each year for different ocular disease detection using retinal image (queries to IEEEXplore are as on May 2013).

Table 2

A distribution of works on CAD of major ocular diseases based on imaging

| Modality | AMD | Cataract | DR | Glaucoma | PM |
| --- | --- | --- | --- | --- | --- |
| OCT | [31, 36] | | [37] | [38–40, 44–46, 49] | [41–43, 47, 48] |
| Slit Lamp | | [33, 50–58] | | | |
| SLP | | | | [59–62] | |
| Retina Fundus | [63–65, 74–76, 84–86, 93–95, 102–104, 111, 112, 119–121, 127, 128, 135, 136, 142–144, 147] | | [66–68, 77–79, 87–89, 96–98, 105–107, 113–115, 122–124, 129–131, 137–139, 145, 146, 148–158] | [32, 69, 70, 80–82, 90–92, 99–101, 108–110, 116–118, 125, 126, 132–134, 140, 141] | [20, 71–73, 83] |
| HRT | | | | [159–162] | |
The possible reasons for this observation are twofold. First, information extracted from the eye fundus can be useful in detecting a variety of diseases such as heart disorders, stroke, hypertension, peripheral vascular disease and DR [13]. Furthermore, the availability of inexpensive fundus imaging cameras makes eye examination simple and cost effective. Another modality gaining interest in the research community is OCT. First proposed in 1991 [163], OCT has been widely applied in medical imaging, especially for imaging the eye. The most important advantage of OCT over DFP is that it provides quantifiable depth information, enabling a 3D scan of the target part; it is therefore possible to detect pathologies with topological changes in-vivo. Although a powerful tool [164], OCT-based ocular disease detection was constrained in its early years by the speed of OCT imaging, as early versions of OCT required a long time to capture an image. In recent years, with the progress of spectral domain OCT (SD-OCT), which needs only 6 seconds to take a high resolution image, OCT-based ocular disease detection methods have grown in popularity [165]. A brief description of image databases using DFP and OCT is presented in Appendix A Image databases. In terms of diseases, the most studied is DR, followed by glaucoma and AMD (Figure 2(b)).

The images associated with the above mentioned modalities often need preprocessing to remove noise and improve contrast before they can be analyzed further using CAD methods.

Image preprocessing

Some of the common preprocessing methods are histogram equalization [79, 87], shade correction [88, 89, 96], convolution with a Gaussian mask [97], median filtering [98] and blood vessel removal [105, 106]. Most contrast enhancement techniques use histogram equalization [79, 87], while shade correction is often used to normalize illumination [88, 89, 96]. For noise reduction, the commonly used techniques are convolution with a Gaussian mask [97] or median filtering [98]. Some methods also use blood vessel removal as a preprocessing step, since vessels can be detected as false positives when detecting red lesions, especially MAs [105, 106].

The choice of a suitable preprocessing method depends on the desired effect. Antal and Hajdu [107] experimentally showed that contrast limited adaptive histogram equalization [113] effectively improves local contrast but also introduces noise. Considering this subjective nature of preprocessing methods, [107] proposed to choose the best pair of preprocessing and segmentation methods through a fusion algorithm.
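Two of the steps above, contrast enhancement by histogram equalization and noise reduction by median filtering, can be sketched as follows. This is a minimal illustration on a synthetic grayscale image, not any cited paper's pipeline; real systems typically operate on the green channel of the fundus photograph.

```python
# Sketch of common fundus preprocessing: histogram equalization followed by
# median filtering. Synthetic low-contrast image used for illustration.
import numpy as np
from scipy.ndimage import median_filter

def hist_equalize(img):
    """Map intensities through the normalized cumulative histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()          # cdf of the lowest occurring value
    lut = np.round(255.0 * (cdf - cdf_min) / (cdf[-1] - cdf_min))
    return lut.clip(0, 255).astype(np.uint8)[img]

rng = np.random.default_rng(1)
img = rng.integers(100, 156, size=(64, 64)).astype(np.uint8)  # low contrast
eq = hist_equalize(img)          # intensities spread over the full 0..255 range
den = median_filter(eq, size=3)  # median filter suppresses impulse noise
print(img.max() - img.min(), "->", int(eq.max()) - int(eq.min()))
```

The equalized image uses the full dynamic range, while the median filter removes isolated noisy pixels without blurring edges as strongly as a Gaussian would.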

The remaining part of this section surveys works on detecting the major ocular diseases, focusing mainly on DR, PM, AMD and glaucoma, since these diseases are investigated more than others. For these diseases, DFP is still the mainstream modality, but OCT is rapidly gaining widespread adoption; we therefore focus on these two modalities. Works on other diseases, such as cataract and corneal opacity, are reviewed briefly in Section “Other diseases”.

Diagnostic methods for diseases

This section briefly introduces causes and symptoms for the major ocular diseases, methods of detecting them from images and a brief discussion on the state-of-the-art and possible future directions. More details on the algorithms are mentioned in Appendix B Details on methods for disease detection.

Diabetic retinopathy

Causes and symptoms
DR is a complication of diabetes, caused when the blood vessels in the eye become blocked due to high sugar content in the blood [166]. The reduced blood supply to the retina can even cause blindness [98]. Symptoms of DR include lesions appearing on the retinal surface, which are visible in a DFP. Figure 3(a) and (b) show the DFPs of a normal eye and a DR-affected eye, respectively. DR-related lesions can be categorized into red lesions, such as Microaneurysms (MA) and Haemorrhages, and bright lesions, such as Hard Exudates (HE) and cotton-wool spots (Figure 3(c)). A few works detect other symptoms as well [146].
https://static-content.springer.com/image/art%3A10.1186%2F1472-6947-14-80/MediaObjects/12911_2013_Article_843_Fig3_HTML.jpg
Figure 3

How DR looks in a DFP. (a) DFP of a normal eye. (b) DFP of an eye affected with DR. (c) Common lesions associated with DR. (d) A distribution showing the number of works detecting each type of symptom.

Detection

Almost all of the work for detecting DR has been performed using DFPs. Most of these approaches detect lesions with special focus on detecting red lesions (Figure 3(d)) especially MAs. MAs receive higher attention since they indicate DR at an early stage [98]. This is important considering that one of the goals for CAD is to provide early detection (Section “Background”). Lesions are detected using morphological operations [114, 167] or image filters [130, 131]. From our study, we could not find any work on detecting lesions from OCT images.
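The morphological route to lesion detection can be illustrated with a toy sketch (not any cited paper's method): a black top-hat transform highlights small dark structures, such as microaneurysm-like blobs, against a brighter background. The image and threshold here are synthetic assumptions.

```python
# Hedged illustration of red-lesion detection by morphological operations:
# a black top-hat (grey closing minus the image) picks out dark spots
# smaller than the structuring element. Synthetic image for demonstration.
import numpy as np
from scipy.ndimage import grey_closing

img = np.full((40, 40), 200, dtype=np.float64)   # bright retinal background
img[10:13, 10:13] = 80                           # small dark blob (MA-like)
img[25:28, 30:33] = 90                           # a second, fainter blob

# Closing fills dark structures smaller than the 7x7 element; subtracting
# the original image leaves a response only at those dark spots.
tophat = grey_closing(img, size=(7, 7)) - img
candidates = tophat > 50                         # simple fixed threshold
print(int(candidates.sum()))                     # pixels flagged as lesion
```

In a real pipeline the candidates would be filtered further, e.g. by removing responses on blood vessels, which is exactly why vessel removal appears as a preprocessing step.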

Brief discussion

From the survey of works on DR, it was observed that most works have focused on detecting lesions associated with DR. Few works [156] have gone further to convert lesion detection into DR detection. Even for DR detection, most of the surveyed works present their results as a binary decision, i.e., whether DR is present in an eye or not. It might be useful to also grade the severity of DR.

In terms of the approach used, only a few works [157] have attempted to bypass lesion detection and use non-clinical features for DR detection. Future research can focus on filling these gaps.

Glaucoma

Causes and symptoms

Glaucoma is characterized by the progressive degeneration of optic nerve fibres, which leads to structural changes of the optic nerve head, the nerve fibre layer and a simultaneous functional failure of the visual field. As the symptoms only occur when the disease is quite advanced, glaucoma is called the silent thief of sight. Although glaucoma cannot be cured, its progression can be slowed down by treatment. Therefore, timely diagnosis of this disease is important [168, 169].

Detection

Glaucoma diagnosis is typically based on the medical history, intra-ocular pressure and visual field loss tests, together with a manual assessment of the Optic Disc (OD) through ophthalmoscopy. The OD, or optic nerve head, is the location where ganglion cell axons exit the eye to form the optic nerve, through which visual information from the photoreceptors is transmitted to the brain. In 2D images, the OD can be divided into two distinct zones: a central bright zone called the optic cup (in short, cup) and a peripheral region called the neuroretinal rim [90]. Glaucoma causes an enlargement of the cup region with respect to the OD (thinning of the neuroretinal rim), called cupping [69]. This is one of the important indicators, and various parameters related to cupping have been used to detect glaucoma.

These parameters include the vertical cup to disc ratio (CDR) [170], disc diameter [171, 172], the ISNT rule [173], peripapillary atrophy (PPA) [174] and notching [175]. The most popular measurement is CDR, which is clinically computed as the ratio of the vertical cup diameter (VCD) to the vertical disc diameter (VDD) (Figure 4).
https://static-content.springer.com/image/art%3A10.1186%2F1472-6947-14-80/MediaObjects/12911_2013_Article_843_Fig4_HTML.jpg
Figure 4

Major structures of the optic disc in DFP. The region enclosed by the blue line is the optic disc; the central bright zone enclosed by the red line is the optic cup; and the region between the red and blue lines is the neuroretinal rim.
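The CDR measurement just described reduces to a simple computation once the disc and cup have been segmented. The sketch below uses toy elliptical masks as stand-ins for real segmentation output; the shapes and sizes are illustrative assumptions.

```python
# Minimal sketch of vertical cup-to-disc ratio (CDR): the vertical extent of
# the segmented cup divided by the vertical extent of the segmented disc.
import numpy as np

def vertical_extent(mask):
    """Number of image rows spanned by a binary mask."""
    rows = np.where(mask.any(axis=1))[0]
    return rows.max() - rows.min() + 1 if rows.size else 0

# Toy elliptical disc and cup masks centered at (50, 50).
yy, xx = np.mgrid[0:100, 0:100]
disc = ((yy - 50) ** 2 / 40 ** 2 + (xx - 50) ** 2 / 35 ** 2) <= 1
cup = ((yy - 50) ** 2 / 20 ** 2 + (xx - 50) ** 2 / 18 ** 2) <= 1

cdr = vertical_extent(cup) / vertical_extent(disc)
print(round(cdr, 2))  # ~0.51 here; a CDR above ~0.6 is a common glaucoma cue
```

As the survey notes under limitations, any error in the disc or cup segmentation propagates directly into this ratio.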

Brief discussion

Utilizing DFP and OCT to detect glaucoma are two popular and active research directions, with OCT having a shorter history. To date, time-domain OCT and SD-OCT have been widely utilized for glaucoma detection [38–40, 44–46, 49]. However, swept-source OCT (SS-OCT) has not yet been exploited for glaucoma research. For DFP, the combined analysis of stereo DFP and OCT for extracting disc parameters may boost the performance of current state-of-the-art algorithms.

Age-related macular degeneration (AMD)

Causes and symptoms
AMD causes vision loss at the central region and blur and distortion at the peripheral region (Figure 5). Depending on the presence of exudates, AMD is classified into dry AMD (non-exudative AMD) and wet AMD (exudative AMD). Dry AMD results from atrophy of the retinal pigment epithelial layer below the retina [176]. It causes vision loss through loss of photoreceptors (rods and cones) in the central part of the retina. The major symptom, and also the first clinical indicator, of dry AMD is drusen: sub-retinal deposits formed by retinal waste. Wet AMD causes vision loss due to abnormal blood vessel growth (choroidal neovascularization) in the choriocapillaris, through Bruch’s membrane, ultimately leading to blood and protein leakage below the macula. Bleeding, leaking, and scarring from these blood vessels eventually cause irreversible damage to the photoreceptors and rapid vision loss if left untreated. The major symptom of wet AMD is exudation [177].
https://static-content.springer.com/image/art%3A10.1186%2F1472-6947-14-80/MediaObjects/12911_2013_Article_843_Fig5_HTML.jpg
Figure 5

Vision damage caused by AMD. (a) Image of a normal eye. (b) Image of an eye affected with AMD. (Image taken from Wikipedia http://en.wikipedia.org/wiki/Macular_degeneration).

Detection

AMD can be detected from DFP, OCT, X-ray, and Magnetic Resonance Imaging (MRI). Among them, DFP is perhaps the most widely used for AMD detection, while OCT is rapidly growing in use. Most approaches detecting AMD from DFPs focus on detecting drusen using local thresholding [63, 65], wavelets [63], background modeling [94], saliency [102], etc. Some works have also attempted to bypass drusen detection and directly predict AMD [111, 112, 119, 120, 127, 128, 178]. When detecting AMD from OCT, exudates and edema are easier to observe; OCT can segment out the retinal layers, and the texture and thickness of these layers can help distinguish normal regions from regions corresponding to exudates [31, 36].
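The local-thresholding idea for drusen candidates can be sketched as follows: flag pixels that are substantially brighter than their local neighbourhood mean. This is a toy illustration of the family of methods cited above, not a reproduction of them; the image and the threshold of 30 intensity levels are assumptions.

```python
# Hedged sketch of drusen candidate detection by local thresholding:
# a pixel is a candidate when it exceeds its neighbourhood mean by a margin.
import numpy as np
from scipy.ndimage import uniform_filter

img = np.full((60, 60), 120.0)            # uniform background intensity
img[20:24, 20:24] = 200.0                 # bright drusen-like deposit (4x4)
img[40:43, 45:48] = 190.0                 # a second deposit (3x3)

local_mean = uniform_filter(img, size=15)  # mean over a 15x15 neighbourhood
candidates = img - local_mean > 30         # locally bright pixels
print(int(candidates.sum()))               # pixels flagged as drusen-like
```

Because the threshold is relative to the local mean, the method tolerates the slow illumination gradients that shade correction would otherwise have to remove.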

Brief discussion

From the above works, it was observed that although OCT imaging is increasingly prevalent, DFP is still the mainstream image modality for AMD detection and screening. It is an active research avenue. However with the progress of SD-OCT, OCT based AMD detection and screening is emerging as a new area of focus.

Pathological myopia (PM)

Causes and symptoms

As one of the leading causes of blindness worldwide, pathological myopia (PM) is a type of severe and progressive nearsightedness characterized by changes in the fundus of the eye due to posterior staphyloma and deficient corrected acuity. PM is different from myopia, which is caused by the lengthening of the eyeball: for myopia, both environmental and genetic factors have been associated with onset and progression [179], while PM is primarily a genetic condition [180]. Unlike myopia, PM is accompanied by degenerative changes in the retina which, if left untreated, can lead to irrecoverable vision loss. Accurate detection of PM will enable timely intervention and facilitate better disease management to slow down the progression of the disease.

Detection

PM has been detected mostly from DFPs, where retinal degeneration is observed in the form of PPA [181, 182]. PPA is the thinning of retinal layers around the optic nerve and is characterized by a pigmented ring-like structure around the optic disc. Apart from DFPs, there have been studies to detect PM from OCT images [183]; however, CAD systems for detecting PM from OCT images have not yet emerged.

Brief discussion

Ohno-Matsui et al. [47] analyzed the relationship between the shape of the sclera and myopic retinochoroidal lesions, and concluded that SS-OCT can provide important information on deformations of the sclera which are related to myopic fundus lesions. Such clinical discoveries provide strong evidence for SS-OCT as a good candidate for future PM-CAD development.

Other diseases

Other major diseases that may lead to blindness include cataract and corneal opacity. Cataract is characterized by cloudiness in the lens, while corneal opacity is cloudiness in the cornea. CAD research on cataract has been conducted for grading rather than detection, using slit lamp images [33]. Grading of cataract severity is essential for cataract surgical planning [184], and an automated grading system offers an objective and efficient solution. Grading is performed by locating the cloudiness and assessing its opacity level [33]. For corneal opacity, no automatic detection methods have been reported so far, to the best of our knowledge.

Discussion

Feature extraction plays an essential role in ocular image based CAD systems. From the survey, we observe two broad classes of features used in the ocular CAD systems. Approaches using each one of these are described below:

Approaches using clinical features

Many of the retinal image based CAD systems employ clinical domain knowledge during the feature selection and decision making processes. Such systems focus on identifying disease-associated landmarks from images, from which a number of clinically relevant features can be extracted. For example, the following image cues are highly related to glaucoma: a large optic CDR [185]; appearance of optic disc haemorrhage (DH) [186]; thinning or notching of the neuroretinal rim (NRR) [175]; and presence of PPA [174]. Such features based on clinical knowledge can be described as clinical features.

The early efforts in retinal image analysis were focused on optic disc localization. Lowell et al. [187] used specialized template matching to locate optic disc, followed by a global elliptical and local deformable contour model for disc segmentation. Xu et al. [132] presented a deformable-model-based algorithm for the detection of the optic disc boundary in fundus images. Later efforts were spent in optic cup detection. Abramoff et al. [133] analyzed stereo-based DFPs for rim and cup segmentation via pixel feature classification. Wong et al. [188] detected the optic cup using vessel kinking analysis. Joshi et al. [189] proposed a depth discontinuity (in the retinal surface)-based approach to estimate the cup boundary. Based on cup and disc detection, CDR can be obtained based on which CAD systems for automatic glaucoma detection were developed [32, 69, 70, 80]. Cheng et al. [73, 190] developed PPA detection algorithms for Pathological Myopia (PM) detection. Liang et al. [104] focused on detecting drusen presented in retina for automatic AMD detection. Other researchers worked on CAD systems for DR based on various vasculature segmentation algorithms, e.g., matched filters [66, 67], vessel tracking [68] or morphological processing [77, 78].

The advantages of using clinical features in CAD systems are obvious: the CAD results can be interpreted and presented with clinical meaning. Furthermore, prior knowledge allows the disease detection to be modeled with a small data set, which is critical when training data is insufficient.

However, the detection models built using clinical features have a number of limitations as mentioned below:

  •  The modeling process is localization or segmentation dependent. For example, [32, 69] detect glaucoma based on optic cup and disc segmentation, so a small error in disc localization may propagate downstream and finally yield a detection error.

  •  The systems are usually threshold-based or rule-based in the decision making stage and thus, by nature, do not produce a quantifiable measurement for disease detection.

  •  A model built upon prior knowledge may not evolve as more data becomes available.

  •  As different diseases may possess different landmark features, the system developed for one disease may not be adaptable for other diseases.

  •  Such systems usually need to learn from manually curated ground truth images, which is not only time consuming but also prone to human error.

  •  Finally and most importantly, detection of one particular disease associated landmark may be neither a necessary nor a sufficient condition for disease detection. For example, [71, 83] proposed to recognize PM based on PPA detection; however, having PPA may or may not imply having PM.

Detecting all the retinal changes in DFPs is much more difficult compared to detecting a particular landmark. Statistical learning based on image feature extraction can be a possible solution to address these challenges. The following section casts light on this possibility.

Approaches using non-clinical features

With an increasing availability of image databases and advances in statistical learning, new CAD systems are shifting to non-clinical features. Non-clinical image features relate to the content of the image such as color, texture and gradient.

Many image feature extraction techniques can be applied to retinal image based CAD systems. Bock et al. [81] used an appearance based approach to quantitatively generate a glaucoma risk index from retinal images. Cheng et al. [91] used Focal Biologically Inspired Feature (FBIF) for glaucoma type classification. Wang et al. [191] presented a DFP mosaic algorithm based on the Scale-Invariant Feature Transform (SIFT) feature [192] to overcome low contrast and geometric distortion between different fields of view of DFPs. Extracted SIFT features were described using vectors to determine the matching feature point pairs between two images. The transformation matrix was then computed from the purified matching points to generate a panoramic image with a wide field of view, containing more information that may benefit CAD systems. Xu et al. [181] presented a CAD system for PM detection based on SIFT features extracted from a DFP. The system achieved a high AUC value (98.4%) as compared to the earlier approaches to detect PM using particular image cues [83].

Another example is the use of superpixels [193, 194]. A superpixel is a perceptually consistent unit with all pixels in a group being similar in color and texture. It reduces the complexity of images from thousands of pixels to only a few hundred superpixels. Algorithms such as Simple Linear Iterative Clustering (SLIC) [195] have been developed to aggregate nearby pixels into superpixels whose boundaries closely match true image boundaries. Many features can be computed from superpixels such as shape, color, location and texture, and they can be used for classification via learning algorithms. Xu et al. [92] presented a superpixel based learning framework based on retinal structure priors for glaucoma detection. The use of superpixels leads to a more descriptive and effective representation than those employed by pixel-based techniques while at the same time yielding significant computational savings over methods based on sliding windows.
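The per-superpixel feature idea can be sketched as follows. To keep the example self-contained, a crude regular-grid partition stands in for SLIC [195] (whose implementations are available in common image libraries and adapt segment boundaries to image edges); the feature set (mean, standard deviation, centroid) is an illustrative assumption, not the feature set of any cited system:

```python
import numpy as np

def grid_superpixels(image, n_rows=10, n_cols=10):
    """Partition an image into a regular grid of 'superpixels'.
    Real systems would use SLIC; a fixed grid keeps this sketch simple."""
    h, w = image.shape[:2]
    labels = (np.arange(h)[:, None] * n_rows // h) * n_cols + \
             (np.arange(w)[None, :] * n_cols // w)
    return labels

def superpixel_features(image, labels):
    """Mean intensity, intensity std and centroid per superpixel --
    a small feature vector usable by any standard classifier."""
    feats = []
    for lab in np.unique(labels):
        mask = labels == lab
        ys, xs = np.nonzero(mask)
        vals = image[mask]
        feats.append([vals.mean(), vals.std(), ys.mean(), xs.mean()])
    return np.array(feats)

img = np.random.default_rng(0).random((100, 100))
labels = grid_superpixels(img)
X = superpixel_features(img, labels)
print(X.shape)   # (100, 4): 100 superpixels, 4 features each
```

The resulting matrix `X` is exactly the kind of compact representation, a few hundred rows instead of millions of pixels, that makes superpixel-based learning computationally attractive.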

Non-clinical features can be considered part of a data driven approach, which has shown many advantages over the approach using clinical features. Extraction of non-clinical features is followed by learning from labeled examples, so less manual ground truth labeling is needed compared to the approaches using clinical features. As these systems do not rely on particular image landmarks, they avoid the error cascading caused by initial segmentation or localization. Non-clinical features are generalized features, which makes it possible for the system to transfer knowledge learned from one disease to other diseases. Such feature extraction can facilitate learning algorithms such as multi-task learning [196, 197] and transfer learning [198]. Furthermore, since these techniques apply statistical evaluation, the performance of the systems is expected to improve as more data becomes available. The result of such systems can be a quantifiable score rather than a binary Yes or No, which is particularly useful in clinical assessment. The use of non-clinical features for CAD is a promising area for future CAD systems.

Result: predicting ocular diseases based on genetic information

Genetic information can be used to detect heritable disease related genotypes, mutations or phenotypes for clinical purposes [199]. Ocular diseases are highly heritable, thus genetic information can provide important insights into disease risk and prognosis.

Heritability of ocular diseases

Heritability is the proportion of phenotypic variation in a population that is attributable to genetic variation among individuals [200].

According to [201], heritability can be expressed in statistical terms via a linear mixed model, where the observable characteristics of an organism are represented as a linear function of genetic and environmental factors, namely Phenotype(P) = Genotype(G) + Environment(E). The heritability can then be represented as H² = G/P, where H² denotes the heritability due to all genetic effects. Since the beginning of the 20th century, heritability studies have been conducted on numerous diverse biological and psychological human traits. Among these, attempts have been made to estimate the genetic contribution to human longevity and lifespan [202, 203], and to a person's susceptibility to becoming a smoker [204, 205].
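The additive decomposition P = G + E can be illustrated with a small simulation, under the simplifying assumption that genetic and environmental effects are independent and additive (the variance values below are arbitrary, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Simulate a population where genetic variance is 0.7 and environmental
# variance is 0.3, so the true broad-sense heritability is
# H^2 = Var(G) / Var(P) = 0.7 / (0.7 + 0.3) = 0.7.
G = rng.normal(0.0, np.sqrt(0.7), n)   # genetic contribution
E = rng.normal(0.0, np.sqrt(0.3), n)   # environmental contribution
P = G + E                              # observed phenotype

H2 = G.var() / P.var()
print(round(H2, 2))   # close to 0.7
```

In practice G is of course not directly observable; twin and family studies, such as those summarized in Table 3, estimate H² indirectly from phenotypic correlations between relatives.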

In 1992, the first ophthalmic twin study was conducted to investigate the heredity of refractive error [206].

Since then, over 100 articles have been published in the scientific literature examining the genetic contribution to variation in ophthalmic traits. Table 3 summarizes the heritability of various ocular diseases or ocular related phenotypes as reported in the literature. It is observed that the heritability values reported in different studies vary, as the value is population dependent. The range of heritability values is shown in Figure 6, from which it is observed that Central Corneal Thickness is the most heritable trait, PM spans a wide range due to its population dependence, and cataract appears to be a less heritable disease.
Table 3 Heritability for ocular diseases or disease related traits

| Disease/Traits | Heritability value | Source |
|---|---|---|
| AMD | 0.7 | [207] |
| AMD | 0.75 | [208] |
| AMD | 0.71 | [209] |
| AMD | 0.46-0.71 | [210] |
| AMD | 0.45 | [211] |
| AMD (small hard drusen) | 0.63 | [212] |
| CCT | 0.95 | [213] |
| CCT | 0.72 | [214] |
| CDR | 0.48 | [215] |
| CDR | 0.66 | [214] |
| Corneal astigmatism | 0.6 | [216] |
| Corneal curvature | 0.71 | [216] |
| Cortical cataract | 0.24 | [217] |
| Cortical cataract | 0.58 | [218] |
| Glaucoma | 0.63 | [219] |
| Glaucoma | 0.7 | [220] |
| Glaucoma (shallow anterior chamber) | 0.92 | [221] |
| Hyperopia | 0.75 | [222] |
| Hyperopia | 0.86-0.89 | [218] |
| IOP | 0.47-0.51 | [223] |
| IOP | 0.3 | [224] |
| IOP | 0.36 | [215] |
| IOP | 0.56-0.64 | [225] |
| Noncongenital cataract | 0.15-0.32 | [226] |
| Nuclear cataract | 0.356 | [217] |
| Nuclear cataract | 0.48 | [227] |
| Ocular refraction | 0.89-0.94 | [228] |
| Pathological Myopia | 0.306 | [229] |
| Pathological Myopia | 0.8 | [230] |

Figure 6

Heritability for various ocular traits. The range of heritability values for different ocular traits. A higher heritability value means a higher chance of inheriting the trait.

Knowledgebases of genetic markers for ocular diseases

For the past 20 years, the biomedical research community has spent enormous effort in identifying genetic markers for heritable diseases, through classical linkage studies [231] or recent genome-wide association studies [232]. The discovered disease related biomarkers include genes, mutations and single-nucleotide polymorphisms (SNPs). Such valuable knowledge has been continuously accumulated in various biomedical databases, usually referred to as knowledgebases. This section introduces the knowledgebases most relevant to this study.

  •  OMIM - Online Mendelian Inheritance in Man

  •  OMIM is a continuously updated catalog of human genes and genetic disorders and traits, with particular focus on the molecular relationship between genetic variation and phenotypic expression [233]. It is thus considered to be a phenotypic companion to the Human Genome Project [234]. As of 8 May 2013, it contains more than 14,000 disease related gene entries.

  •  GWAS Catalogue - Catalogue of Published Genome-Wide Association Studies (GWAS)

  •  GWAS is an approach to rapidly scan markers across the complete genomes (DNA) of many people to find genetic variations associated with a particular disease [235]. The first GWAS, published in 2005 [236], was associated with an ocular disease: it investigated AMD and found two SNPs significantly associated with AMD. Since then, similar successes have been reported using GWAS to identify genetic variations that contribute to the risk of type 1 diabetes [237], Parkinson's disease [238], heart disorders [239], obesity [240] etc. The GWAS Catalogue (http://www.genome.gov/gwastudies/) is a collection of GWAS discovered SNPs, hosted by the NHGRI (National Human Genome Research Institute). SNP-trait associations listed in the GWAS Catalogue are limited to those with p-values < 1.0 × 10⁻⁵. As of 8 May 2013, the catalog includes 1594 human GWA studies which examined over 200 diseases and identified more than 10,000 disease associated SNPs.

Ocular disease related SNPs

Figure 7 shows the ocular disease related SNPs found in the OMIM and GWAS Catalogue knowledgebases. There are potentially many uses of these identified SNPs: a better understanding of disease etiology, personalized medicine, new leads for studying underlying biology and risk prediction. From a risk prediction perspective, it is reasonable to average a larger number of predictors, of which some may have (limited) predictive power and some may actually be noise. The idea is that, when added together, the combined small signals result in a signal that is stronger than the noise from the unrelated predictors [241].
Figure 7

Ocular disease related SNPs found in OMIM and GWAS Catalogue. (query made on May 8th, 2013).
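The intuition that many individually weak SNP predictors can combine into a usable risk signal [241] can be shown with a toy simulation. The effect sizes and cohort below are entirely hypothetical, not drawn from any of the cited studies:

```python
import numpy as np

rng = np.random.default_rng(7)
n_subjects, n_snps = 2000, 100

# Each SNP is coded 0/1/2 (minor allele count); every SNP carries a
# tiny effect on the disease liability, which is dominated by noise.
snps = rng.integers(0, 3, size=(n_subjects, n_snps)).astype(float)
effects = np.full(n_snps, 0.05)                # assumed tiny effects
liability = snps @ effects + rng.normal(0, 1, n_subjects)

# Combined risk score vs. the single best-correlated SNP.
score = snps @ effects
combined_corr = np.corrcoef(score, liability)[0, 1]
single_corrs = [abs(np.corrcoef(snps[:, j], liability)[0, 1])
                for j in range(n_snps)]
print(combined_corr > max(single_corrs))   # True: aggregation wins
```

No single SNP predicts the liability well, yet their sum does, which is the statistical rationale behind aggregating many nominally weak GWAS hits for risk prediction.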

Discovering novel disease related SNPs from large-scale genome-wide association studies

Computational methods for SNP-trait association studies [242, 243] have been developed. Such methods treat SNPs as individual players in one's genetic profile. Following these methods, efforts [244–246] have been extended to investigate SNPs which individually have little effect on disease risk but influence it jointly, a phenomenon known as epistatic interaction, where the effects of one gene are believed to be modified by one or several other genes. The single-locus and epistasis based SNP detection algorithms test individual SNPs or pairs of SNPs without taking into consideration the underlying biological intertwining mechanisms, whereas the real gene-gene interactions participating in biological pathways often involve groups of an arbitrary number of SNPs. To date, exhaustively detecting significant SNP groups of arbitrary size remains computationally infeasible [245].

Recently, machine learning, especially sparse learning algorithms, has been introduced for GWAS data analysis. This is intended to tackle the challenge of identifying a group of N potent but intricately correlated SNPs, some of which may not pass the stringent threshold by themselves. Penalized regression based on the Least Absolute Shrinkage and Selection Operator (LASSO) [247] has recently been explored for GWAS analysis. Some researchers [248, 249] have proposed 2-step approaches for genome-wide association analysis, shortlisting a group of marginal predictors using penalized likelihood maximization for further higher order interaction detection. Hoggart et al. [250] proposed a method to simultaneously analyze all SNPs in genome-wide and re-sequencing association studies. D'Angelo [251] combined LASSO and principal-components analysis for the detection of gene-gene interactions in genome-wide association studies. These approaches are not global due to the 2-stage process, and none of them has considered incorporating prior knowledge into the model building. Prior knowledge can be combined into GWAS to improve the power of an association study [252]; it can also model dependencies and moderate the curse of dimensionality.
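A minimal LASSO sketch follows: plain cyclic coordinate descent on a standardized genotype matrix, applied to simulated data in which only three "causal SNPs" carry signal. Real GWAS analyses would use the optimized solvers and extensions cited above; the causal columns and effect sizes here are illustrative assumptions.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """LASSO via cyclic coordinate descent.  Assumes the columns of X
    are standardized (zero mean, unit variance), so each coordinate
    update reduces to soft-thresholding the partial correlation."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding feature j.
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r / n
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0)
    return beta

rng = np.random.default_rng(1)
n, p = 300, 50
X = rng.normal(size=(n, p))
X = (X - X.mean(0)) / X.std(0)
# Only three hypothetical causal SNPs (columns 3, 17, 40) affect the trait.
y = X[:, 3] + X[:, 17] + X[:, 40] + rng.normal(0, 0.5, n)

beta = lasso_cd(X, y, lam=0.1)
selected = set(np.flatnonzero(np.abs(beta) > 0.2))
print(selected >= {3, 17, 40})   # True: causal columns are recovered
```

The soft-thresholding step is what zeroes out weakly associated SNPs, yielding the sparse selection that makes LASSO attractive when p (SNP count) far exceeds n (subject count).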

Discussion

From the above survey, two major observations can be made. First, there is a trend of transition from semi-automatic to automatic knowledge acquisition for CAD. Second, there is a trend towards the integration of heterogeneous data sources. These two trends are discussed in the following subsections.

The trend of semi-automatic to automatic knowledge acquisition

In the 1970s and 80s, research was focused on constructing knowledge-bases from the inputs of physicians [253, 254] for CAD tools. Building such systems required a lot of human intervention, e.g. experts' inputs, and can be considered a 'semi-automatic' way of knowledge acquisition. Over the years, the alternative approach of automatic knowledge acquisition, without inputs from clinicians or experts, has become more popular [255, 256]. One such way of knowledge acquisition is to capture patterns in data using non-clinical features (Section "Approaches using non-clinical features"). This approach offers several advantages:

  •  Knowledge-bases derived from datasets are more precise in comparison with knowledge-bases constructed from expert inputs, as the inputs provided by human experts may be vague, due to limited grades of perception [257]. An increased precision of CAD systems will make them more reliable for a mass screening application.

  •  Knowledge-bases constructed using the automated approach capture empirical evidence in the data. This approach aligns with the trend of evidence-based decision making, which emphasizes the use of empirical evidence to make clinical decisions [258].

  •  Medical datasets embed local epidemiological patterns. Hence the derived knowledge-bases can result in more accurate CAD tools, as disease and symptom patterns vary from one region to another [259]. A system learnt using data obtained from a particular region can be expected to be more precise in performing mass screening in that same region. The physician experts on the other hand may not be aware of local trends, especially when they do not have sufficient experience of clinical practice in a particular locality.

The trend of integration of heterogeneous data sources

One of the reasons why CAD tools may have sub-optimal accuracy is that the training data may itself lack the attributes required for decision making [260]. Combining decision support methodologies that process information stored in different data formats has been shown to improve performance [261]. Apart from laboratory information, attributes extracted from gene profiling data, visual cues from medical images, as well as other sources could be combined and may lead to more satisfactory accuracy.

The advances in technologies related to medical signal acquisition, medical imaging and genotyping have resulted in an increased volume and complexity of collected biomedical data. This makes it difficult for physicians to parse through the information while providing timely diagnoses and prognoses. Due to its complexity, analysis of such data has largely been limited to bioinformatics applications [262]. There is a significant need for the development and improvement of computer-aided detection and decision support systems in medicine, a need which is expected to grow in the future.

In the era of information explosion, data from multiple sources are becoming increasingly available. Retinal fundus cameras can be found in numerous primary community healthcare institutions as well as optical shops. With the dramatic reduction in genotyping costs in recent years, it is foreseeable that SNP data can be acquired at low cost and with as much ease as demographic clinical data in the near future. Health screening outreach programs have given individuals access to clinical data that was previously hard to access.

Each of these heterogeneous data sources (image features, personal profile data, SNP data) is likely to contain a different perspective on the disease risk of an individual, based on the pathological, environmental and genetic mechanisms of the disease. These perspectives may potentially be complementary and a combination of the data from these independent sources can provide a more comprehensive and holistic assessment of the disease.

Integration of different data sources in CAD systems can also help in early detection since some of the early symptoms of the disease may appear in one data source but not the other. Consequently, using just one single source or type of data may be limiting for early detection.

There is no previous work attempting to combine these three types of data for automatic disease detection except [20], mentioned in Section "Result: CAD of ocular diseases based on clinical data". A possible reason is that such data has become available on a large scale only recently. Also, researchers working on these heterogeneous data sets usually come from different domains with different foci; e.g. computer vision and image understanding researchers focus on DFP analysis, while bioinformaticians are interested in discovering disease associated SNPs or SNP groups. Effectively combining these data can maximize the information gain and pave the way for a holistic approach to automatic and objective disease detection and screening.

Converse to the integration of multiple data sources, there is the possibility of using the same image to detect multiple diseases, since many ocular diseases may share common symptoms. Along this line, there are already machine learning algorithms, such as multi-task learning, which aim to solve similar problems. However, to the best of our knowledge, there is currently no work in this direction.

Conclusion

CAD for ocular diseases, which can automate the detection process, has attracted extensive attention from clinicians and researchers. Such systems not only alleviate the increasing burden on clinicians by providing automatic and objective diagnosis with valuable insights, but also offer early detection and easy access for patients. In this article, we have reviewed in detail the recent progress of methods used in CAD of ocular diseases in the available literature. We investigated three types of data (i.e., clinical, genetic and imaging) that have been commonly used in existing CAD methods. A number of major ocular diseases including DR, Glaucoma, AMD and PM were introduced along with the existing methods that have been proposed to detect them. The necessity of turning semi-automatic acquisition of domain knowledge into fully automatic acquisition (which does not require inputs from operators) was examined. The advantages of integrating heterogeneous data sources for ocular disease detection were highlighted. We believe that these two trends are of great importance and deserve further study.

Appendix

A Image databases

This section briefly describes the commonly used databases for each disease. The name of the associated disease is mentioned in brackets after the name of the database.

  •  ORIGA-light (Glaucoma): The ORIGA-light [263] database contains 650 annotated DFPs, including 168 glaucomatous images and 482 randomly selected non-glaucoma images. Each image is tagged with grading information and manually segmented results of the optic disc and cup.

  •  Erlangen Glaucoma Registry (Glaucoma): The Erlangen Glaucoma Registry [264] includes 861 eyes of 454 Caucasian subjects (239 normal eyes of 121 subjects, 250 ocular hypertensive eyes of 118 patients, 372 eyes of 215 patients with chronic open-angle glaucoma).

  •  The Singapore Malay eye study (SiMES) (Glaucoma): SiMES [24] is a population-based study conducted from 2004 to 2007 to assess the causes and risk factors of blindness and visual impairment in the Singapore Malay community; it examined a cross-sectional, age stratified, random sample of 3280 Malays (78.7% participation rate) aged 40 to 80 years living in Singapore. The study was approved by the institutional review board of the Singapore Eye Research Institute. The database contains complete or partial personal data, DFP data and genome information for each subject. The personal data in SiMES includes demographic data such as age, gender and height, ocular examination data such as IOP and corneal thickness, as well as historical medical data.

  •  The Singapore Indian Eye Study (SINDI) (Glaucoma): The SINDI [25] is a population-based, cross-sectional study, which was conducted on 3400 Indians aged 40 to 83 years residing in Singapore. Ocular components including axial length (AL), anterior chamber depth (ACD), and corneal radius (CR) were measured by partial coherence interferometry. Refraction was recorded in spherical equivalent (SE). After 502 individuals with previous cataract surgery were excluded, ocular biometric data on 2785 adults were analyzed.

  •  The Singapore Chinese Eye Study (SCES) (Glaucoma): The aims of SCES [26] are to identify the determinants of Anterior Chamber Depth (ACD) and to ascertain the relative importance of these determinants in Chinese persons in Singapore. 1060 Chinese participants were recruited from the Singapore Chinese Eye Study. All subjects underwent AS optical coherence tomography (OCT; Carl Zeiss Meditec, Dublin, CA). Customized software (Zhongshan Angle Assessment Program, Guangzhou, China) was used to measure the AS-OCT parameters. Anterior chamber depth was determined using IOLMaster (Carl Zeiss Meditec). Univariate and multivariate regression analysis were performed to assess the association between ACD with ocular biometric and systemic parameters.

  •  High-Resolution Fundus (HRF) Image Database (Glaucoma): The HRF [265] database has been established by Friedrich-Alexander University Erlangen-Nuremberg (Germany) and the Brno University of Technology (Czech Republic). It contains 15 images of healthy patients, 15 images of patients with DR and 15 images of glaucomatous patients. Binary gold standard vessel segmentation images are available for each image. Masks determining the field of view (FOV) are provided for particular datasets. The gold standard data was generated by a group of experts working in the field of retinal image analysis and clinicians from the cooperating ophthalmology clinics.

  •  The Rotterdam Study (Glaucoma): The Rotterdam Study [266] is a prospective population-based cohort study investigating age-related disorders. The study started in 1990 and is still ongoing. The original cohort was comprised of 7983 participants 55 years or older; ancillary studies were added later on, and in total 14,926 participants have been enrolled. In 2007, OCT scanning of the macular and ONH regions was added to the armamentarium. To determine which regions of the OCT volumes could be segmented in what fraction of subjects, the macular and ONH of 925 consecutive subjects was imaged with the Topcon 3-D OCT-1000 (Topcon, Tokyo, Japan).

  •  DIARETDB0 and DIARETDB1 (DR): These two databases [267, 268] of DFPs contain a wide variety of DR related lesions such as Hemorrhages (H), Microaneurysms, Hard Exudates (HE), Cotton Wool Spots (CWS) or Soft Exudates and Neovascularization. There are 219 images in total, 25 of them completely normal. The Field of View (FOV) is 50 degrees and the image resolution is 1500×1152 pixels. The ground truth is in the form of locations and sizes of the lesions. The major difference between the two databases is that DIARETDB0 has calibration level 0 DFPs, meaning that the images were taken with different fundus cameras with unknown camera settings, whereas DIARETDB1 has calibration level 1 DFPs in the sense that the images were taken with the same fundus camera. DIARETDB0 is therefore expected to have more variation in visual appearance across images than DIARETDB1.

  •  ROC (DR): ROC stands for Retinopathy Online Challenge [269] which is a competition aiming to compare the accuracies of MA detectors on a benchmark database. The database consists of 50 training and 50 testing images. The ground truth consists of the positions of the centers of MAs and irrelevant lesions. Ground truth for the training images is released while that for the test images is kept with the organizers. Participants can submit their detection results through the challenge website and the organizers compute a performance score for the detections.

  •  Messidor (DR): The Messidor database [270] consists of 1200 DFPs containing MAs, Neovascularization and Hemorrhages. The images were acquired using a color video 3CCD camera on a Topcon TRC NW6 non-mydriatic retinograph with a 45 degree FOV. The images are of resolution 1440×960, 2240×1488 or 2304×1536 pixels. The ground truth is in the form of a Retinopathy grade from 0 (normal) to 3 (most severe). Similarly, risk of macular edema is marked on a scale from 0 (no risk) to 2 (high risk).

  •  STARE (DR, AMD): STARE (STructured Analysis of the REtina) is a dataset containing images of multiple diseases. It contains 397 DFPs in total, and the ground truth is in the form of severity grades for the disease. The images are of resolution 700×605. Of all the images, 62 were labeled as containing drusen: 20 graded as 'large many', 13 as 'large few', 10 as 'fine many', and 19 as 'fine few'. To the best of our knowledge, it is the first dataset containing drusen labeling. STARE also contains DR related lesions; 91 images are labeled as being affected by DR [75]. It also contains manual labeling of the vessels for part of the images.

  •  ARIA (DR, AMD): ARIA was published by the St Paul's Eye Unit of the Royal Liverpool University Hospital Trust in the UK. It contains 212 images in total, including 92 with AMD, 61 normal, and 59 with DR.

  •  AREDS (AMD): The Age-Related Eye Disease Study (AREDS) enrolled 4,757 participants aged 55-80 years. Among them, 3640 participants had at least early AMD and the other 1117 did not [271].

  •  Thalia-D (AMD): Thalia is a dataset constructed by the iMED group at I²R (Institute for Infocomm Research, Singapore). It consists of 350 images, with 96 labeled as early AMD (drusen) and the others as non-AMD (no drusen). The image resolution is 3072×2048 and the ground truth is in the form of marked drusen boundaries [272].

  •  EUGENDA (AMD): The Euregio genetic database (EUGENDA) is an ongoing project currently targeting AMD. It now contains more than 4000 images, with more than 191 containing drusen (http://www.eugenda.org/).

  •  CAPT (AMD): Complications of Age-Related Macular Degeneration Prevention Trial (CAPT) is a randomized clinical trial to evaluate whether prophylactic laser treatment to the retina can prevent the complications of the advanced stage of AMD. In total, 1052 patients with two high-risk eyes were enrolled. The images collected by CAPT can be used as dataset for automatic AMD detection [273].

Note that for Pathological Myopia, to the best of our knowledge, there have not been many studies on image based CAD. However, there have been studies on the prevalence rate of PM [274–277] which used large volumes of DFPs.

B Details on methods for disease detection

Diabetic retinopathy

DFP for detecting DR

Detection of DR using DFPs typically involves four steps: 1) preprocessing to enhance lesions; 2) segmentation of candidate lesions; 3) feature extraction from candidate lesions; and 4) classification of candidates into lesions and non-lesions based on the extracted features. The green channel of the DFP is preferred for analysis since the retina has good contrast in this channel [98]. Of these steps, the segmentation methods specific to DR are discussed below.

Segmentation is usually based on morphological operations [114, 167]. Lay and Baudoin [114] were among the first to propose automatic segmentation of MAs. They performed morphological opening of images using linear structuring elements of different orientations and subtracted the resulting image from the original one, though it is hard to choose an optimal size for the structuring element [97].
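The orientation-dependent opening idea can be sketched as follows. This is a simplified two-orientation version on a synthetic inverted green channel (where microaneurysms and vessels both appear bright); the original approach rotates the structuring element much more finely, and the structuring element length is an assumed parameter:

```python
import numpy as np
from scipy.ndimage import grey_opening

def ma_candidates(inv_green, se_length=11):
    """Top-hat with linear structuring elements: elongated vessels
    survive opening in at least one orientation, while small round
    microaneurysm candidates survive in none, so only the candidates
    remain in the residue."""
    open_h = grey_opening(inv_green, footprint=np.ones((1, se_length)))
    open_v = grey_opening(inv_green, footprint=np.ones((se_length, 1)))
    background = np.maximum(open_h, open_v)
    return inv_green - background

# Synthetic inverted green channel: one vessel, one MA-like dot.
img = np.zeros((50, 50))
img[25, 5:45] = 1.0        # horizontal vessel
img[9:12, 9:12] = 1.0      # 3x3 microaneurysm-like dot

residue = ma_candidates(img)
print(residue[10, 10], residue[25, 25])   # dot kept, vessel suppressed
```

Because the dot is narrower than the structuring element in every orientation, it is erased by all openings and therefore survives in the residue, while the vessel is reconstructed by the opening aligned with it; this is exactly the size/shape selectivity that makes the structuring element length hard to tune [97].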

Apart from morphological approaches, researchers have used Gabor filters [130], Gaussian correlation filters [131], curvelet transforms [105], wavelet transforms [278], local image properties [279, 280], or just the intensity values in the green channel [97, 137] for segmenting out candidate lesions.

Some of the works detected both bright and red lesions [106, 137, 149, 153, 154] while Abramoff et al. [155] and Agurto et al. [146] have also detected neovascularization in addition to the lesions. Individual detections were then fused in these works to predict the severity of DR.

The effectiveness of CAD systems for mass screening of DR can be assessed by their accuracy, which depends on the kind of data used for training and testing. The Retinopathy Online Challenge (ROC) aims at evaluating the accuracy of MA detectors on a benchmark database. The final score of a method is computed by averaging the sensitivities at seven false positive rates (1/8, 1/4, 1/2, 1, 2, 4, and 8 false positives per image). The state of the art score on the ROC database is 0.434, achieved by [79].
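The averaging of sensitivities over the seven false-positive-per-image operating points can be computed from a ranked detection list as in the schematic sketch below; the official ROC evaluation additionally performs per-lesion spatial matching, which is omitted here, and the toy detection list is invented for illustration:

```python
import numpy as np

def roc_challenge_score(scores, is_lesion, n_images, n_lesions,
                        fp_rates=(1/8, 1/4, 1/2, 1, 2, 4, 8)):
    """Average sensitivity over the given false-positives-per-image
    rates.  `scores` are detection confidences; `is_lesion` marks
    which detections hit a true microaneurysm."""
    order = np.argsort(scores)[::-1]          # highest confidence first
    hits = np.asarray(is_lesion)[order]
    tp = np.cumsum(hits)                      # true positives so far
    fp = np.cumsum(~hits)                     # false positives so far
    sens = []
    for rate in fp_rates:
        allowed = rate * n_images             # FP budget at this rate
        ok = fp <= allowed
        sens.append(tp[ok].max() / n_lesions if ok.any() else 0.0)
    return float(np.mean(sens))

# Toy example: 8 images, 4 true lesions, 6 ranked detections.
det_scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4]
det_labels = [True, False, True, False, True, False]
score = roc_challenge_score(det_scores, det_labels,
                            n_images=8, n_lesions=4)
print(score)
```

At the strictest operating point (1/8 FP per image, i.e. one false positive across the eight images) only the earliest detections count, so the averaged score penalizes detectors that rank false positives highly.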

OCT imaging for detecting DR

Apart from DFPs, OCT images can also be used for DR detection. An OCT image captures different layers of the retina and can reveal cystoid fluid. Wilkins et al. [37] proposed to detect Cystoid Macular Edema (CME), which is one of the symptoms of DR. They presented a method for segmenting retinal cysts without going further to DR detection. A drawback of OCT images is that they are prone to noise during capture, and a poor Signal to Noise Ratio (SNR) can affect segmentation accuracy [37].

Glaucoma

For glaucoma assessment, there exist mainly four imaging modalities which provide quantitative parameters of the ONH in glaucoma: 1) Digital Fundus Photograph (DFP); 2) OCT; 3) Confocal Scanning Laser Ophthalmoscopy (CSLO); and 4) Scanning Laser Polarimetry (SLP).

DFP for detecting glaucoma

Digital Fundus Photography (DFP) is one of the main modalities for diagnosing glaucoma. Since DFPs can be acquired in a noninvasive manner, DFP has emerged as a preferred modality for large-scale glaucoma screening. In a glaucoma screening program, an automated system decides whether or not any signs suspicious of glaucoma are present in an image. Only those images deemed suspect by the system are passed to ophthalmologists for further examination.

Glaucoma detection based on DFP can be categorized into three main strategies: 1) detection without disc parametrization, 2) detection with disc parametrization using stereo DFP, and 3) detection with disc parametrization with monocular DFP.

For detecting glaucoma without disc parametrization, a set of features is computed at the image level without performing OD and cup segmentation on the DFP. Two-class classification is then employed to classify a given image as normal or glaucomatous. Bock et al. [90] presented an automated glaucoma detection system in which different generic feature types were compressed by an appearance-based dimension reduction technique; a probabilistic two-stage classification scheme combined these feature types into a novel Glaucoma Risk Index (GRI). Several other papers [81, 82, 99-101, 108] have also adopted this strategy for glaucoma detection.
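As a minimal sketch of this segmentation-free strategy (not Bock et al.'s actual pipeline), the example below computes toy image-level features and applies a nearest-centroid two-class decision; the features, images and centroid values are all hypothetical:

```python
def image_features(img):
    """Toy image-level features (mean, variance) computed without any
    disc/cup segmentation -- simple stand-ins for the generic
    appearance features used in segmentation-free approaches."""
    pixels = [p for row in img for p in row]
    mean = sum(pixels) / len(pixels)
    var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    return (mean, var)

def nearest_centroid(feat, centroids):
    """Two-class decision: return the label of the closest centroid."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(feat, centroids[label]))

# Hypothetical class centroids learned from a labelled training set.
centroids = {"normal": (100.0, 30.0), "glaucoma": (140.0, 35.0)}
img = [[138, 142], [145, 130]]  # tiny synthetic "fundus image"
print(nearest_centroid(image_features(img), centroids))  # prints glaucoma
```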

For the other two strategies, which detect glaucoma with disc parametrization, the OD and cup regions are segmented to estimate the relevant disc parameters. The strategy based on monocular DFP uses the 2-D projection of retinal structures to compute the areas of the OD and cup. As shown in Figure 4, in a monocular DFP the OD appears as a bright circular or elliptical region partially occluded by blood vessels; retinal nerve fibres converge to the OD, forming the cup-shaped region known as the cup. After segmenting the OD and cup [92, 109, 116], the vertical CDR is estimated to detect glaucoma [80, 117, 118, 125, 126]. In a recent work [117], Cheng et al. introduced optic disc and optic cup segmentation using superpixel classification for glaucoma screening. For optic disc segmentation, histograms and centre-surround statistics were used to classify each superpixel as disc or non-disc; for optic cup segmentation, location information was additionally included in the feature space to boost performance.
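Once the OD and cup are segmented, the vertical CDR reduces to a ratio of vertical extents. A minimal sketch on toy binary masks (the masks here are purely illustrative):

```python
def vertical_cdr(disc_mask, cup_mask):
    """Vertical cup-to-disc ratio from binary segmentation masks
    (lists of rows of 0/1): the ratio of the vertical extents of
    the cup and disc regions."""
    def vertical_extent(mask):
        rows = [i for i, row in enumerate(mask) if any(row)]
        return rows[-1] - rows[0] + 1
    return vertical_extent(cup_mask) / vertical_extent(disc_mask)

disc = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [1, 1, 1, 1],
        [1, 1, 1, 1],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
cup  = [[0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
print(vertical_cdr(disc, cup))  # prints 0.5
```

A larger CDR indicates more cupping of the optic nerve head and hence greater glaucoma suspicion.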

Different from a monocular DFP, a stereo pair of DFPs contains partial depth information, which can be used to characterize the regions inside the OD such as the cup and the neuroretinal rim. A considerable body of work based on stereo DFP has been carried out to detect glaucoma [110, 132-134, 140, 141]. For example, Abramoff et al. [133] proposed an automated method to segment the optic disc cup and rim from stereo color photographs using pixel feature classification; in their system, a depth map and the outputs of a Gaussian steerable filter bank served as features for training the classifier.

OCT Imaging for detecting glaucoma
OCT is relatively new in ophthalmic care compared to fundus photography, and image analysis techniques based on OCT images have a correspondingly shorter history. Nevertheless, OCT is a rapidly growing and important modality for glaucoma detection. In the assessment of glaucoma, the optic disc is an important structure: while stereo fundus photography can extract some 3-D shape information of the optic nerve head, OCT provides true 3-D information. Figure 8 shows three spectral-domain OCT images in glaucoma [44]. There are two main strategies for segmenting the disc/cup of the optic nerve head (ONH) from OCT images for glaucoma detection [12]: 1) a pixel classification approach applied to depth-columns of OCT voxels, in which the reference standard is defined by manual planimetry from stereo fundus photographs, and 2) direct segmentation of structures (the neural canal opening and cup) from 3-D OCT images using a graph-theoretic approach.
Figure 8

Cross-sectional images of the spectral-domain OCT volume in glaucoma. (a) X-Y image of the OCT volume. (b) X-Z image of the OCT volume corresponding to the horizontal line in (a). (c) Y-Z image of the OCT volume corresponding to the vertical line in (a).

For the first strategy of segmenting the ONH, a series of studies [44-46] has been performed. Lee et al. [45] developed a method to segment the optic disc cup and neuroretinal rim in spectral-domain OCT scans centered on the optic nerve head. Their system first segmented three intraretinal surfaces using a fast multiscale 3-D graph search method. The retina of the OCT volume was then flattened, based on one of the segmented surfaces, to obtain a consistent shape across scans and patients. Finally, selected features derived from OCT voxel intensities and the intraretinal surfaces were used to train a k-NN classifier, which determined which A-scans in the OCT volume belong to the background, the optic disc cup or the neuroretinal rim. As a further study, [44] presented a fast, fully automatic method to segment the optic disc cup and rim in 3-D SD-OCT volumes, in which automated planimetry was performed directly on close-to-isotropic SD-OCT scans. In their scheme, four intraretinal surfaces were segmented using the fast multiscale 3-D graph search algorithm, and the retina in each 3-D OCT scan was flattened to ensure a consistent optic nerve head shape. A set of 15 features derived from the segmented intraretinal surfaces and the voxel intensities in the SD-OCT volume was selected to train the classifier. Finally, prior knowledge about the shapes of the cup and rim was incorporated into the system through a convex hull-based method.
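The per-A-scan classification step can be sketched as a plain k-NN vote. In this illustrative sketch, 2-D toy features stand in for the 15 features used in the actual methods, and all feature values and labels are hypothetical:

```python
def knn_label(query, training, k=3):
    """Classify one A-scan feature vector by majority vote among its
    k nearest labelled training vectors (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(training, key=lambda tv: dist(query, tv[0]))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

# Toy 2-D features per A-scan (e.g., mean intensity, surface depth).
training = [((0.2, 0.1), "background"), ((0.3, 0.2), "background"),
            ((0.8, 0.9), "cup"), ((0.7, 0.8), "cup"),
            ((0.5, 0.5), "rim"), ((0.55, 0.45), "rim")]
print(knn_label((0.75, 0.85), training))  # prints cup
```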

For the second strategy of segmenting the ONH, a variety of studies [38-40, 49] directly segmented the neural canal opening and cup from 3-D OCT images. Hu et al. [38] introduced a scheme for segmenting the optic disc margin of the ONH in SD-OCT images using a graph-theoretic approach. They used a small number of slices surrounding the Bruch’s Membrane Opening (BMO) plane to create planar 2-D projection images. In addition, since large vessels are present in the images, information from the segmented vessels was used to suppress the influence of the vasculature by modifying the polar cost function. To investigate the correspondence and discrepancy between Neural Canal Opening (NCO)-based metrics and the clinical disc margin, Hu et al. [40] proposed an automated approach for segmenting the NCO and cup at the level of the Retinal Pigment Epithelium (RPE)/Bruch’s Membrane (BM) complex in SD-OCT volumes.

CSLO Imaging for detecting glaucoma
CSLO utilizes a diode-laser light source to produce quantitative measurements of the ONH and posterior segment. A commercially available CSLO device is the Heidelberg Retina Tomograph (HRT; Heidelberg Engineering, Heidelberg, Germany), which is capable of detecting the structural alterations in glaucoma. An example of an HRT image is shown in Figure 9(b) [90].
Figure 9

Example images of the central retina. The optic nerve head (ONH) centred fundus photograph (a) is used for automated glaucoma detection by the proposed glaucoma risk index, while the glaucoma probability score utilizes HRT 2.5-dimensional topography images (b). Images taken from [90].

Numerous studies [159-162] have reported that HRT measurements are highly reproducible. In [161, 162], after the optic disc border was outlined manually, the system generated geometric parameters such as cup volume, cup depth, cup shape measure and retinal height variations along the rim contour; discriminant analysis (Moorfields Regression Analysis (MRA)) was then applied to combine these geometric parameters. Since the resulting quantitative parameters are not fully objective, owing to the manual outlining of the OD border, Burgansky-Eliash et al. [159] used the parameters of a non-linear shape model of the topographic ONH for glaucoma classification, which overcame the subjectivity of contour-based methods. In [160], the progression of glaucomatous degeneration over years was quantified: the authors used HRT Topographic Change Analysis (TCA) to automatically locate and quantify temporal glaucomatous structural changes of the ONH.

SLP Imaging for detecting glaucoma

SLP is another available imaging modality for the detection of glaucoma. Alongside the structural changes of the ONH, the degeneration of the nerve fibres manifests as a thinning of the retinal nerve fibre layer (RNFL) in the course of glaucoma. SLP can measure the thickness of the RNFL for glaucoma detection: the retina is illuminated by polarized light, and the RNFL thickness can be determined directly from the polarization change of the reflected light [59].

SLP is commercialized as the GDx VCC (Carl Zeiss Meditec, Inc., Dublin, CA), which includes both the scanner and a software program that assists in the acquisition procedure, analyzes the scan, derives various parameters and translates them into an overall score, the Nerve Fiber Indicator, which can be considered a soft classification of glaucoma likelihood. Images generated by the GDx VCC are shown in Figure 10 [281]. Many glaucomatous progression detection strategies can be formulated for SLP data; based on repeated GDx VCC SLP measurements, Vermeer et al. [61] tested several strategies to identify the optimal one for clinical use. Medeiros et al. [62] presented a scheme for differentiating between glaucomatous and control cases, which extracted global and sectoral geometric parameters, such as the average and minimum thickness, from the RNFL thickness map.
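Extracting such global geometric parameters from an RNFL thickness map is straightforward. A sketch with a hypothetical 3x3 thickness map (values in micrometres, for illustration only):

```python
def rnfl_parameters(thickness_map):
    """Global geometric parameters (average and minimum thickness)
    from an RNFL thickness map given as rows x columns of thickness
    values in micrometres."""
    values = [t for row in thickness_map for t in row]
    return {"average": sum(values) / len(values), "minimum": min(values)}

thickness = [[95, 102, 88],
             [110, 99, 76],
             [84, 91, 105]]
print(rnfl_parameters(thickness))  # average ~94.4 um, minimum 76 um
```

Sectoral variants of these parameters would simply restrict the computation to pixels within each angular sector of the peripapillary scan.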
Figure 10

Images generated by the GDx VCC. (a) The reflectance image, which is displayed as a colored intensity map (greater reflectance corresponds to a lighter color). (b) The retardation map converted to RNFL thickness. The RNFL thickness is color-coded based on the color spectrum with thinner regions displayed in blue and green and thicker regions displayed in yellow and red [281].

Age-related macular degeneration

DFP for detecting AMD

The existing automatic AMD detection methods focus mainly on detecting drusen, the symptom of early AMD. Several methods go a step further to grade AMD.

In DFPs, drusen appear as small bright spots of particular size and orientation, as shown in Figure 11(b). Because image intensity and color vary with imaging conditions, finding local maxima is more effective than global thresholding. Local maxima have been found through a geodesic method [63], Histogram based Adaptive Local Thresholding (HALT) [65], and an Otsu-based adaptive threshold [74]. After maxima detection, the candidates are further classified according to contrast, size and shape.
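A minimal sketch of the local-maxima idea (not the HALT algorithm itself): flag pixels that exceed their local neighbourhood mean by an offset, which tolerates the illumination gradients that defeat a single global threshold. The image and parameters below are hypothetical:

```python
def local_threshold(img, win=1, offset=10):
    """Flag pixels brighter than their local neighbourhood mean by
    more than `offset`, yielding bright-spot (drusen candidate)
    detections that are robust to smooth illumination changes."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            neigh = [img[y][x]
                     for y in range(max(0, i - win), min(h, i + win + 1))
                     for x in range(max(0, j - win), min(w, j + win + 1))]
            if img[i][j] > sum(neigh) / len(neigh) + offset:
                out[i][j] = 1
    return out

# One bright spot on a background with an illumination gradient.
img = [[10, 12, 14, 16],
       [12, 60, 16, 18],
       [14, 16, 18, 20],
       [16, 18, 20, 22]]
cand = local_threshold(img)
print(cand)  # only the bright pixel at row 1, column 1 is flagged
```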
Figure 11

The symptoms of AMD seen in DFP. (a) DFP of a healthy eye. (b) DFP of an eye affected with dry AMD, with drusen present. (c) DFP of an eye affected with wet AMD; the presence of exudates can be seen.

Apart from the spatial domain, the frequency domain has also been used for drusen detection. For example, multi-scale, multi-orientation wavelets have been used to detect drusen in a hierarchical framework [63] or through Support Vector Data Description (SVDD), a technique derived from the support vector machine [76]. Furthermore, amplitude-modulation frequency-modulation (AM-FM) analysis was shown to generate multi-scale features for classifying pathological structures, such as drusen, in a retinal image [84].

In recent years, with progress in computer vision and machine learning, increasingly advanced techniques have been introduced for drusen detection, e.g., novel feature descriptors such as ICA [85] and biologically inspired features [76], feature selection schemes such as AdaBoost [86], and parameter selection approaches [64]. A recent system, Thalia [272], detects drusen lesions and assesses AMD using a hierarchical word transform (HWI) representation.

There are other methods using background modeling [94] and saliency [102]. The background modeling method [94] first segments the healthy structures of the eye, such as the blood vessels; the inverse of the healthy parts then provides the drusen detection result. The saliency-based method [102] first detects salient regions and then classifies them as blood vessel, hard exudates or drusen. In [95], a general framework was proposed to detect and characterize target lesions concurrently. In this framework, a feature space covering the confounders of both true positives (e.g., drusen near other drusen) and false positives (e.g., blood vessels) is automatically derived from a set of reference image samples; a Haar filter was then used to build the transformation space, and Principal Component Analysis (PCA) was used to generate the optimal filter.

Since drusen are one of the main early symptoms of AMD, most of the existing work on AMD detection takes drusen detection and segmentation as its basis. The overlap of drusen with the macula is used to measure the severity of AMD [103, 104], so the performance of such methods is limited by the accuracy of drusen detection. To bypass drusen detection and segmentation, researchers have in recent years begun to seek methods that detect AMD directly from DFPs. An early attempt in this direction was a histogram-based representation followed by Case-Based Reasoning [111]. Good results were produced; however, the observations indicated that relying on the retinal image colour distribution alone was not sufficient, so the authors upgraded the method with a spatial histogram technique that included both colour and spatial information [112]. The latest work from the same team comprises a hierarchical image decomposition stored in a tree structure, to which a weighted frequent sub-tree mining algorithm is applied; the identified sub-graphs are then incorporated into a feature vector representation (one vector per image) to which classification techniques can be applied [119, 120]. These methods detect AMD from a single image. Another strategy is content-based image retrieval: region-based and lesion-based features were tested and gave satisfactory performance [127, 128].

The above-mentioned works detect dry (non-exudative) AMD. To date, there are few works on wet AMD detection, the exception being [121], whose basic idea is that the vessels in the DFP appear different under dry and wet AMD. The method first detected the vessels using a wavelet-based method; subsequently, the area, standard deviation and other features describing the distribution of the vessels were used for classification.

OCT imaging for detecting AMD
As mentioned in Section Imaging modalities, it is easier to observe edema and exudates in OCT (Figure 12). In [31], a method for automated characterization of the normal macular appearance in SD-OCT volumes was reported, together with a general approach for local retinal abnormality detection. Ten intraretinal layers were automatically segmented, and the 3-D image dataset was flattened to remove motion-based artifacts. From the flattened OCT data, 23 features were extracted locally in each layer to characterize texture and thickness properties across the macula. The normal ranges of layer-specific feature variations were derived from 13 SD-OCT volumes depicting normal retinas. Abnormalities were then detected by classifying the local differences between the normal appearance and the retinal measures in question. This approach was applied to determine the footprints of fluid-filled regions, SEADs (Symptomatic Exudate-Associated Derangements), in 78 SD-OCT volumes from 23 repeatedly imaged patients with choroidal neovascularization (CNV), intra- and sub-retinal fluid, and pigment epithelial detachment. In [36], the authors improved this method with a probabilistically constrained combined graph search-graph cut method, which refines the candidate SEADs by integrating the candidate volumes into the graph cut cost function as probability constraints.
Figure 12

Example of AMD related exudate in OCT image. (a) OCT image showing an eye with severe exudate. (b) OCT image showing an eye with medium exudate. (c) OCT image showing a normal eye.

Pathological myopia

Research on CAD of PM has mainly relied on DFP, but recently there have been efforts to explore the use of SS-OCT for PM analysis.

DFP for detecting PM

An observable sign for PM detection is PPA, an atrophy of pre-existing retinal tissue. The APAMEA system proposed by Liu et al. [71] was the first CAD system for PM detection. In APAMEA, features were extracted from a sectional texture map generated by entropy analysis in the optic disc ROI, and SVM learning achieved 85% specificity and 90% sensitivity. Later, Tan et al. [72] reported a PPA detection method using a variational level-set approach. The method locates PPA by computing the difference between two areas, i.e., the optic disc with PPA and the underlying optic disc, and reported 95% accuracy. Both methods were based on a rather small data set of only 40 images. A recent advance in PPA detection was reported in [73], which was tested on a much larger dataset containing 1584 images. The authors presented a biologically inspired feature (BIF) approach for the detection of PPA; BIF mimics the cortical processes of visual perception. In this approach, a focal region (ROI) is segmented from the retinal image, the BIF is extracted, and selective pair-wise discriminant analysis is applied for negative and positive sparse transfer learning. The authors reported that negative sparse transfer learning is superior to positive transfer learning for their task. The method achieves an accuracy of more than 90% in detecting PPA.

Different features have been extracted from DFPs for PM detection: APAMEA extracted a texture feature through entropy analysis, while [73] used BIF for sparse learning. The study by Zhang et al. [20] developed a combined approach integrating SIFT features extracted from DFPs with genetic information as well as other clinical data, and demonstrated that by learning from multiple data sources the classifier can achieve more accurate predictions. It is the first reported study to combine heterogeneous data, including image, genetic and text data, for PM detection.

SS-OCT imaging for detecting PM

SS-OCT uses a frequency-swept laser as its light source [41] and, in practice, exhibits less sensitivity roll-off with tissue depth than conventional SD-OCT instruments. Current SS-OCT instruments use a longer wavelength, generally in the 1 μm range, which improves their ability to penetrate deeper into tissue compared with conventional SD-OCT instruments [282]. Though CAD systems based on SS-OCT have not yet emerged, some clinical studies have found that SS-OCT could be a powerful tool for PM analysis. A recent study conducted in Japan [48] reported that SS-OCT can detect optic nerve pits or pit-like changes in PM eyes; such changes are not detectable by other imaging modalities.

Other diseases

A brief review of cataract grading and CAD for corneal opacity is given below.

Cataract

Cataract is characterized by a cloudiness (opacity) in the eye lens which obstructs vision and can even lead to blindness. Cataract can be categorized into three types based on the location of opacity within the lens structure: nuclear, cortical and Posterior Sub-Capsular (PSC) [283]. Nuclear cataract (NC) begins at the center of the lens and spreads towards the surface. Cortical cataract begins at the outer rim of the lens and moves towards the center. PSC forms at the back of the lens. NC is graded using slit-lamp images of the eye while Cortical cataract (CC) and PSC are graded from the retro-illumination images of the eye lens. The grades are usually real numbers in a range that depends on the grading system used.

Figure 13(b) and (c) show slit-lamp images of a normal eye and an NC-affected eye, respectively. It can be seen that the lens nucleus is the affected part, and consequently NC is graded by extracting features from the eye lens. The extracted features include the intensity of the sulcus region [52] (Figure 13(a)), the luminance profile in the eye lens [51], and color and intensity based features extracted from the nuclear region [50, 284]. The accuracy of NC grading can be quantified by the average grading difference, i.e., the average difference between the actual and predicted grades over all test samples; a lower value is better. The average grading difference of the state-of-the-art work [284] is 0.336.

CC and PSC usually co-occur and are graded using the retro-illumination images shown in Figure 14. Retro-illumination images usually come in pairs, as each lens is imaged twice: one image focuses on the anterior cortex (anterior image, Figure 14 top row) and the other, 3-5 mm posterior to it, close to the posterior capsule (posterior image, Figure 14 bottom row). Most of the CC is present in the anterior cortex, so it is sharply visible in the anterior image; PSC, on the other hand, is clearer in the posterior image.
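The average grading difference used to assess NC grading is simply the mean absolute difference between actual and predicted grades. A sketch with hypothetical grades on a decimal scale (these are not values from [284]):

```python
def average_grading_difference(actual, predicted):
    """Mean absolute difference between clinician-assigned and
    predicted cataract grades over the test set; lower is better."""
    assert len(actual) == len(predicted)
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical grades, for illustration only.
actual    = [2.0, 3.5, 1.0, 4.0]
predicted = [2.3, 3.2, 1.4, 4.1]
print(round(average_grading_difference(actual, predicted), 3))  # prints 0.275
```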
Figure 13

Parts of the right eye as viewed in a slit-lamp image [50]. (a) Important parts of the eye as seen in a slit lamp image. (b) Slit lamp image of a normal eye. (c) Slit lamp image of an eye affected with Nuclear Cataract.

Figure 14

Examples of retroillumination images. Retroillumination images corresponding to (a) a normal eye lens, (b) lens with 61.07% of cortical cataract, and (c) lens with 4.95% of cortical opacities and 31.28% of PSC opacities. Top row shows anterior images while the bottom row shows posterior images [58].

Typical features used for grading CC and PSC include enhanced texture features [56]; intensity, edge, size and spatial location based features [285]; entropy [57]; and Symlet wavelet coefficients combined with intensity features [58]. In [58], the grading accuracy is reported as the correlation of the predicted grades with the actual grades, with a correlation coefficient of 0.7392.
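The correlation-based accuracy measure is the Pearson coefficient between predicted and actual grades. A sketch with hypothetical grade pairs (not data from [58]):

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length
    sequences of grades (1.0 = perfect linear agreement)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical actual/predicted grade pairs, for illustration only.
actual    = [1.0, 2.0, 3.0, 4.0, 5.0]
predicted = [1.2, 1.8, 3.4, 3.6, 5.1]
print(round(pearson_r(actual, predicted), 3))  # prints 0.979
```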

Corneal opacity

Corneal haze describes the condition in which the cornea becomes cloudy or opaque. The cornea is normally clear, so corneal haze can greatly impair vision. Although haze can occur in any part of the cornea, it is most often found within the thicker, middle layer of the cornea, called the stroma. Corneal haze is most often caused by inflammatory cells and other debris that are activated during trauma, infection or surgery, and it sometimes occurs after laser vision correction procedures.

Slit-lamp imaging has been used clinically by physicians to estimate corneal haze manually, but not automatically; for example, it was used to observe corneal haze after excimer laser ablation of the cornea [286, 287]. The slit lamp suffers from reduced resolution caused by interference of light reflected from structures above and below the plane of examination. Confocal microscopy uses a condenser to focus the light source within a small area of the cornea and an objective that shares the same focal point (hence the term confocal) as the condenser, making it possible to avoid light contamination from out-of-focus structures. Compared with the slit lamp, the confocal microscope offers much higher spatial resolution; moreover, it allows real-time viewing of structures in the living cornea at the cellular level in four dimensions (x, y, z and time), and it can be used to measure corneal haze [288, 289].

The above imaging modalities have been used clinically with manual assessment but, as far as we know, no automatic method based on them exists to date. The existing automatic methods take the most straightforward route: examining frontal photographs of the eye [290, 291]. In [291], five conditions are considered: cataract, iridocyclitis, corneal haze, corneal arcus and normal eyes. In the proposed method, each image is first preprocessed using histogram equalization and K-means clustering; the extracted features are then fed into an RBF-based neural network classifier.
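The K-means preprocessing step can be sketched as a 1-D clustering of pixel intensities. This toy implementation (not the actual system of [291], and with hypothetical pixel values) separates dark and bright pixel populations:

```python
def kmeans_1d(values, k=2, iters=10):
    """Minimal 1-D K-means on pixel intensities: alternately assign
    each value to its nearest centre and recompute centres as
    cluster means; returns the sorted final centres."""
    centres = sorted(values)[::max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda c: abs(v - centres[c]))
            clusters[idx].append(v)
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return sorted(centres)

# Hypothetical pixel intensities: a dark and a bright population.
pixels = [12, 15, 14, 13, 200, 210, 205, 198]
print(kmeans_1d(pixels))  # prints [13.5, 203.25]
```

In a full pipeline, each pixel would then be labelled by its nearest centre, yielding a coarse segmentation from which classifier features can be extracted.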

Abbreviations

AMD: Age-related macular degeneration
AM-FM: Amplitude modulation-frequency modulation
BM: Bruch’s membrane
BMO: Bruch’s membrane opening
CAD: Computer aided diagnosis
CASNET: Causal association network
CC: Cortical cataract
CCT: Central corneal thickness
CDR: Cup-to-disc ratio
CDSS: Clinical decision support systems
CME: Cystoid macular edema
CNV: Choroidal neovascularization
CSLO: Confocal scanning laser ophthalmoscopy
DFP: Digital fundus photograph
DH: Disc haemorrhage
DR: Diabetic retinopathy
FBIF: Focal biologically inspired feature
GRI: Glaucoma risk index
GWAS: Genome-wide association studies
HALT: Histogram based adaptive local thresholding
HE: Hard exudates
HRT: Heidelberg retina tomography
ICA: Independent component analysis
IOP: Intra-ocular pressure
LASSO: Least absolute shrinkage and selection operator
LDA: Linear discriminant analysis
MGG: Mixture of generalized Gaussian
MLP: Multilayer perceptron
MOG: Mixture of Gaussian
MRA: Moorfields regression analysis
MRI: Magnetic resonance imaging
MTA: Major temporal arcade
NC: Nuclear cataract
NCO: Neural canal opening
NRR: Neuro-retinal rim
OCT: Optical coherence tomography
OMIM: Online Mendelian inheritance in man
ONH: Optic nerve head
PCA: Principal component analysis
PM: Pathological myopia
PPA: Parapapillary atrophy
PSC: Posterior sub-capsular
QDA: Quadratic discriminant analysis
RNFL: Retinal nerve fibre layer
RNFLT: Retinal nerve fibre layer thickness
ROI: Region of interest
RPE: Retinal pigment epithelium
SAP: Standard automated perimetry
SD-OCT: Spectral domain-optical coherence tomography
SEAD: Symptomatic exudate-associated derangements
SIFT: Scale-invariant feature transform
SLIC: Simple linear iterative clustering
SLP: Scanning laser polarimetry
SNP: Single-nucleotide polymorphism
SNR: Signal to noise ratio
SS-OCT: Swept source-optical coherence tomography
SVDD: Support vector data description
SVM: Support vector machines
TCA: Topographic change analysis
VCD: Vertical cup diameter
VDD: Vertical disc diameter

Declarations

Acknowledgements

The authors would like to express their appreciation to the Agency for Science, Technology and Research Singapore for providing research funding and facilities for this interdisciplinary research. The authors are also particularly grateful to the Singapore Eye Research Institute for clinical advice and assistance.

Authors’ Affiliations

(1)
Institute for Infocomm Research
(2)
Nanyang Technological University
(3)
Singapore National Eye Centre, Third Hospital Avenue

References

  1. Robinson BE: Prevalence of asymptomatic eye disease . Can J Optom. 2003, 65 (5): 175-180.Google Scholar
  2. National Eye Institute: Don’t lose sight of diabetic eye disease: information for people with diabetes . NIH Publ. 2004, 04: 3252-Google Scholar
  3. Fujita H, Uchiyama Y, Nakagawa T, Fukuoka D, Hatanaka Y, Hara T, Lee G, Hayashi Y, Ikedo Y, Gao X, Zhou X: Computer-aided diagnosis: The emerging of three CAD systems induced by Japanese health care needs . Comput Methods Prog Biomed. 2008, 92: 238-248.Google Scholar
  4. Wong T, Knudtson M, Klein R, Klein B, Meuer S, Hubbard L: Computer-assisted measurement of retinal vessel diameters in the Beaver Dam Eye Study: methodology, correlation between eyes, and effect of refractive errors . Ophthalmology. 2004, 111 (6): 1183-1190.PubMedGoogle Scholar
  5. Cheung C, Zheng Y, Hsu W, Lee M, Lau Q, Mitchell P, Wang J, Klein R, Wong T: Retinal vascular tortuosity, blood pressure, and cardiovascular risk factors . Ophthalmology. 2011, 118 (5): 812-818.PubMedGoogle Scholar
  6. Perumalsamy N, Prasad N, Sathya S, Ramasamy K: Software for reading and grading diabetic retinopathy: Aravind diabetic retinopathy screening 3.0 . Diabetes Care. 2007, 30 (9): 2302-2306.PubMedGoogle Scholar
  7. SERI Ocular Reading Centre . [http://www.seri.com.sg/Research\%20Professionals/Page.aspx?id=142],
  8. Sommer A, Tielsch JM, Katz J, Quigley HA, Gottsch JD, Javitt J, Singh K: Relationship between intraocular pressure and primary open angle glaucoma among white and black Americans. The Baltimore Eye Survey . Arch ophthalmol. 1991, 109 (8): 1090-1095.PubMedGoogle Scholar
  9. Wong TY, Klein R, Klein BE, Tielsch JM, Hubbard L, Nieto FJ: Retinal microvascular abnormalities and their relationship with hypertension, cardiovascular disease, and mortality . Surv Ophthalmol. 2001, 46 (1): 59-80.PubMedGoogle Scholar
  10. Colenbrander A: Measuring vision and vision loss. Duane’s Ophthalmology. 2009, Philadelphia, PA: Lippincott Williams & WilkinsGoogle Scholar
  11. Makeeva OA, Markova VV, Puzyrev VP: Public interest and expectations concerning commercial genotyping and genetic risk assessment . Personalized Med. 2009, 6 (3): 329-341.Google Scholar
  12. Abràmoff MD, Garvin MK, Sonka M: Retinal imaging and image analysis . IEEE Rev Biomed Eng. 2010, 3: 169-208.PubMedGoogle Scholar
  13. Bernardes R, Serranho P, Lobo C: Digital ocular fundus imaging: a review . Ophthalmologica. 2011, 226 (4): 161-181.PubMedGoogle Scholar
  14. Weiss S, Kulikowski C, Amarel S, Safir A: A model-based method for computer-aided medical decision making . Artif Intell. 1978, 11: 145-72.Google Scholar
  15. Kulikowski CA, Weiss SM: Representation of expert knowledge for consultation: the CASNET and EXPERT projects . Artif Intell Med. 1982, 51 ,Google Scholar
  16. Chan K, Lee TW, Sample PA, Goldbaum MH, Weinreb RN, Sejnowski TJ: Comparison of machine learning and traditional classifiers in glaucoma diagnosis . IEEE Trans Biomed Eng. 2002, 49 (9): 963-974.PubMedGoogle Scholar
  17. Bizios D, Heijl A, Bengtsson B: Integration and fusion of standard automated perimetry and optical coherence tomography data for improved automated glaucoma diagnostics . BMC Ophthalmology. 2011, 11 (1): 20-PubMedPubMed CentralGoogle Scholar
  18. Kourkoutas D, Karanasiou IS, Tsekouras G, Moshos M, Iliakis E, Georgopoulos G: Glaucoma risk assessment using a non-linear multivariable regression method . Comput Methods Programs Biomed. 2012, 108 (3): 1149-59.PubMedGoogle Scholar
  19. Liu J, Zhang Z, Wong D, Xu Y, Yin F, Cheng J, Tan N, Kwoh C, Xu D, Tham Y, Aung T, Wong T: Automatic glaucoma diagnosis through medical imaging informatics . J Am Med Assoc. 2013, 20 (6): 1021-7.Google Scholar
20. Zhang Z, Xu Y, Liu J, Wong DWK, Kwoh CK, Shaw SM, Wong TY: Automatic diagnosis of pathological myopia from heterogeneous biomedical data. PLoS ONE. 2013, 8 (6): e65736.
21. World Health Organization: Blinding trachoma fact sheet. 2014.
22. World Health Organization: Onchocerciasis fact sheet. 2014.
23. Attebo K, Mitchell P, Smith M: Visual acuity and the causes of visual loss in Australia. Ophthalmology. 1996, 103 (3): 357-64.
24. Foong A, Saw S, Loo J, Shen S, Loon S, Rosman M, Aung T, Tan D, Tai E, Wong T: Rationale and methodology for a population-based study of eye diseases in Malay people: The Singapore Malay eye study (SiMES). Ophthalmic Epidemiol. 2007, 14: 25-35.
25. Pan CW, Wong TY, Chang L, Lin XY, Lavanya R, Zheng YF, Kok YO, Wu RY, Aung T, Saw SM: Ocular biometry in an urban Indian population: the Singapore Indian eye study (SINDI). Invest Ophthalmol Vis Sci. 2011, 52 (9): 6636-6642.
26. Sng CC, Foo LL, Cheng CY, Allen JC, He M, Krishnaswamy G, Nongpiur ME, Friedman DS, Wong TY, Aung T: Determinants of anterior chamber depth: the Singapore Chinese Eye Study. Ophthalmology. 2012, 119 (6): 1143-50.
27. Ryan SJ, Schachat AP: Retina. Elsevier Health Sci. 2012.
28. Matsui M, Tashiro T, Matsumoto K, Yamamoto S: A study on automatic and quantitative diagnosis of fundus photographs. I. Detection of contour line of retinal blood vessel images on color fundus photographs. Nippon Ganka Gakkai Zasshi. 1973, 77 (8): 907.
29. Baudoin C, Lay B, Klein J: Automatic detection of microaneurysms in diabetic fluorescein angiography. Revue d'épidémiologie et de santé publique. 1984, 32 (3–4): 254.
30. Narasimha-Iyer H, Can A, Roysam B, Stewart C, Tanenbaum H, Majerovics A, Singh H: Robust detection and classification of longitudinal changes in color retinal fundus images for monitoring diabetic retinopathy. IEEE Trans Biomed Eng. 2006, 53 (6): 1084-1098.
31. Quellec G, Lee K, Dolejsi M, Garvin MK, Abramoff MD, Sonka M: Three-dimensional analysis of retinal layer texture: identification of fluid-filled regions in SD-OCT of the macula. IEEE Trans Med Imaging. 2010, 29 (6): 1321-1330.
32. Liu J, Wong D, Lim J, Li H, Tan N, Zhang Z, Wong T, Lavanya R: ARGALI: An automatic cup-to-disc ratio measurement system for glaucoma analysis using level-set image processing. Proceedings of 13th International Conference on Biomedical Engineering. 2009, Heidelberg: Springer Berlin, 559-562.
33. Huang W, Chan KL, Li H, Lim JH, Liu J, Wong TY: A computer assisted method for nuclear cataract grading from slit-lamp images using ranking. IEEE Trans Med Imaging. 2011, 30 (1): 94-107.
34. Saine P, Tyler M: Ophthalmic photography: retinal photography, angiography, and electronic imaging. 2004, Butterworth-Heinemann.
35. Marrugo AG, Millan MS, Cristobal G, Gabarda S, Sorel M, Sroubek F: Image analysis in modern ophthalmology: from acquisition to computer assisted diagnosis and telemedicine. SPIE Photonics Europe. 2012, Bellingham: International Society for Optics and Photonics, 84360C.
36. Chen X, Niemeijer M, Zhang L, Lee K, Abràmoff MD, Sonka M: Three-dimensional segmentation of fluid-associated abnormalities in retinal OCT: probability constrained graph-search-graph-cut. IEEE Trans Med Imaging. 2012, 31 (8): 1521-1531.
37. Wilkins GR, Houghton OM, Oldenburg AL: Automated segmentation of intraretinal cystoid fluid in optical coherence tomography. IEEE Trans Biomed Eng. 2012, 59 (4): 1109-1114.
38. Hu Z, Niemeijer M, Lee K, Abramoff MD, Sonka M, Garvin MK: Automated segmentation of the optic disc margin in 3D optical coherence tomography images using a graph-theoretic approach. Proceedings of SPIE Conference on Medical Imaging. 2009, Bellingham: International Society for Optics and Photonics, 72620U.
39. Hu Z, Niemeijer M, Lee K, Abramoff MD, Sonka M, Garvin MK: Automated segmentation of the optic canal in 3D spectral-domain OCT of the optic nerve head (ONH) using retinal vessel suppression. Invest Ophthalmol Vis Sci. 2009, 50 (1): 33-44.
40. Kwon YH, Hu Z, Abramoff MD, Lee K, Garvin MK: Automated segmentation of neural canal opening and optic cup in SD-OCT images. American Glaucoma Society 20th Annual Meeting, Naples, FL, USA. 2010.
41. Yun S, Bouma B: Wavelength swept lasers. Optical Coherence Tomography: Technology and Applications. Edited by: Drexler W, Fujimoto JG. 2008, New York: Springer.
42. Grulkowski I, Liu JJ, Potsaid B, Jayaraman V, Lu CD, Jiang J, Cable AE, Duker JS, Fujimoto JG: Retinal, anterior segment and full eye imaging using ultrahigh speed swept source OCT with vertical-cavity surface emitting lasers. Biomed Optics Express. 2012, 3 (11): 2733-2751.
43. Spaide RF, Akiba M, Ohno-Matsui K: Evaluation of peripapillary intrachoroidal cavitation with swept source and enhanced depth imaging optical coherence tomography. Retina. 2012, 32 (6): 1037-1044.
44. Lee K, Niemeijer M, Garvin MK, Kwon YH, Sonka M, Abramoff MD: Segmentation of the optic disc in 3D-OCT scans of the optic nerve head. IEEE Trans Med Imaging. 2010, 29: 159-168.
45. Lee K, Niemeijer M, Garvin MK, Kwon YH, Sonka M, Abramoff MD: 3D segmentation of the rim and cup in spectral-domain optical coherence tomography volumes of the optic nerve head. Proceedings of SPIE Conference on Medical Imaging. 2009, Bellingham: International Society for Optics and Photonics, 7262-7283.
46. Abramoff MD, Lee K, Niemeijer M, Alward W, Greenlee EC, Garvin MK, Sonka M, Kwon YH: Automated segmentation of the cup and rim from spectral domain OCT of the optic nerve head. Invest Ophthalmol Vis Sci. 2009, 50 (12): 5778-5784.
47. Ohno-Matsui K, Akiba M, Modegi T, Tomita M, Ishibashi T, Tokoro T, Moriyama M: Association between shape of sclera and myopic retinochoroidal lesions in patients with pathologic myopia. Invest Ophthalmol Vis Sci. 2012, 53 (1): 6046-6061.
48. Ohno-Matsui K, Akiba M, Moriyama M, Shimada N, Ishibashi T, Tokoro T, Spaide RF: Acquired optic nerve and peripapillary pits in pathologic myopia. Ophthalmology. 2012, 119 (8): 1685-1692.
49. Hu Z, Abramoff MD, Kwon YH, Lee K, Garvin MK: Automated segmentation of neural canal opening and optic cup in 3D spectral optical coherence tomography images of the optic nerve head. Invest Ophthalmol Vis Sci. 2010, 51 (11): 5708-5717.
50. Li H, Lim J, Liu J, Mitchell P, Tan A, Wang J, Wong T: A computer-aided diagnosis system of nuclear cataract. IEEE Trans Biomed Eng. 2010, 57 (7): 1690-1698.
51. Duncan D, Shukla O, West S, Schein O: New objective classification system for nuclear opacification. J Opt Soc Am A, Opt Image Sci Vis. 1997, 14 (6): 1197-1204.
52. Fan S, Dyer C, Hubbard L, Klein B: An automatic system for classification of nuclear sclerosis from slit-lamp photographs. Proceedings Int Conf MICCAI, Lecture Notes in Computer Science. 2003, Heidelberg: Springer Berlin, 592-601.
53. Nidek Co. Ltd: Anterior eye segment analysis system: EAS-1000. Operator's Manual, Nidek, Japan. 1991.
54. Gershenzon A, Robman L: New software for lens retro-illumination digital image analysis. Aust N Z J Ophthalmol. 1999, 27 (3–4): 170-172.
55. Klein B, Klein R, Linton K, Magli Y, Neider M: Assessment of cataracts from photographs in the Beaver Dam Eye Study. Ophthalmology. 1990, 97 (11): 1428-1433.
56. Gao X, Li H, Lim JH, Wong TY: Computer-aided cataract detection using enhanced texture features on retro-illumination lens images. Proceedings of IEEE Int. Conf. Image Processing. 2011, IEEE, 1565-1568.
57. Chow YC, Gao X, Li H, Lim JH, Sun Y, Wong TY: Automatic detection of cortical and PSC cataracts using texture and intensity analysis on retro-illumination lens images. Conf Proceedings of IEEE Eng Med Biol Soc. 2011, IEEE, 5044-5047.
58. Gao X, Wong DWK, Ng TT, Cheung CYL, Cheng CY, Wong TY: Automatic grading of cortical and PSC cataracts using retroillumination lens images. Proceedings of the 11th Asian conference on Computer Vision-Volume Part II. 2012, Heidelberg: Springer Berlin, 256-267.
59. Sehi M, Guaqueta D, Feuer W, Greenfield D: Scanning laser polarimetry with variable and enhanced corneal compensation in normal and glaucomatous eyes. Am J Ophthalmol. 2007, 143 (2): 272-279.
60. Lee PJ, Liu CJ, Wojciechowski R, Bailey-Wilson JE, Cheng CY: Structure-function correlations using scanning laser polarimetry in primary angle-closure glaucoma and primary open-angle glaucoma. Am J Ophthalmol. 2010, 149 (5): 817-825.
61. Vermeer K, Lo B, Zhou Q, Vos F, Vossepoel A, Lemij H: Event-based progression detection strategies using scanning laser polarimetry images of the human retina. Comput Biol Med. 2011, 41 (9): 857-864.
62. Medeiros F, Zangwill L, Bowd C, Weinreb R: Comparison of the GDx VCC scanning laser polarimeter, HRT II confocal scanning laser ophthalmoscope, and stratus OCT optical coherence tomograph for the detection of glaucoma. Arch Ophthalmol. 2004, 122 (6): 827-837.
63. Ben Sbeh Z, Cohen LD, Mimoun G, Coscas G: A new approach of geodesic reconstruction for drusen segmentation in eye fundus images. IEEE Trans Med Imaging. 2001, 20 (12): 1321-1333.
64. Karnowski TP, Govindasamy VP, Tobin KW, Chaum E, Abramoff M: Retina lesion and microaneurysm segmentation using morphological reconstruction methods with ground-truth data. Conf Proceedings of IEEE Eng Med Biol Soc. 2008, IEEE, 5433-5436.
65. Rapantzikos K, Zervakis M, Balas K: Detection and segmentation of drusen deposits on human retina: Potential in the diagnosis of age-related macular degeneration. Med Image Anal. 2003, 7: 95-108.
66. Hoover A, Kouznetsoza V, Goldbaum M: Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response. IEEE Trans Med Imaging. 2000, 19 (3): 203-210.
67. Lowell J, Hunter A, Steel D, Basu A, Ryder R, Kennedy L: Measurement of retinal vessel widths from fundus images based on 2-D modeling. IEEE Trans Med Imaging. 2004, 23 (10): 1196-1204.
68. Heneghan C, Flynn J, O'Keefe M, Cahill M: Characterization of changes in blood vessel width and tortuosity in retinopathy of prematurity using image analysis. Med Image Anal. 2002, 6 (4): 407-429.
69. Joshi G, Sivaswamy J, Krishnadas S: Optic disk and cup segmentation from monocular color retinal images for glaucoma assessment. IEEE Trans Med Imaging. 2011, 30 (6): 1192-1205.
70. Hatanaka Y, Noudo A, Muramatsu C, Sawada A, Hara T, Yamamoto T, Fujita H: Automatic measurement of vertical cup-to-disc ratio on retinal fundus images. Medical Biometrics, Lect Notes Comput Sci. 2010, 6165: 64-72.
71. Liu J, Wong DWK, Lim JH, Tan NM, Zhang Z, Li H, Yin F, Lee BH, Saw SM, Tong L, Wong TY: Detection of pathological myopia by PAMELA with texture-based features through an SVM approach. J Healthcare Eng. 2010, 1 (1): 1-12.
72. Tan NM, Liu J, Wong DWK, Lim JH, Zhang Z, Lu S, Li H, Saw SM, Wong TY: Automatic detection of pathological myopia using variational level set. Conf Proceedings IEEE Eng Med Biol Soc. 2009, IEEE, 3609-3612.
73. Cheng J, Tao D, Liu J, Wong DWK, Tan NM, Wong TY, Saw SM: Peripapillary atrophy detection by sparse biologically inspired feature manifold. IEEE Trans Med Imaging. 2012, 31 (12): 2355-2365.
74. Smith RT, Chan JK, Nagasaki T, Ahmad UF, Barbazetto I, Sparrow J, Figueroa M, Merriam J: Automated detection of macular drusen using geometric background leveling and threshold selection. Arch Ophthalmol. 2005, 123 (2): 200-206.
75. Lee B, Adam H: Drusen detection in a retinal image using multi-level analysis. Proceedings of Int Conf MICCAI. 2003, Heidelberg: Springer Berlin, 618-625.
76. Freund D, Bressler N, Burlina P: Automated detection of drusen in the macula. Proceedings of IEEE Int Symposium Biomedical Imaging. 2009, IEEE, 61-64.
77. Tamura S, Okamoto Y, Yanashima K: Zero-crossing interval correction in tracing eye-fundus blood vessels. Pattern Recogn. 1988, 21 (3): 227-233.
78. Kochner B, Schulmann D, Michaelis M, Mann G, Englemeier K: Course tracking and contour extraction of retinal vessels from colour fundus photographs: most efficient use of steerable filters for model based image analysis. Proceedings of SPIE Conference on Medical Imaging. 1998, Bellingham: International Society for Optics and Photonics, 755-761.
79. Antal B, Hajdu A: An ensemble-based system for microaneurysm detection and diabetic retinopathy grading. IEEE Trans Biomed Eng. 2012, 59 (6): 1720-1726.
80. Wong DWK, Liu J, Lim JH, Jia X, Yin F, Li H, Wong TY: Level-set based automatic cup-to-disc ratio determination using retinal fundus images in ARGALI. Proceedings 30th Annual Intl conf. of the IEEE Eng Med Biol Soc. 2008, IEEE, 2266-2269.
81. Bock R, Meier J, Michelson G, Nyúl L, Hornegger J: Classifying glaucoma with image-based features from fundus photographs. Lect Notes Comput Sci. 2007, 4713: 355-364.
82. Meier J, Bock R, Michelson G, Nyúl LG, Hornegger J: Effects of preprocessing eye fundus images on appearance based glaucoma classification. Lect Notes Comput Sci. 2007, 4673: 165-172.
83. Liu J, Wong D, Tan N, Zhang Z, Lu S, Lim J, Li H, Saw S, Tong L, Wong T: Automatic classification of pathological myopia in retinal fundus images using PAMELA. Proceedings of SPIE Conference on Medical Imaging. 2010, Bellingham: International Society for Optics and Photonics, 76240G.
84. Barriga E, Murray V, Agurto C, Pattichis M, Russell S, Abramoff M, Davis H, Soliz P: Multi-scale AM-FM for lesion phenotyping on age-related macular degeneration. Proceedings IEEE Int Symp Computer-Based Medical Systems. 2009, IEEE, 1-5.
85. Soliz P, Russell SR, Abramoff MD, Murillo S, Pattichis M, Davis H: Independent component analysis for vision-inspired classification of retinal images with age-related macular degeneration. Proceedings of IEEE Southwest Symposium on Image Analysis and Interpretation. 2008, IEEE, 65-68.
86. Zheng Y, Vanderbeek B, Daniel E, Stambolian D, Maguire M, Brainard D, Gee J: An automated drusen detection system for classifying age-related macular degeneration with color fundus photographs. Proceedings on IEEE International Symposium on Biomedical Imaging. 2013, IEEE, 1440-1443.
87. Harangi B, Lazar I, Hajdu A: Automatic exudate detection using active contour model and regionwise classification. Conf Proceedings of IEEE Eng Med Biol Soc. 2012, IEEE, 5951-4.
88. Martins CIO, Medeiros F, Veras RM, Bezerra FN, Cesar R: Evaluation of retinal vessel segmentation methods for microaneurysms detection. Proceedings of IEEE Int. Conf. Image Processing. 2009, IEEE.
89. Jaafar HF, Nandi AK, Al-Nuaimy W: Detection of exudates in retinal images using a pure splitting technique. Conf Proceedings of IEEE Eng Med Biol Soc. 2010, IEEE.
90. Bock R, Meier J, Nyúl L, Michelson G: Glaucoma risk index: Automated glaucoma detection from color fundus images. Med Image Anal. 2010, 14: 471-481.
91. Cheng J, Tao D, Liu J, Wong D, Lee B, Mani B, Wong T, Aung T: Focal biologically inspired feature for glaucoma type classification. Proceedings of Int Conf MICCAI. 2011, Heidelberg: Springer Berlin, 91-98.
92. Xu Y, Liu J, Lin S, Xu D, Cheung C, Aung T, Wong T: Efficient optic cup detection from intra-image learning with retinal structure priors. Proceedings Int Conf MICCAI. 2012, Heidelberg: Springer Berlin, 58-65.
93. Cheng J, Wong DWK, Cheng X, Liu J, Tan NM, Bhargava M, Cheung CMG, Wong TY: Early age-related macular degeneration detection by focal biologically inspired feature. Proceedings of IEEE Int. Conf. Image Processing. 2012, IEEE, 2805-2808.
94. Köse C, Sevik U, Gencalioglu O, Ikibas C, Kayikicioglu T: A statistical segmentation method for measuring age-related macular degeneration in retinal fundus images. J Med Syst. 2010, 34: 1-13.
95. Quellec G, Russell SR, Abràmoff MD: Optimal filter framework for automated, instantaneous detection of lesions in retinal images. IEEE Trans Med Imaging. 2011, 30 (2): 523-533.
96. Walter T, Massin P, Erginay A, Ordonez R, Jeulin C, Klein JC: Automatic detection of microaneurysms in color fundus images. Med Image Anal. 2007, 11 (6): 555-566.
97. Lazar I, Hajdu A: Retinal microaneurysm detection through local rotating cross-section profile analysis. IEEE Trans Med Imaging. 2013, 32 (2): 400-407.
98. Fleming AD, Philip S, Goatman KA, Olson JA, Sharp PF: Automated microaneurysm detection using local contrast normalization and local vessel detection. IEEE Trans Med Imaging. 2006, 25 (9): 1223-1232.
99. Ohwada H, Daidoji M, Shirato S, Mizoguchi F: Learning first order rules from image applied to glaucoma diagnosis. Proceedings of Pacific Rim International Conference on Artificial Intelligence. 1998, Heidelberg: Springer Berlin, 494-505.
100. Nyul LG: Retinal image analysis for automated glaucoma risk evaluation. Proceedings of SPIE Conference on Medical Imaging. 2009, Bellingham: International Society for Optics and Photonics, 74971C1-9.
101. McIntyre R, Heywood MI, Artes PH, Abidi SSR: Toward glaucoma classification with moment methods. Proceedings First Canadian Conference on Computer and Robot Vision. 2004, IEEE, 265-272.
102. Ujjwal K, Chakravarty A, Sivaswamy J: Visual saliency based bright lesion detection and discrimination in retinal images. Proceedings IEEE 10th Int Symposium Biomedical Imaging: From Nano to Macro. 2013, IEEE, 1428-1431.
103. Medhi JP, Nath MK, Dandapat S: Automatic grading of macular degeneration from color fundus images. Proceedings of World Congress on Information and Communication Technologies. 2012, IEEE, 511-514.
104. Liang Z, Wong DW, Liu J, Chan KL, Wong TY: Towards automatic detection of age-related macular degeneration in retinal fundus images. Conf Proceedings of IEEE Eng Med Biol Soc. 2010, IEEE, 4100-4103.
105. Esmaeili M, Rabbani H, Dehnavi AM, Dehghani A: A new curvelet transform based method for extraction of red lesions in digital color retinal images. Proceedings of IEEE Int. Conf. Image Processing. 2010, IEEE.
106. Ravishankar S, Jain A, Mittal A: Automated feature extraction for early detection of diabetic retinopathy in fundus images. Proceedings IEEE Conf. on Comp Vis Pattern Recognition. 2009, IEEE.
107. Antal B, Hajdu A: Improving microaneurysm detection using an optimally selected subset of candidate extractors and preprocessing methods. Pattern Recognition. 2012, 45: 264-270.
108. Yu J, Abidi SSR, Artes PH, McIntyre A, Heywood M: Automated optic nerve analysis for diagnostic support in glaucoma. Proceedings of IEEE Symposium on Computer-Based Medical Systems. 2005, IEEE, 97-102.
109. Xu Y, Lin S, Wong DWK, Liu J, Xu D: Efficient reconstruction-based optic cup localization for glaucoma screening. Proceedings of Int Conf MICCAI. 2013, Heidelberg: Springer Berlin, 445-452.
110. Muramatsu C, Nakagawa T, Sawada A, Hatanaka Y, Hara T, Yamamoto T, Fujita H: Determination of cup and disc ratio of optical nerve head for diagnosis of glaucoma on stereo retinal fundus image pairs. Proceedings of SPIE Conference on Medical Imaging. 2009, Bellingham: International Society for Optics and Photonics, 603-610.
111. Hijazi MHA, Coenen F, Zheng Y: Retinal image classification using a histogram based approach. IEEE International Joint Conference on Neural Networks. 2010, IEEE, 3501-3507.
112. Ahmad HMH, Frans C, Yalin Z: Retinal image classification for the screening of age-related macular degeneration. Research and Development in Intelligent Systems XXVII. 2011, London: Springer.
113. Zuiderveld K: Contrast limited adaptive histogram equalization. Graphics gems IV. 1994, San Diego, CA, USA: Academic Press Professional, Inc., 474-485.
114. Lay B, Baudoin C, Klein JC: Automatic detection of microaneurysms in retinopathy fluoro-angiogram. Proceedings of 27th Annual Technical Symposium. 1984, Bellingham: International Society for Optics and Photonics, 165-173.
115. Cree MJ, Olson JA, McHardy KC, Sharp PF, Forrester JV: A fully automated comparative microaneurysm digital detection system. Eye. 1997, 11 (5): 622-628.
116. Cheng J, Liu J, Tao D, Yin F, Wong D, Wong TY: Superpixel classification based optic cup segmentation. Proceedings of Int Conf MICCAI. 2013, Heidelberg: Springer Berlin, 421-428.
117. Cheng J, Liu J, Xu Y, Yin F, Wong DWK, Tan NM, Tao D, Cheng CY, Aung T, Wong TY: Superpixel classification based optic disc and optic cup segmentation for glaucoma screening. IEEE Trans Med Imaging. 2013, 32 (6): 1019-1032.
118. Xu Y, Xu D, Lin S, Liu J, Cheng J, Cheung CY, Aung T, Wong TY: Sliding window and regression based cup detection in digital fundus images for glaucoma diagnosis. Proceedings Int Conf MICCAI. 2011, Heidelberg: Springer Berlin, 1-8.
119. Ahmad HMH, Chuntao J, Frans C, Yalin Z: Image classification for age-related macular degeneration screening using hierarchical image decompositions and graph mining. Lect Notes Comput Sci. 2011, 6912: 65-80.
120. Zheng Y, Hijazi MHA, Coenen F: Automated "disease/no disease" grading of age-related macular degeneration by an image mining approach. Invest Ophthalmol Vis Sci. 2012, 53 (13): 8310-8318.
121. Priya R, Aruna P: Automated diagnosis of Age-related macular degeneration from color retinal fundus images. Proceedings of the 3rd International Conference on Electronics Computer Technology. 2011, IEEE, 227-230.
122. Walter T, Klein JC: Automatic detection of microaneurysms in color fundus images of the human retina by means of the bounding box closing. Proceedings the Third International Symposium on Medical Data Analysis. 2002, Heidelberg: Springer Berlin, 210-220.
123. Niemeijer M, van Ginneken B, Staal J, Suttorp-Schulten MS, Abràmoff MD: Automatic detection of red lesions in digital color fundus photographs. IEEE Trans Med Imaging. 2005, 24 (5): 584-592.
124. Sinthanayothin C, Boyce J, Williamson T, Cook H, Mensah E, Lal S, Usher D: Automated detection of diabetic retinopathy on digital fundus images. Diabet Med. 2002, 19 (2): 105-112.
125. Wong D, Liu J, Lim JH, Li H, Jia X, Yin F, Wong TY: Automated detection of kinks from blood vessels for optic cup segmentation in retinal images. Proceedings of SPIE Conference on Medical Imaging. 2009, Bellingham: International Society for Optics and Photonics, 72603L1-8.
126. Joshi GD, Sivaswamy J, Karan K, Prashanth R, Krishnadas R: Vessel bend-based cup segmentation in retinal images. Proceedings of Int. Conf. Pattern Recognition, Istanbul, Turkey. 2010, IEEE, 2536-2539.
127. Chaum E, Karnowski TP, Govindasamy VP, Abdelrahman M, Tobin KW: Automated diagnosis of retinopathy by content-based image retrieval. Retina. 2008, 28 (10): 1463-1477.
128. Tobin KW, Abdelrahman M, Chaum E, Govindasamy VP, Karnowski TP: A probabilistic framework for content-based diagnosis of retinal disease. Conf Proceedings IEEE Eng Med Biol Soc, Lyon, France. 2007, IEEE, 6743-6746.
129. Gardner G, Keating D, Williamson T, Elliott A: Automatic detection of diabetic retinopathy using an artificial neural network: a screening tool. Br J Ophthalmol. 1996, 80 (11): 940-944.
130. Akram MU, Khalid S, Khan SA: Identification and classification of microaneurysms for early detection of diabetic retinopathy. Pattern Recognit. 2013, 46: 107-116.
131. Zhang B, Karray F, Zhang L, You J: Microaneurysm (MA) detection via sparse representation classifier with MA and Non-MA dictionary learning. Proceedings Int Conf Pattern Recognition: Istanbul, Turkey. 2010, IEEE, 277-280.
132. Xu J, Chutatape O, Sung E, Zheng C, Kuan P: Optic disc feature extraction via modified deformable model technique for glaucoma analysis. Pattern Recognit. 2007, 40 (7): 2063-2076.
133. Abramoff M, Alward W, Greenlee E, Shuba L, Kim C, Fingert J, Kwon Y: Automated segmentation of the optic disc from stereo color photographs using physiologically plausible features. Invest Ophthalmol Vis Sci. 2007, 48 (4): 1665.
134. Corona E, Mitra S, Wilson M, Krile T, Kwon YH, Soliz P: Digital stereo image analyzer for generating automated 3D measures of optic disc deformation in glaucoma. IEEE Trans Med Imaging. 2002, 21 (10): 1244-1253.
135. Niemeijer M, van Ginneken B, Russell SR, Suttorp-Schulten MS, Abramoff MD: Automated detection and differentiation of drusen, exudates, and cotton-wool spots in digital color fundus photographs for diabetic retinopathy diagnosis. Invest Ophthalmol Vis Sci. 2007, 48 (5): 2260-2267.
136. Mubbashar M, Usman A, Akram MU: Automated system for macula detection in digital retinal images. Proceedings of Int. Conf. on Information and Communication Technologies, Karachi, Pakistan. 2011, IEEE, 1-5.
137. Hunter A, Lowell JA, Ryder B, Basu A, Steel D: Automated diagnosis of referable maculopathy in diabetic retinopathy screening. Conf Proceedings of IEEE Eng Med Biol Soc, Boston. 2011, IEEE, 3375-3378.
138. Ram K, Joshi GD, Sivaswamy J: A successive clutter-rejection-based approach for early detection of diabetic retinopathy. IEEE Trans Biomed Eng. 2011, 58 (3): 664-673.
139. Zhang B, Wu X, You J, Li Q, Karray F: Detection of microaneurysms using multi-scale correlation coefficients. Pattern Recognit. 2010, 43 (6): 2237-2248.
140. Guesalaga A, Irarrázabal P, Guarini M, Álvarez R: Measurement of the glaucomatous cup using sequentially acquired stereoscopic images. Measurement. 2003, 34 (3): 207-213.
141. Merickel MB, Wu X, Sonka M, Abramoff M: Optimal segmentation of the optic nerve head from stereo retinal images. Proceedings of SPIE Conference on Medical Imaging, San Diego, California, United States. 2006, Bellingham: International Society for Optics and Photonics, 1031-1038.
142. Lu S, Liu J, Lim JH, Zhang Z, Meng TN, Wong WK, Li H, Wong TY: Automatic fundus image classification for computer-aided diagnosis. Conf Proceedings of IEEE Eng Med Biol Soc, Minnesota, USA. 2009, IEEE, 1453-1456.
143. Cheng X, Wong DWK, Liu J, Lee BH, Tan NM, Zhang J, Cheng CY, Cheung G, Wong TY: Automatic localization of retinal landmarks. Conf Proceedings of IEEE Eng Med Biol Soc, San Diego, California, USA. 2012, IEEE, 4954-4957.
144. Lee N, Wielaard J, Fawzi A, Sajda P, Laine A, Martin G, Humayun M, Smith R: In vivo snapshot hyperspectral image analysis of age-related macular degeneration. Conf Proceedings of IEEE Eng Med Biol Soc, Istanbul, Turkey. 2010, IEEE, 5363-5366.
145. Jaafar HF, Nandi AK, Al-Nuaimy W: Automated detection of red lesions from digital colour fundus photographs. Conf Proceedings of IEEE Eng Med Biol Soc, Boston. 2011, IEEE, 584-592.
146. Agurto C, Murray V, Barriga E, Murillo S, Pattichis M, Davis H, Russell S, Abràmoff M, Soliz P: Multiscale AM-FM methods for diabetic retinopathy lesion detection. IEEE Trans Med Imaging. 2010, 29 (2): 502-512.
  147. Jain N, Farsiu S, Khanifar AA, Bearelly S, Smith RT, Izatt JA, Toth CA: Quantitative comparison of drusen segmented on SD-OCT versus drusen delineated on color fundus photographs . Invest Ophthalmol Vis Sci. 2010, 51 (10): 4875-4883.PubMedPubMed CentralGoogle Scholar
  148. Niemeijer M, van Ginneken B, Russell SR, Suttorp-Schulten MS, Abramoff MD: Automated detection and differentiation of drusen, exudates, and cotton-wool spots in digital color fundus photographs for diabetic retinopathy diagnosis . Invest Ophthalmol Vis Sci. 2007, 48 (5): 2260-2267.PubMedPubMed CentralGoogle Scholar
  149. Jelinek HF, Rocha A, Carvalho T, Goldenstein S, Wainer J: Machine learning and pattern classification in identification of indigenous retinal pathology . Conf Proceedings of IEEE Eng Med Biol Soc, Boston. 2011, IEEE, 5951-5954.Google Scholar
  150. Tang L, Niemeijer M, Reinhardt J, Garvin M, Abramoff M: Splat feature classification with application to retinal hemorrhage detection in fundus images . IEEE Trans Med Imaging. 2013, 32 (2): 364-375.PubMedGoogle Scholar
  151. Hani AFM, Ngah NF, George TM, Izhar LI, Nugroho H, Nugroho HA: Analysis of foveal avascular zone in colour fundus images for grading of diabetic retinopathy severity . Conf Proceedings of IEEE Eng Med Biol Soc, Buenos Aires. 2010, IEEE, 5632-5635.Google Scholar
  152. Oloumi F, Rangayyan RM, Ells AL: Computer-aided diagnosis of proliferative diabetic retinopathy . Conf Proceedings IEEE Eng Med Biol Soc, San Diego, California, USA. 2012, IEEE, 1438-1441.Google Scholar
  153. Niemeijer M, Abramoff MD, van Ginneken B: Information fusion for diabetic retinopathy CAD in digital color fundus photographs . IEEE Trans Med Imaging. 2009, 28 (5): 775-785.PubMedGoogle Scholar
  154. Rocha A, Carvalho T, Jelinek HF, Goldenstein S, Wainer J: Points of interest and visual dictionaries for automatic retinal lesion detection . IEEE Trans Biomed Eng. 2012, 59 (8): 2244-2253.PubMedGoogle Scholar
  155. Abràmoff MD, Folk JC, Han DP, Walker JD, Williams DF, Russell SR, Massin P, Cochener B, Gain P, Tang L, Lamard M, Moga DC, Quellec G, Niemeijer M: Automated analysis of retinal images for detection of referable diabetic retinopathy. JAMA Ophthalmology. 2013, 131 (3): 351-357.
  156. Murray V, Agurto C, Barriga S, Pattichis MS, Soliz P: Real-time diabetic retinopathy patient screening using multiscale AM-FM methods. Proceedings of IEEE Int Conf Image Processing, Orlando, Florida, USA. 2012, IEEE, 525-528.
  157. Agurto C, Barriga ES, Murray V, Nemeth S, Crammer R, Bauman W, Zamora G, Pattichis MS, Soliz P: Automatic detection of diabetic retinopathy and age-related macular degeneration in digital fundus images. Invest Ophthalmol Vis Sci. 2011, 52 (8): 5862-5871.
  158. Quellec G, Lamard M, Josselin PM, Cazuguel G, Cochener B, Roux C: Optimal wavelet transform for the detection of microaneurysms in retina photographs. IEEE Trans Med Imaging. 2008, 27 (9): 1230-1241.
  159. Burgansky-Eliash Z, Wollstein G, Bilonick R, Ishikawa H, Kagemann L, Schuman J: Glaucoma detection with the Heidelberg Retina Tomograph 3. Ophthalmology. 2007, 114 (3): 466-471.
  160. Chauhan B, Blanchard J, Hamilton D, LeBlanc R: Technique for detecting serial topographic changes in the optic disc and peripapillary retina using scanning laser tomography. Invest Ophthalmol Vis Sci. 2000, 41 (3): 775-782.
  161. Miglior S, Guareschi M, Albe E, Gomarasca S, Vavassori M, Orzalesi N: Detection of glaucomatous visual field changes using the Moorfields regression analysis of the Heidelberg retina tomograph. Am J Ophthalmol. 2003, 136: 26-33.
  162. Wollstein G, Garway-Heath D, Hitchings R: Identification of early glaucoma cases with the scanning laser ophthalmoscope. Ophthalmology. 1998, 105 (8): 1557-1563.
  163. Huang D, Swanson EA, Lin CP, Schuman JS, Stinson WG, Chang W, Hee MR, Flotte T, Gregory K, Puliafito CA: Optical coherence tomography. Science. 1991, 254 (5035): 1178-1181.
  164. Hee MR, Baumal CR, Puliafito CA, Duker JS, Reichel E, Wilkins JR, Coker JG, Schuman JS, Swanson EA, Fujimoto JG: Optical coherence tomography of age-related macular degeneration and choroidal neovascularization. Ophthalmology. 1996, 103 (8): 1260-
  165. Fujimoto JG: Optical coherence tomography for ultrahigh resolution in vivo imaging. Nat Biotechnol. 2003, 21 (11): 1361-1367.
  166. Pardianto G: Understanding diabetic retinopathy. Mimbar Ilmiah Oftalmologi Indonesia. 2005, 2: 65-66.
  167. Jelinek HJ, Cree MJ, Worsley D, Luckie A, Nixon P: An automated microaneurysm detector as a tool for identification of diabetic retinopathy in rural optometric practice. Clin Exp Optom. 2006, 89 (5): 299-305.
  168. Michelson G, Wärntges S, Hornegger J, Lausen B: The papilla as screening parameter for early diagnosis of glaucoma. Deutsches Aerzteblatt Int. 2008, 105: 34-35.
  169. Mookiah M, Acharya U, Lim CM, Petznick A, Suri JS: Data mining technique for automated diagnosis of glaucoma using higher order spectra and wavelet energy features. Knowledge-Based Syst. 2012, 33: 73-82.
  170. Damms T, Dannheim F: Sensitivity and specificity of optic disc parameters in chronic glaucoma. Invest Ophthalmol Vis Sci. 1993, 34 (7): 2246-2250.
  171. Hancox MD: Optic disc size, an important consideration in the glaucoma evaluation. Clin Eye Vis Care. 1999, 11 (2): 59-62.
  172. Mookiah M, Acharya U, Chua C, Min L, Ng E, Mushrif M, Laude A: Automated detection of optic disk in retinal fundus images using intuitionistic fuzzy histon segmentation. Proc Inst Mech Eng H. 2013, 227 (1): 37-49.
  173. Harizman N, Oliveira C, Chiang A, Tello C, Marmor M, Ritch R, Liebmann JM: The ISNT rule and differentiation of normal from glaucomatous eyes. Arch Ophthalmol. 2006, 124 (11): 1579-1583.
  174. Jonas J, Fernandez M, Naumann G: Glaucomatous parapapillary atrophy: occurrence and correlations. Arch Ophthalmol. 1992, 110: 214-222.
  175. Allingham R: Shields' Textbook of Glaucoma. 2004, Philadelphia, USA: Lippincott Williams & Wilkins
  176. Bressler NM, Bressler SB, Fine SL: Age-related macular degeneration. Surv Ophthalmol. 1988, 32 (6): 375-413.
  177. de Jong PT: Age-related macular degeneration. N Engl J Med. 2006, 355 (14): 1474-1485.
  178. Hijazi MHA, Coenen F, Zheng Y: Data mining techniques for the screening of age-related macular degeneration. Knowledge-Based Syst. 2012, 29: 83-92.
  179. Saw S, Katz J, Schein O, Chew S, Chan T: Epidemiology of myopia. Epidemiol Rev. 1996, 18 (2): 175-187.
  180. Young T, Ronan S, Alvear A, Wildenberg S, Oetting W, Atwood L, Wilkin D, King R: A second locus for familial high myopia maps to chromosome 12q. Am J Hum Genet. 1998, 63 (5): 1419-1424.
  181. Xu Y, Liu J, Zhang Z, Tan NM, Wong D, Saw SM, Wong TY: Learn to recognize pathological myopia in fundus images using bag-of-feature and sparse learning approach. Proceedings of IEEE 10th Int Symposium Biomedical Imaging, San Francisco, USA. 2013, IEEE, 888-891.
  182. Cheng J, Tao D, Liu J, Wong D, Tan N, Wong T, Saw S: Peripapillary atrophy detection by sparse biologically inspired feature manifold. IEEE Trans Med Imaging. 2012, 31 (12): 2355-2365.
  183. Lim L, Cheung G, Lee S: Comparison of spectral domain and swept-source optical coherence tomography in pathological myopia. Eye (Lond). 2014, 28 (4): 488-491.
  184. Allen D, Vasavada A: Cataract and surgery for cataract. BMJ. 2006, 333 (7559): 128-132.
  185. Varma R, Steinmann W, Spaeth G, Wilson R: Variability in digital analysis of optic disc topography. Graefes Arch Clin Exp Ophthalmol. 1988, 226 (5): 435-442.
  186. Jonas J, Martus P, Budde W, Hayler J: Morphologic predictive factors for development of optic disc hemorrhages in glaucoma. Invest Ophthalmol Vis Sci. 2002, 43 (9): 2956-2961.
  187. Lowell J, Hunter A, Steel D, Basu A, Ryder R, Fletcher R, Kennedy L: Optic nerve head segmentation. IEEE Trans Med Imaging. 2004, 23 (2): 256-264.
  188. Wong D, Liu J, Tan N, Yin F, Wong T: Automatic detection of the optic cup using vessel kinking in digital retinal fundus images. Proceedings of IEEE Int Symposium Biomedical Imaging, Barcelona, Spain. 2012, IEEE, 1647-1650.
  189. Joshi G, Sivaswamy J, Krishnadas S: Depth discontinuity-based cup segmentation from multiview color retinal image. IEEE Trans Biomed Eng. 2012, 59 (6): 1523-1531.
  190. Cheng J, Liu J, Wong DWK, Tan NM, Cheung C, Baskaran M, Wong TY, Saw SM: Peripapillary atrophy detection by biologically inspired feature. Proceedings of Int Conf Pattern Recognition, Tsukuba, Japan. 2012, IEEE, 2063-2066.
  191. Wang Y, Shen J, Liao W, Zhou L: Automatic fundus images mosaic based on SIFT feature. Proceedings of the 3rd International Congress on Image and Signal Processing, Yantai, China. 2010, IEEE, 2747-2751.
  192. Lowe D: Distinctive image features from scale-invariant keypoints. Int J Comput Vis. 2004, 60 (2): 91-110.
  193. Ren X, Malik J: Learning a classification model for segmentation. Proceedings of Int Conf Computer Vision, Nice, France. 2003, IEEE, 10-17.
  194. Mori G, Ren X, Efros A, Malik J: Recovering human body configurations: combining segmentation and recognition. Proceedings of IEEE Conf Computer Vision and Pattern Recognition, Washington, USA. 2004, IEEE.
  195. Achanta R, Shaji A, Smith K, Lucchi A, Fua P, Süsstrunk S: SLIC superpixels. Technical Report 149300, EPFL. 2010.
  196. Caruana R: Multitask learning. Mach Learn. 1997, 28: 41-75.
  197. Thrun S: Is learning the n-th thing any easier than learning the first?. Adv Neural Inf Process Syst. 1996, 8: 640-646.
  198. Mihalkova L, Huynh T, Mooney R: Mapping and revising Markov logic networks for transfer learning. Proceedings of the 22nd AAAI Conference on Artificial Intelligence, Vancouver, Canada. 2007, California: AAAI, 608-614.
  199. Holtzman NA, Murphy PD, Watson MS, Barr PA: Predictive genetic testing: from basic research to clinical practice. Science. 1997, 278 (5338): 602-605.
  200. Sanfilippo P, Hewitt A, Hammond C, Mackey D: The heritability of ocular traits. Surv Ophthalmol. 2010, 55 (6): 561-583.
  201. Plomin R, DeFries J, McClearn G: Behavioral Genetics. 2001, New York: Worth Publishers
  202. Herskind AM, McGue M, Holm NV, Sörensen TI, Harvald B, Vaupel JW: The heritability of human longevity: a population-based study of 2872 Danish twin pairs born 1870-1900. Hum Genet. 1996, 97 (3): 319-323.
  203. Karasik D, Demissie S, Cupples LA, Kiel DP: Disentangling the genetic determinants of human aging: biological age as an alternative to the use of survival measures. J Gerontol A Biol Sci Med Sci. 2005, 60 (5): 574-587.
  204. Schnoll R, Johnson T, Lerman C: Genetics and smoking behavior. Curr Psychiatry Rep. 2007, 9 (5): 349-357.
  205. Vink J, Willemsen G, Boomsma D: Heritability of smoking initiation and nicotine dependence. Behav Genet. 2005, 35 (4): 397-406.
  206. Jablonski W: A contribution to the heredity of refraction in human eyes. Arch Augenheilk. 1922, 91: 308-328.
  207. Fajnkuchen F, Cohen S: Update on the genetics of age-related macular degeneration. J Fr Ophtalmol. 2008, 31 (6 Pt 1): 630-637.
  208. Antoniak K, Bienias W, Nowak J: Age-related macular degeneration - a complex genetic disease. Klin Oczna. 2008, 110 (4-6): 211-218.
  209. Scholl H, Fleckenstein M, Charbel Issa P, Keilhauer C, Holz F, Weber B: An update on the genetics of age-related macular degeneration. Mol Vis. 2007, 13: 196-205.
  210. Seddon J, Cote J, Page W, Aggen S, Neale M: The US twin study of age-related macular degeneration: relative roles of genetic and environmental influences. Arch Ophthalmol. 2005, 123 (3): 321-327.
  211. Hammond C, Webster A, Snieder H, Bird A, Gilbert C, Spector T: Genetic influence on early age-related maculopathy: a twin study. Ophthalmology. 2002, 109 (4): 730-736.
  212. Munch IC, Sander B, Kessel L, Hougaard JL, Taarnhoj NCBB, Sorensen TI, Kyvik KO, Larsen M: Heredity of small hard drusen in twins aged 20-46 years. Invest Ophthalmol Vis Sci. 2007, 48 (2): 833-838.
  213. Toh T, Liew S, MacKinnon J, Hewitt A, Poulsen J, Spector T, Gilbert C, Craig J, Hammond C, Mackey D: Central corneal thickness is highly heritable: the twin eye studies. Invest Ophthalmol Vis Sci. 2005, 46 (10): 3718-3722.
  214. Charlesworth J, Kramer P, Dyer T, Diego V, Samples J, Craig J, Mackey D, Hewitt A, Blangero J, Wirtz M: The path to open-angle glaucoma gene discovery: endophenotypic status of intraocular pressure, cup-to-disc ratio, and central corneal thickness. Invest Ophthalmol Vis Sci. 2010, 51 (7): 3509-3514.
  215. Klein B, Klein R, Lee K: Heritability of risk factors for primary open-angle glaucoma: the Beaver Dam Eye Study. Invest Ophthalmol Vis Sci. 2004, 45 (1): 59-62.
  216. Dirani M, Islam A, Shekar S, Baird P: Dominant genetic effects on corneal astigmatism: the Genes in Myopia (GEM) twin study. Invest Ophthalmol Vis Sci. 2008, 49 (4): 1339-1344.
  217. Congdon N, Broman K, Lai H, Munoz B, Bowie H, Gilbert D, Wojciechowski R, West S: Cortical, but not posterior subcapsular, cataract shows significant familial aggregation in an older population after adjustment for possible shared environmental factors. Ophthalmology. 2005, 112: 73-77.
  218. Hammond C, Duncan D, Snieder H, de Lange M, West S, Spector T, Gilbert C: The heritability of age-related cortical cataract: the twin eye study. Invest Ophthalmol Vis Sci. 2001, 42 (3): 601-605.
  219. Teikari J: Genetic factors in open-angle (simple and capsular) glaucoma. A population-based twin study. Acta Ophthalmol. 1987, 65 (6): 715-720.
  220. Alsbirk P: Anterior chamber depth and primary angle-closure glaucoma. II. A genetic study. Acta Ophthalmol. 1975, 53 (3): 436-449.
  221. Tu Y, Yin Z, Pen H, Yuan C: Genetic heritability of a shallow anterior chamber in Chinese families with primary angle closure glaucoma. Ophthalmic Genet. 2008, 29 (4): 171-176.
  222. Teikari J, Koskenvuo M, Kaprio J, O'Donnell J: Study of gene-environment effects on development of hyperopia: a study of 191 adult twin pairs from the Finnish Twin Cohort Study. Acta Genet Med Gemellol. 1990, 39: 133-136.
  223. Lee M, Cho S, Kim H, Song Y, Lee K, Kim J, Kim D, Chung T, Kim Y, Seo J, Ham D, Sung J: Epidemiologic characteristics of intraocular pressure in the Korean and Mongolian populations: the Healthy Twin and the GENDISCAN study. Ophthalmology. 2012, 119 (3): 450-457.
  224. Forsman E, Cantor R, Lu A, Eriksson A, Fellman J, Järvelä I, Forsius H: Exfoliation syndrome: prevalence and inheritance in a subisolate of the Finnish population. Acta Ophthalmol Scand. 2007, 85 (5): 500-507.
  225. Carbonaro F, Andrew T, Mackey D, Young T, Spector T, Hammond C: Repeated measures of intraocular pressure result in higher heritability and greater power in genetic linkage studies. Invest Ophthalmol Vis Sci. 2009, 50 (11): 5115-5119.
  226. Heitmann M, Hamann H, Brahm R, Grussendorf H, Rosenhagen C, Distl O: Analysis of prevalence of presumed inherited eye diseases in Entlebucher Mountain Dogs. Vet Ophthalmol. 2005, 8 (3): 145-151.
  227. Hammond C, Snieder H, Spector T, Gilbert C: Genetic and environmental factors in age-related nuclear cataracts in monozygotic and dizygotic twins. N Engl J Med. 2000, 342 (24): 1786-1790.
  228. Lyhne N, Sjølie A, Kyvik K, Green A: The importance of genes and environment for ocular refraction and its determiners: a population based study among 20-45 year old twins. Br J Ophthalmol. 2001, 85 (12): 1470-1476.
  229. Tsai M, Lin L, Lee V, Chen C, Shih Y: Estimation of heritability in myopic twin studies. Jpn J Ophthalmol. 2009, 53 (6): 615-622.
  230. Gilmartin B: Myopia: precedents for research in the twenty-first century. Clin Experiment Ophthalmol. 2004, 32 (3): 305-324.
  231. Pulst SM: Genetic linkage analysis. Arch Neurol. 1999, 56 (6): 667-672.
  232. The Wellcome Trust Case Control Consortium: Genome-wide association study of 14,000 cases of seven common diseases and 3,000 shared controls. Nature. 2007, 447 (7145): 661-678.
  233. McKusick-Nathans Institute of Genetic Medicine, Johns Hopkins University (Baltimore, MD): Online Mendelian Inheritance in Man, OMIM. [http://omim.org/]
  234. Venter JC, Adams MD, Myers EW, Li PW, Mural RJ, Sutton GG, Smith HO, Yandell M, Evans CA, Holt RA, Gocayne JD, Amanatides P, Ballew RM, Huson DH, Wortman JR, Zhang Q, Kodira CD, Zheng XH, Chen L, Skupski M, Subramanian G, Thomas PD, Zhang J, Gabor Miklos GL, Nelson C, Broder S, Clark AG, Nadeau J, McKusick VA, Zinder N, et al: The sequence of the human genome. Science. 2001, 291 (5507): 1304-1351.
  235. National Human Genome Research Institute: Fact sheets: genome-wide association studies. 2013, [http://www.genome.gov/20019523]
  236. Klein R, Zeiss C, Chew EY, Tsai JY, Sackler RS, Haynes C: Complement factor H polymorphism in age-related macular degeneration. Science. 2005, 308 (5720): 385-389.
  237. Cooper JD, Smyth DJ, Smiles AM, Plagnol V, Walker NM, Allen JE, Downes K, Barrett JC, Healy BC, Mychaleckyj JC, Warram JH, Todd JA: Meta-analysis of genome-wide association study data identifies additional type 1 diabetes risk loci. Nat Genet. 2008, 40 (12): 1399-1401.
  238. Fung HC, Scholz S, Matarin M, Simon-Sanchez J, Hernandez D, Britton A, Gibbs JR, Langefeld C, Stiegert ML, Schymick J, Okun MS, Mandel RJ, Fernandez HH, Foote KD, Rodríguez RL, Peckham E, De Vrieze FW, Gwinn-Hardy K, Hardy JA, Singleton A: Genome-wide genotyping in Parkinson's disease and neurologically normal controls: first stage analysis and public release of data. Lancet Neurol. 2006, 5 (11): 911-916.
  239. Larson M, Atwood L, Benjamin E, Cupples LA, D'Agostino R, Fox C, Govindaraju D, Guo CY, Heard-Costa N, Hwang SJ, Murabito JM, Newton-Cheh C, O'Donnell CJ, Seshadri S, Vasan RS, Wang TJ, Wolf PA, Levy D: Framingham Heart Study 100K project: genome-wide associations for cardiovascular disease outcomes. BMC Med Genet. 2007, 8 (Suppl 1): S5.
  240. Scuteri A, Sanna S, Chen WM, Uda M, Albai G, Strait J, Najjar S, Nagaraja R, Orru M, Usala G, Dei M, Lai S, Maschio A, Busonero F, Mulas A, Ehret GB, Fink AA, Weder AB, Cooper RS, Galan P, Chakravarti A, Schlessinger D, Cao A, Lakatta E, Abecasis GR: Genome-wide association scan shows genetic variants in the FTO gene are associated with obesity-related traits. PLoS Genet. 2007, 3 (7): e115.
  241. Kooperberg C, LeBlanc M, Obenchain V: Risk prediction using genome-wide association studies. Genet Epidemiol. 2010, 34 (7): 643-652.
  242. Purcell S, Neale B, Todd-Brown K, Thomas L, Ferreira M, Bender D: PLINK: a tool set for whole-genome association and population-based linkage analyses. Am J Hum Genet. 2007, 81 (3): 559-575.
  243. Marchini J, Howie B: Genotype imputation for genome-wide association studies. Nat Rev Genet. 2010, 11 (7): 499-511.
  244. Wan X, Yang C, Yang Q, Yang H, Xue H, Fan X: BOOST: a fast approach to detecting gene-gene interactions in genome-wide case-control studies. Am J Hum Genet. 2010, 87 (3): 325-340.
  245. Zhang X, Huang S, Zou F, Wang W: TEAM: efficient two-locus epistasis tests in human genome-wide association study. Bioinformatics. 2010, 26 (12): i217-i227.
  246. Wu J, Devlin B, Ringquist S: Screen and clean: a tool for identifying interactions in genome-wide association studies. Genet Epidemiol. 2010, 34 (3): 275-285.
  247. Tibshirani R: Regression shrinkage and selection via the lasso. J R Statist Soc B. 1996, 58: 267-288.
  248. Wu T, Chen Y, Hastie T, Sobel E, Lange K: Genome-wide association analysis by lasso penalized logistic regression. Bioinformatics. 2009, 25 (6): 714-721.
  249. Wu A, Aporntewan C, Ballard D, Lee J, Lee J, Zhao H: Two-stage joint selection method to identify candidate markers from genome-wide association studies. BMC Proc. 2009, 3 (7): S29.
  250. Hoggart C, Whittaker J, De Iorio M, Balding D: Simultaneous analysis of all SNPs in genome-wide and re-sequencing association studies. PLoS Genet. 2008, 4 (7): e1000130.
  251. D'Angelo G, Rao D, Gu C: Combining least absolute shrinkage and selection operator (LASSO) and principal-components analysis for detection of gene-gene interactions in genome-wide association studies. BMC Proc. 2009, 3 (7): S62.
  252. Li C, Li M, Lange E, Watanabe R: Prioritized subset analysis: improving power in genome-wide association studies. Hum Hered. 2008, 65 (3): 129-141.
  253. Shortliffe EH: MYCIN: a knowledge-based computer program applied to infectious diseases. Proceedings of Annu Symp Comput Appl Med Care, Washington, USA. 1977, Bethesda, Maryland: American Medical Informatics Association, 66-69.
  254. Miller R, Pople HJ, Myers J: Internist-I, an experimental computer-based diagnostic consultant for general internal medicine. N Engl J Med. 1982, 307 (8): 468-476.
  255. Drent M, van Nierop MA, Gerritsen FA, Wouters EF, Mulder PG: A computer program using BALF-analysis results as a diagnostic tool in interstitial lung diseases. Am J Respir Crit Care Med. 1996, 153 (2): 736-741.
  256. Raza S, Sharma Y, Chaudry Q, Young AN, Wang MD: Automated classification of renal cell carcinoma subtypes using scale invariant feature transform. Conf Proceedings of IEEE Eng Med Biol Soc, Minnesota, USA. 2009, IEEE, 6687-6690.
  257. Miller GA: The magical number seven plus or minus two: some limits on our capacity for processing information. Psychol Rev. 1956, 63 (2): 81-87.
  258. Guyatt G, Rennie D, Meade MO, Cook DJ: Users' guides to the medical literature: evidence-based medicine. JAMA. 2000, 284 (10): 1290-1296.
  259. Haug P, Clayton PD, Shelton P, Rich T, Tocino I, Frederick PR, Crapo RO, Morrison WJ, Warner HR: Revision of diagnostic logic using a clinical database. Med Decis Making. 1989, 9 (2): 84-90.
  260. Wagholikar KB, Sundararajan V, Deshpande AW: Modeling paradigms for medical diagnostic decision support: a survey and future directions. J Med Syst. 2012, 36 (5): 3029-3049.
  261. Aronsky D, Chan J, Haug PJ: Evaluation of a computerized diagnostic decision support system for patients with pneumonia: study design considerations. J Am Med Inform Assoc. 2001, 8 (5): 473-485.
  262. Mustacchi G, Sormani M, Bruzzi P, Gennari A, Zanconati F, Bonifacio D, Monzoni A, Morandi L: Identification and validation of a new set of five genes for prediction of risk in early breast cancer. Int J Mol Sci. 2013, 14 (5): 9686-9702.
  263. Zhang Z, Yin FS, Liu J, Wong WK, Tan NM, Lee BH, Cheng J, Wong TY: ORIGA-light: an online retinal fundus image database for glaucoma analysis and research. Conf Proceedings of IEEE Eng Med Biol Soc, Istanbul, Turkey. 2010, IEEE.
  264. Lauterwald F, Neumann CP, Lenz R, Junemann AG, Mardin CY, Meyer-Wegener K, Horn FK: The Erlangen Glaucoma Registry: a scientific database for longitudinal analysis of glaucoma. Technical Report CS-2011-02. 2011, University of Erlangen, Dept. of Computer Science.
  265. Budai A, Odstrcilik J, Kolar R, Jan J, Kubena T, Michelson G: A public database for the evaluation of fundus image segmentation algorithms. Proceedings of the Association for Research in Vision and Ophthalmology (ARVO) Annual Meeting, Vancouver, Canada. 2011, Rockville, Maryland: ARVO, 1345.
  266. Hofman A, van Duijn CM, Franco OH, Ikram MA, Janssen HL, Klaver CC, Kuipers EJ, Nijsten TE, Stricker BH, Tiemeier H, Uitterlinden AG, Vernooij MW, Witteman JC: The Rotterdam Study: 2012 objectives and design update. Eur J Epidemiol. 2011, 26 (8): 657-686.
  267. Kauppi T, Kalesnykiene V, Kamarainen JK, Lensu L, Sorri I, Uusitalo H, Kalviainen H, Pietila J: DIARETDB0: evaluation database and methodology for diabetic retinopathy algorithms. Machine Vision and Pattern Recognition Research Group, Lappeenranta University of Technology, Finland. 2006.
  268. Kauppi T, Kalesnykiene V, Kamarainen JK, Lensu L, Sorri I, Raninen A, Voutilainen R, Uusitalo H, Kalviainen H, Pietila J: The DIARETDB1 diabetic retinopathy database and evaluation protocol. Proceedings of the British Machine Vision Conference, Warwick, UK. 2007, Durham: BMVA, 15-1.
  269. Niemeijer M, van Ginneken B, Cree MJ, Mizutani A, Quellec G, Sanchez CI, Zhang B, Hornero R, Lamard M, Muramatsu C, Wu X, Cazuguel G, You J, Mayo A, Li Q, Hatanaka Y, Cochener B, Roux C, Karray F, Garcia M, Fujita H, Abramoff MD: Retinopathy online challenge: automatic detection of microaneurysms in digital color fundus photographs. IEEE Trans Med Imaging. 2010, 29 (1): 185-195.
  270. The MESSIDOR database. [http://messidor.crihan.fr/index-en.php], accessed 11 Sep 2013.
  271. Gangnon R, Lee L, Hubbard L, Klein B, Klein R, Ferris F, Bressler S, Milton RC, Davis M: The Age-Related Eye Disease Study severity scale for age-related macular degeneration: AREDS report No. 17. Arch Ophthalmol. 2005, 123 (11): 1484-
  272. Wong DW, Liu J, Cheng X, Zhang J, Yin F, Bhargava M, Cheung GC, Wong TY: THALIA - an automatic hierarchical analysis system to detect drusen lesion images for AMD assessment. Proceedings of IEEE Int Symposium Biomedical Imaging, San Francisco, USA. 2013, IEEE, 884-887.
  273. CAPT Study Group: The Complications of Age-related Macular Degeneration Prevention Trial (CAPT): rationale, design and methodology. Clin Trials. 2004, 1: 91-107.
  274. Kanthan GL, Mitchell P, Rochtchina E, Cumming RG, Wang JJ: Myopia and the long-term incidence of cataract and cataract surgery: the Blue Mountains Eye Study. Clin Exp Ophthalmol. 2013, 42 (4): 347-353.
  275. Zhao L, Wang Y, Chen CX, Xu L, Jonas JB: Retinal nerve fibre layer thickness measured by Spectralis spectral-domain optical coherence tomography: the Beijing Eye Study. Acta Ophthalmologica. 2013.
  276. Asakuma T, Yasuda M, Ninomiya T, Noda Y, Arakawa S, Hashimoto S, Ohno-Matsui K, Kiyohara Y, Ishibashi T: Prevalence and risk factors for myopic retinopathy in a Japanese population: the Hisayama Study. Ophthalmology. 2012, 119 (9): 1760-1765.
  277. Chen SJ, Cheng CY, Li AF, Peng KL, Chou P, Chiou SH, Hsu WM: Prevalence and associated risk factors of myopic maculopathy in elderly Chinese: the Shihpai Eye Study. Invest Ophthalmol Vis Sci. 2012, 53 (8): 4868-4873.
  278. Noronha K, Acharya UR, Nayak KP, Kamath S, Bhandary SV: Decision support system for diabetic retinopathy using discrete wavelet transform. Proc Inst Mech Eng H. 2013, 227 (3): 251-261.
  279. Larsen N, Godt J, Grunkin M, Lund-Andersen H, Larsen M: Automated detection of diabetic retinopathy in a fundus photographic screening population. Invest Ophthalmol Vis Sci. 2003, 44 (2): 767-771.
  280. Hansen AB, Hartvig NV, Jensen MS, Borch-Johnsen K, Lund-Andersen H, Larsen M: Diabetic retinopathy screening using digital non-mydriatic fundus photography and automated image analysis. Acta Ophthalmologica Scandinavica. 2004, 82 (6): 666-672.
  281. Sharma A, Sobti A, Wadhwani M, Panda A: Evaluation of retinal nerve fiber layer using scanning laser polarimetry. J Curr Glaucoma Pract. 2010, 4 (3): 240-251.
  282. Potsaid B, Baumann B, Huang D, Barry S, Cable AE, Schuman JS, Duker JS, Fujimoto JG: Ultrahigh speed 1050 nm swept source/Fourier domain OCT retinal and anterior segment imaging at 100,000 to 400,000 axial scans per second. Optics Express. 2010, 18 (19): 20029-20048.
  283. Asbell P, Dualan I, Mindel J, Brocks D, Ahmad M, Epstein S: Age-related cataract. The Lancet. 2005, 365 (9459): 599-609.
  284. Xu Y, Gao X, Wong DW, Liu J, Xu D, Cheng CY, Cheung CYL, Wong TY: Automatic grading of nuclear cataracts from slit-lamp lens images using group sparsity regression. Proceedings of Int Conf MICCAI, Nagoya, Japan. 2013, Berlin Heidelberg: Springer, 468-475.
  285. Li H, Lim JH, Liu J, Wong DWK, Foo Y, Sun Y, Wong TY: Automatic detection of posterior subcapsular cataract opacity for cataract screening. Conf Proceedings of IEEE Eng Med Biol Soc, Istanbul, Turkey. 2010, IEEE, 5359-5362.
  286. McCally RL, Hochheimer BF, Chamon W, Azar DT: Simple device for objective measurements of haze following excimer laser ablation of cornea. OE/LASE'93: Optics, Electro-Optics, & Laser Applications in Science & Engineering, Los Angeles, USA. 1993, Bellingham: International Society for Optics and Photonics, 20-25.
  287. McCally RL, Connolly PJ, Jain S, Azar DT: Objective measurements of haze following phototherapeutic excimer laser ablation of cornea. OE/LASE'94: Optics, Electro-Optics, & Laser Applications in Science & Engineering, Los Angeles, USA. 1994, Bellingham: International Society for Optics and Photonics, 161-165.
  288. Taboada J, Gaines D, Perez MA, Waller SG, Ivan DJ, Baldwin JB, LoRusso F, Tutt RC, Perez J, Tredici T, Johnson DA: Post-PRK corneal scatter measurements with a scanning confocal slit photon counter. BiOS 2000 The International Symposium on Biomedical Optics, San Jose, USA. 2000, Bellingham: International Society for Optics and Photonics, 50-59.
  289. Taboada J, Gaines D, Perez MA, Waller SG, Ivan DJ, Baldwin JB, LoRusso F, Tutt RC, Thompson B, Perez J, Tredici T, Johnson DA: Scanning confocal slit photon counter measurements of post-PRK haze in two-year study. BiOS 2001 The International Symposium on Biomedical Optics. 2001, Bellingham: International Society for Optics and Photonics, 7-17.
  290. Acharya U, Kannathal N, Ng E, Min L, Suri J: Computer-based classification of eye diseases. Conf Proceedings of IEEE Eng Med Biol Soc, New York, USA. 2006, IEEE, 6121-6124.
  291. Acharya UR, Wong L, Ng E, Suri J: Automatic identification of anterior segment eye abnormality. IRBM. 2007, 28: 35-41.
Pre-publication history

The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6947/14/80/prepub

Copyright

© Zhang et al.; licensee BioMed Central Ltd. 2014

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.