
An early detection and segmentation of Brain Tumor using Deep Neural Network

Abstract

Background

Magnetic resonance image (MRI) brain Tumor segmentation is crucial in the medical field: it can help in diagnosis and prognosis, overall growth prediction, Tumor density measurement, and the care plans needed for patients. The difficulty in segmenting brain Tumors stems primarily from the wide range of their structures, shapes, frequencies, positions, and visual appearance, including variation in intensity and contrast. With recent advancements in Deep Neural Networks (DNN) for image classification tasks, intelligent medical image segmentation is an exciting direction for brain Tumor research. DNNs require substantial time and processing capability to train because of the gradient diffusion difficulty and their complexity.

Methods

To overcome the gradient issue of DNNs, this research work provides an efficient method for brain Tumor segmentation based on the Improved Residual Network (ResNet). The existing ResNet can be improved by maintaining the details of all the available connection links or by improving the projection shortcuts. These details are fed to later phases, through which the improved ResNet achieves higher precision and a faster learning process.

Results

The proposed improved ResNet addresses all three main components of the existing ResNet: the flow of information through the network layers, the residual building block, and the projection shortcut. This approach minimizes computational costs and speeds up the process.

Conclusion

An experimental analysis of the BraTS 2020 MRI sample data reveals that the proposed methodology outperforms traditional methods such as CNN and the Fully Convolutional Neural Network (FCN), with more than 10% improvement in accuracy, recall, and F-measure.


Introduction

Brain Tumor segmentation and detection are very challenging tasks in medical imaging. Various DNN methods employing multiple deep-learning network architectures are used for Tumor segmentation. The processing of medical images plays a crucial role in assisting humans in identifying different diseases [1]. Classification of brain Tumors is a significant task that depends on the expertise and knowledge of the physician, so an intelligent system for detecting and classifying brain Tumors is essential to support physicians. Gliomas, which have irregular shapes and ambiguous boundaries, are the most challenging Tumors to detect. Various authors have conducted further research on deep learning networks in healthcare, e.g., Convolutional Neural Networks (CNNs), LinkNet, Visual Geometry Group (VGG) networks, UNet, and SegNet [2].

Image segmentation poses significant challenges, spanning categorization, image processing, object recognition, and explanation. Whenever an image classification model is built, for example, it must be able to operate with high precision even under occlusion, lighting changes, varying viewing angles, and other factors [3].

The conventional object detection process, with its hand-crafted feature extraction step, is unsuitable for feature-rich domains. Often, even domain experts cannot specify a single feature, or set of features, capable of achieving accurate results under varying conditions. The concept of model training emerged from precisely this kind of problem: the appropriate features for working with image data are discovered automatically [4].

Content-based image retrieval (CBIR) spans various imaging modalities, such as CT, MR, PET, X-ray, and ultrasound. The large volume of image data arising from different scan parameter settings and multiple views of the same pathology also makes image retrieval in the medical domain tough and challenging; at the same time, it is one of the essential applications [5]. MR images are taken from three different directions, called the sagittal, axial, and coronal views [6]. For CBIR to be used in healthcare as a diagnostic aid, the medical information framework must be robust across various scenarios to be accepted by clinicians and medical practitioners [7].

First, case-based reasoning will be more acceptable to the medical community when the retrieval engine returns cases with matching locations and similar pathology in response to a query (new) case [8].

This gives the medical expert more information about the case and aids in monitoring. Secondly, the database formed for testing purposes should be carefully built to include cases from multiple views, with different scanning parameters, acquired from different imaging modalities. CNNs have been used to segment Tumors in multi-modal imaging [8].

The CNN architecture is sophisticated, combining segmentation and classification into a single pipeline. Current segmentation methods address the redundancy issue of CNNs by assigning a target class to each pixel; in this way, a CNN model is transformed into an FCN (fully convolutional network). This article makes the following key contributions to brain Tumor research:

  • This research develops a ResNet model to address the weaknesses of the CNN and FCN methodologies and to reduce computational costs. The principle of ResNet is premised on adding a layer’s output to its original input.

  • The simple transformation used in the Enhanced ResNet mainly improves the training process of convolutional models by utilizing “shortcut links.” These links gather all the possible route details in a single place, making them accessible in a single step and reducing access time.

The complete research article is organized as follows: Section 1 covers the introduction, Section 2 covers existing Tumor segmentation work related to this research, Section 3 covers materials and methods, Section 4 covers results, Section 5 covers the discussion, and Section 6 covers the conclusion and future directions of the research.

Related works

The field of Tumor segmentation is under continuous investigation. Deep learning has recently proven effective in healthcare image segmentation and information extraction, where pixel-based classification is the latest phenomenon. Various researchers have suggested different methods for brain Tumor segmentation. This section analyzes several key studies.

Research [9] presents brain Tumor segmentation using DNNs. Brain Tumors are segmented in magnetic resonance images of the brain using a deep convolutional encoder model. This approach enhances learning by extracting attributes from complete images, eliminating patch-wise selection, and improving calculations at adjacent intersections. Research [10] presented a technique for the early detection of brain cancers, in which magnetic resonance images were examined to identify Tumor-bearing areas and categorize them into various classes. Deep learning performs efficiently in image classification tasks.

Consequently, the fully convolutional network technique was applied and implemented with the TensorFlow library throughout that research. The newer CNN technique demonstrated a precision of 91 percent, better than previous research.

Research [11] developed a model utilizing brain imaging to recognize the nature of brain Tumors. A two-dimensional CNN recognized malignant Tumors with an accuracy rate of 93 percent. The research’s analysis includes data for the four most often detected brain Tumors.

Research [12] proposed a responsive and efficient Tumor segmentation framework. Within a cascaded classification model, this strategy reduces computation time and addresses the problem of overfitting. This CNN architecture extracts global and regional characteristics using two separate paths. Additionally, Tumor detection precision is significantly enhanced compared to current algorithms: the proposed approach achieved average Dice scores of 92.3%, 94.5%, and 93.2% for the whole Tumor, enhancing Tumor, and Tumor core, respectively.

Research [13] developed a model to evaluate Tumors using an MRI dataset. It entails finding the cancer, grading it by size and type, and determining the Tumor’s position. Instead of using a separate approach for each classification task, this strategy used a single model to handle multiple classification tasks on MRI images.

Research [14] addressed brain Tumor identification and separation by integrating two training methods. The first proposed approach was a local binary pattern method based on the neighbor distance relation, termed ‘nLBP’. The second strategy was based on the angle of adjacent neighbors, called “αLBP.” The two techniques were developed to process and analyze MRI images of the most prevalent cancers: glioblastoma, malignant Tumors, and gland Tumors. For feature extraction, statistics of the pre-processed images were employed. Conventional feature-extraction strategies scored worse than this proposed model.

Research [15] approached brain Tumor partitioning by integrating a Regularized Extreme Learning Machine (RELM). The procedure initially normalized images to make them easier for the framework to learn, utilizing a min-max strategy in the pre-processing phase. This min-max processing significantly improved the brightness of the original images.

Research [16] similarly improved a convolutional neural network for brain Tumor segmentation by combining it with a non-quantifiable local texture feature.

Research [17] proposed a multilayer perceptron (MLP) segmentation approach built on an improved Whale Optimization Algorithm (WOA). For improved feature extraction and partitioning, the hybrid algorithm produced an updated form of WOA. Mean filtering was first used to remove noise from the data. The improved WOA was then used to pick characteristics from the retrieved features. The MLP-IWOA-based classifier was used to classify Tumors and outperformed several current approaches.

Research [18] combined significant statistical attributes with CNN architectures to create a technique for the segmentation of brain cancer cells. The architecture concentrated on the Tumor’s boundary. Two-dimensional wavelet decomposition, Gabor filters, and similarity measures were used to identify and extract image features. A significant feature set for further categorization was developed by combining these statistical properties.

Research [19] observed that cancer is among the most severe diseases and is considered challenging to treat. A pancreatic malignancy develops behind the lower section of the abdomen, in the pancreatic cells that aid digestion. Its stage of growth determines the therapy for this Tumor. The Tumor is detected by identifying the afflicted region in CT scan data; the Tumor region of interest is predicted using a Gaussian mixture framework with the expectation-maximization method and a CNN [20].

Materials & Methods

This section covers the essential methods used in this research and the working of the proposed improved ResNet method.

Convolution Neural Network

CNN is mainly a deep learning approach used to classify images. A CNN is an artificial neural network designed to analyze input in grid form. In a CNN, convolution is the operation inside the convolution layer: a mathematical matrix operation that multiplies the filter matrix with the image region being analyzed. This convolution operation is the first and most significant processing phase [21].

Figure 1 shows the architecture of a CNN with three layer types: convolutional, pooling, and fully connected. The pooling layer takes the maximum or averaged values of pixel regions of the image. A CNN is capable of learning advanced features by creating feature maps.

Fig. 1 Architecture of Convolution Neural Network (CNN)

Each convolution layer constructs many feature maps as its kernels slide across the input. The characteristics recognized in the input are presented on these feature maps. The maps are sent to the max-pooling layer, which keeps the most important features while discarding the rest. Inside each fully connected layer, the features from the max-pooling base layer are turned into a 1-D feature vector, which is employed to determine the output [22]. Image scaling is not possible in a traditional neural network model.

However, in a CNN model, the image can be scaled (that is, it can go from a 3-D input space to a 3-D output pattern). The CNN model comprises input layers, convolution layers, rectified linear unit (ReLU) layers, pooling layers, and fully connected layers. The provided input images are split into small sections inside the convolution operation. The ReLU layer performs element-by-element activation. The pooling layer is optional and may be used or skipped.

When used, the pooling layer mainly performs down-sampling. The last stage (the fully connected layer) produces a category or class score between 0 and 1. The CNN-based brain Tumor segmentation training/testing rounds are categorized into two sections, and all images are classified into Tumor and non-Tumor categories [23].

Algorithm 1: CNN-based brain Tumor segmentation process.
Input: Brain Tumor image dataset.
Output: Images segmented into Tumor and non-Tumor classes.
Step 1: Apply a convolutional filter to the initial layer.
Step 2: Refine the convolutional filter to lower its sensitivity, called “sub-sampling.”
Step 3: Regulate all signal transmissions from one layer to the next through activation units.
Step 4: Use the rectified linear unit to shorten the training process.
Step 5: Link each neuron in the previous layer to every neuron in the subsequent layer.
Step 6: At the end of the learning process, apply a loss layer to provide feedback to the CNN architecture.
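As a minimal illustration of Algorithm 1, the sketch below assembles a small Keras CNN for binary Tumor/non-Tumor classification; the layer sizes, input shape, and optimizer are illustrative assumptions, not the exact configuration used in this study.

```python
# Minimal sketch of Algorithm 1 (illustrative layer sizes, not the paper's exact model).
from tensorflow.keras import layers, models

def build_cnn(input_shape=(240, 240, 1)):           # assumed single-channel MRI slices
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),   # Steps 1, 3-4: conv + ReLU
        layers.MaxPooling2D((2, 2)),                    # Step 2: sub-sampling
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),           # Step 5: fully connected
        layers.Dense(1, activation="sigmoid"),          # Tumor vs. non-Tumor score
    ])
    # Step 6: the loss provides feedback to the network during training
    model.compile(optimizer="sgd", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```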

Fully Convolutional Network (FCN)

In research [24], the FCN was suggested as a solution for semantic segmentation and classification. The researchers considered AlexNet, VGGNet, and GoogleNet as candidate backbones. They converted these classification networks into dense FCNs by replacing the fully connected layers with (1×1) convolutional layers and appending a (1×1) convolution with channel dimension 21 to predict scores for each class and context category. An FCN can learn to quickly build dense predictions for per-pixel tasks such as semantic segmentation [24].

Figure 2 shows the working of the FCN architecture for image segmentation. Each layer in an FCN is a 3-D array with a height, width, and depth. The first layer is the image itself, with all the pixels’ information across its height, width, and colour-space dimensions. Locations in higher layers correspond to the image regions they are path-connected to, called their receptive fields.

Fig. 2 FCN Architecture

The significant design choices in the FCN that contributed to its state-of-the-art outcomes are the VGG16 backbone, bilinear interpolation for up-sampling the resulting feature maps, and skip connections for incorporating low-layer as well as high-layer characteristics into the closing layer for fine-grained segmentation. An FCN uses only local data for segmentation.

However, relying only on neighborhood details makes segmentation ambiguous because the image’s global semantic scope is lost; relevant information from the entire image is beneficial for reducing this uncertainty. U-Net and V-Net are the most popular FCN architectures widely used in image segmentation [25, 26].
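To make the 1×1 “convolutionalization” concrete, the fragment below sketches how a classification backbone’s head can be converted into a dense per-pixel predictor; the class count and up-sampling factor are illustrative assumptions, not the original FCN configuration.

```python
# Sketch: converting a classifier head into an FCN head (illustrative configuration).
from tensorflow.keras import layers

NUM_CLASSES = 2  # assumption: Tumor vs. background

def fcn_head(backbone_features):
    # 1x1 convolutions replace the fully connected classifier layers
    x = layers.Conv2D(4096, (1, 1), activation="relu")(backbone_features)
    scores = layers.Conv2D(NUM_CLASSES, (1, 1))(x)             # per-class score map
    # Bilinear up-sampling back toward input resolution (factor 32 is an assumption)
    upsampled = layers.UpSampling2D((32, 32), interpolation="bilinear")(scores)
    return layers.Softmax(axis=-1)(upsampled)                  # per-pixel class probabilities
```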

Proposed model based on Residual Learning Network

This work draws on the freely available MRI brain Tumor datasets for medical image analysis and outlines the performance indicators for evaluating deep learning image segmentation models.

To address existing challenges, the proposed method employs an advanced pre-processing approach to eliminate much irrelevant data, yielding outcomes that improve on current convolutional neural networks.

The proposed strategy does not employ a complicated segmentation method to locate the brain Tumor and extract features, as such methods result in a time-consuming process with a high fault rate.

ResNet was adopted for the proposed work because it is free from the gradient issues that affect various deep learning models. The vanishing gradient problem occurs during the training of a CNN: as learning continues, the gradients of earlier layers shrink toward zero. A ResNet can be utilized to address this problem, since the output of a residual layer in a ResNet is combined with its direct input to form the input of the next layer [27,28,29]. Let H(RX) denote a residual mapping to establish a deep residual block, as shown in Fig. 3.

Fig. 3 ResNet working structure

$$\mathrm{H}(\mathrm{RX})=\mathrm{F}(\mathrm{RX})+\mathrm{RX}$$
(1)

Consider a CNN block with input RX whose main objective is to learn the accurate mapping H(RX). The difference between the output and the input is the “residual learning value (RL),” as described in Eq. 2.

$$\mathrm{RL}(\mathrm{RX})=\mathrm{H}(\mathrm{RX})-\mathrm{RX}$$
(2)

where H(RX) represents the actual outcome, RL represents the residual learning value, and RX represents the input. To overcome the gradient issue of DNNs, this research provides an efficient method for brain Tumor segmentation.
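A minimal Keras sketch of the residual mapping in Eqs. 1 and 2 follows; the filter count and the two-convolution form of F(RX) are illustrative assumptions.

```python
# Sketch of H(RX) = F(RX) + RX (Eqs. 1-2); filter sizes are illustrative.
from tensorflow.keras import layers

def residual_block(rx, filters=64):
    # F(RX): two stacked conv/BN/ReLU transformations
    f = layers.Conv2D(filters, (3, 3), padding="same")(rx)
    f = layers.BatchNormalization()(f)
    f = layers.ReLU()(f)
    f = layers.Conv2D(filters, (3, 3), padding="same")(f)
    f = layers.BatchNormalization()(f)
    # H(RX) = F(RX) + RX: the skip connection gives gradients a direct backward path
    h = layers.Add()([f, rx])   # assumes rx already has `filters` channels
    return layers.ReLU()(h)
```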

The Proposed Improved ResNet Model Working

The segmentation is based on the Improved Residual Learning Network (ResNet). The existing ResNet is improved by maintaining the details of all the available connection links. The proposed ResNet utilizes a skip connection in which the initial input data is combined with the convolution block’s outcome. This addresses the vanishing gradient problem by enabling an additional route for the gradient to move across. The proposed method also utilizes an identity function that allows a deeper layer to perform at least as well as a shallower one. The proposed model uses pre-processing, data segmentation, and post-processing phases [30,31,32].

Figure 4 presents the working of the proposed ResNet model. In the improved ResNet, the complete process is divided into four phases.

Fig. 4 (A) Long Skip Connection process in ResNet, (B) ResNet Bottleneck Block process, (C) ResNet Basic Block working, and (D) ResNet Simple Block working

In past research, researchers suggested numerous ResNet configurations, including ResNet-18, ResNet-34, ResNet-50, and ResNet-152. Each layer of a ResNet consists of several building blocks. The identity and convolution blocks are merged to produce the Improved ResNet structure in such implementations. This research uses an improved ResNet-50 layered model for segmentation because it has greater depth than ResNet-34 and fewer parameters than larger ResNet models, resulting in a quicker training period. Figure 4 shows the ResNet-50 architecture [33].

Training uses the binary cross-entropy and Dice losses:

$$L_{bce}=-\sum_{i}\left[{y}_{i}\,\mathrm{log}\,{O}_{i}+\left(1-{y}_{i}\right)\mathrm{log}\left(1-{O}_{i}\right)\right]$$
(3)
$$L_{dice}=-\frac{2\sum_{i}{O}_{i}{y}_{i}}{\sum_{i}{O}_{i}+\sum_{i}{y}_{i}}$$
(4)

where \({L}_{bce}\) represents the standard binary cross-entropy loss and \({L}_{dice}\) represents the Dice loss used during image segmentation.
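A TensorFlow sketch of Eqs. 3 and 4 follows; the clipping and smoothing constants are added numerical-stability assumptions, not part of the equations.

```python
# Sketch of the binary cross-entropy (Eq. 3) and Dice (Eq. 4) losses.
import tensorflow as tf

def bce_loss(y_true, y_pred):
    y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0 - 1e-7)   # avoid log(0)
    return -tf.reduce_sum(y_true * tf.math.log(y_pred)
                          + (1.0 - y_true) * tf.math.log(1.0 - y_pred))

def dice_loss(y_true, y_pred, smooth=1e-6):
    intersection = tf.reduce_sum(y_true * y_pred)
    denom = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
    return -(2.0 * intersection + smooth) / (denom + smooth)   # negative Dice, as in Eq. 4
```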

The complete process of the proposed Improved ResNet is as follows:

  • Step 1: It contains a two-dimensional convolution with 64 filters of size (7×7) and a stride of (2×2), followed by batch normalization and the ReLU activation function along the channel axis. Finally, max pooling with a (2×2) window is applied.

  • Step 2: It includes one two-dimensional convolution block and two identity blocks, each having three sets of filters [64, 64, 256] and a stride of size (1×1).

  • Step 3: It comprises one convolution block and three identity blocks, each with three sets of filters [128, 128, 512] and a stride of size (2×2).

  • Step 4: It contains one convolution block and five identity blocks, each using three sets of filters [256, 256, 1024] with a block size of (3×3) and a stride of size (2×2).

  • Step 5: It comprises one convolution block and two identity blocks, each with three sets of filters [512, 512, 2048] and a stride of size (2×2).

  • Step 6: The output is flattened, and a fully connected layer reduces it to the number of subclasses using a softmax activation, as sketched below.
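The sketch below assembles Steps 1–6 with Keras. The `conv_block` and `identity_block` helpers are hypothetical placeholders for the projection-shortcut and identity-shortcut blocks described above, and the input shape and class count are assumptions.

```python
# Sketch of the staged layout in Steps 1-6 (simplified; helper blocks are hypothetical).
from tensorflow.keras import Input, layers, models

def build_improved_resnet50(conv_block, identity_block,
                            input_shape=(240, 240, 4), num_classes=2):
    inputs = Input(shape=input_shape)
    # Step 1: 7x7/64 convolution with stride 2, BN + ReLU, then 2x2 max pooling
    x = layers.Conv2D(64, (7, 7), strides=(2, 2), padding="same")(inputs)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.MaxPooling2D((2, 2))(x)
    # Steps 2-5: one convolution (projection) block plus identity blocks per stage
    for filters, n_identity in [([64, 64, 256], 2), ([128, 128, 512], 3),
                                ([256, 256, 1024], 5), ([512, 512, 2048], 2)]:
        x = conv_block(x, filters)          # hypothetical projection-shortcut block
        for _ in range(n_identity):
            x = identity_block(x, filters)  # hypothetical identity-shortcut block
    # Step 6: flatten and map to class probabilities with softmax
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)
```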

Proposed work model description

Phase 1

Phase 1 represents the residual network with long skip connections. It contains down-sampling (represented by blue in Fig. 4), indicating a contracting path, and up-sampling (represented by orange in Fig. 4), indicating an expanding path. During this process, long skip connections link the contracting path to the expanding path, shown with left-to-right arrows in Fig. 4A.

Phase 2

Various (1×1) and (3×3) convolutions are used; these blocks are called bottlenecks. BN and ReLU are used in this phase [34,35,36]. The concept behind pre-activation ResNet is to employ BN-ReLU just before a convolution, as shown in Fig. 4B. The benefits of using these bottleneck blocks are shorter training time and improved performance. The bottleneck reduces the number of parameters and matrix multiplications; for example, 9 operations would be reduced to 6. The idea is to make residual blocks as thin as possible, increasing depth while keeping fewer parameters.
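A sketch of such a pre-activation bottleneck (BN-ReLU before each convolution, per Fig. 4B) follows; the filter counts and expansion factor are illustrative assumptions.

```python
# Sketch of a pre-activation bottleneck block; sizes are illustrative.
from tensorflow.keras import layers

def bottleneck_block(x, filters=64, expansion=4):
    shortcut = x
    # 1x1 convolution shrinks the channel count (the "bottleneck")
    f = layers.BatchNormalization()(x)
    f = layers.ReLU()(f)
    f = layers.Conv2D(filters, (1, 1))(f)
    # 3x3 convolution works on the thinner representation
    f = layers.BatchNormalization()(f)
    f = layers.ReLU()(f)
    f = layers.Conv2D(filters, (3, 3), padding="same")(f)
    # 1x1 convolution restores the channel count
    f = layers.BatchNormalization()(f)
    f = layers.ReLU()(f)
    f = layers.Conv2D(filters * expansion, (1, 1))(f)
    return layers.Add()([f, shortcut])   # assumes the shortcut has matching channels
```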

Phase 3

The third phase is the basic block phase, mainly utilizing (3×3) blocks only, not the (1×1) block. A basic ResNet block comprises two layers of 3×3 conv/BatchNorm/ReLU. In Fig. 4C, the lines represent the residual operation, and the dotted line means that a projection shortcut was applied to match the input and output dimensions.

Phase 4

The last phase is the simple block phase, which utilizes (3×3) blocks. Max pooling is used in this phase, which rejects a large chunk of the data and extracts only its most salient features. Max pooling binds the system to only the most important features and might miss some details.

Dataset description

This research utilized the BraTS2020 dataset [37]. BraTS consistently evaluates cutting-edge brain Tumor segmentation approaches on composite MRI scan data. BraTS 2020 uses multi-institutional pre-operative MRI data and concentrates on segmenting inherently heterogeneous (in shape, location, and cell biology) brain Tumors, such as gliomas. As described in Fig. 5, the images comprise T1-weighted (T1), post-contrast T1-weighted (T1ce), T2-weighted (T2), and fluid-attenuated inversion recovery (FLAIR) sequences. Each image has a size of (240×240×155) [38]. The dataset was collected from the online Kaggle website. It includes 369 brain Tumor MR images; 125 are utilized for training and 169 MRI images for testing. Figure 5 shows the brain Tumor types available in the BraTS 2020 dataset.

Fig. 5 Brain Tumor images in BraTS2020: (1) Tumor type T1, (2) Tumor type T2, (3) Tumor type T1c, and (4) Tumor type FLAIR
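For orientation, the fragment below sketches loading one BraTS2020 case with nibabel; the directory layout and file-naming pattern are assumptions about the Kaggle copy of the dataset, not a documented interface.

```python
# Sketch: loading the four modalities of one BraTS2020 case (file layout assumed).
import numpy as np
import nibabel as nib

MODALITIES = ["t1", "t1ce", "t2", "flair"]

def load_case(case_dir, case_id):
    # Each modality is a (240, 240, 155) NIfTI volume; stack into (240, 240, 155, 4)
    volumes = [nib.load(f"{case_dir}/{case_id}_{m}.nii").get_fdata() for m in MODALITIES]
    image = np.stack(volumes, axis=-1)
    seg = nib.load(f"{case_dir}/{case_id}_seg.nii").get_fdata()   # ground-truth label map
    return image, seg
```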

Performance measuring parameters

The following essential metrics were utilized to measure the performance of the proposed method and the existing ones [39,40,41].

Mean Square Error (MSE)

MSE is the average of the squared differences between the predicted and actual values. Equation 5 denotes the cumulative squared estimation error between the actual image I and the output image K as MSE:

$$MSE=\frac{1}{MN}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}{\left[I\left(i,j\right)-K\left(i,j\right)\right]}^{2}$$
(5)

Peak Signal Noise Ratio (PSNR)

PSNR relates to an image’s robustness against external noise interference signals. When the PSNR level is greater, the effect of the noisy interference signal on the MR image is minimal. PSNR is expressed in terms of MSE and should typically lie between 40 and 60 dB. It is calculated by Eq. 6, where \(Max_I\) is usually 255 and MSE is the mean square error:

$$PSNR=10\,{\mathrm{log}}_{10}\frac{{Max_I}^{2}}{MSE}$$
(6)
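A sketch of Eqs. 5 and 6 follows; the peak value of 255 is the usual assumption for 8-bit images.

```python
# Sketch of MSE (Eq. 5) and PSNR (Eq. 6) for two equally sized images.
import numpy as np

def mse(reference, output):
    diff = reference.astype(np.float64) - output.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(reference, output, max_i=255.0):   # 255 assumed for 8-bit images
    err = mse(reference, output)
    return np.inf if err == 0 else 10.0 * np.log10(max_i ** 2 / err)
```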

Computation Time

The time taken to complete the segmentation procedure, measured in milliseconds or seconds and reported as elapsed time.

Jaccard Coefficient (JC)

It also serves as a metric for evaluating segmentation strategies. Jaccard’s Eq. 7 computes the match between two sets Q1 and Q2 by normalizing the volume of their overlap over their union:

$$JC=\frac{\left|Q1\cap Q2\right|}{\left|Q1\cup Q2\right|}$$
(7)

Dice Similarity Coefficient (DSC)

The DSC is the most popular and common assessment indicator for comparing segmentation results against their ground truth. It measures the overlap of two sets, Q1 and Q2, normalized by the average of their sizes. DSC is presented in Eq. 8:

$$DSC=\frac{2\left|Q1\cap Q2\right|}{\left|Q1\right|+\left|Q2\right|}$$
(8)

Sensitivity and Specificity

The following Eqs. 9 and 10 calculate sensitivity and specificity as rule-based decision-theory measures, where TP is true positive, FP false positive, TN true negative, and FN false negative:

$$Sensitivity=\frac{TP}{TP+FN}$$
(9)
$$Specificity =\frac{TN}{ TN+FP}$$
(10)
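The overlap and confusion-matrix measures of Eqs. 7–10 can be sketched for binary masks as below; the small `eps` term is an added assumption to avoid division by zero.

```python
# Sketch of JC (Eq. 7), DSC (Eq. 8), sensitivity (Eq. 9), and specificity (Eq. 10).
import numpy as np

def segmentation_metrics(pred, truth, eps=1e-7):
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)     # true positives
    fp = np.sum(pred & ~truth)    # false positives
    tn = np.sum(~pred & ~truth)   # true negatives
    fn = np.sum(~pred & truth)    # false negatives
    return {
        "JC":  tp / (tp + fp + fn + eps),           # overlap over union
        "DSC": 2 * tp / (2 * tp + fp + fn + eps),   # overlap over average size
        "Sensitivity": tp / (tp + fn + eps),
        "Specificity": tn / (tn + fp + eps),
    }
```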

Results

Training results

In this research, the BraTS2020 dataset collected from Kaggle has been used [35]. This dataset contains 369 brain Tumor patient MR images, of which 125 are utilized for training and 169 for testing. The proposed improved ResNet model, the existing CNN model, and an FCN (U-Net type) are implemented in Python (TensorFlow) in the Anaconda environment. The complete experimental process is divided into two phases, training and testing, with the training phase applied first.

In the first phase, the normalization process is used. The dataset was corrected at this initial stage because it contained some intensity bias-field distortion, for which the N4ITK technique was adopted. This technique processes all four MRI brain Tumor image sequences of a particular patient, which helps in Tumor growth and sequencing analysis.
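A sketch of N4 bias-field correction with SimpleITK follows; the float casting and Otsu head mask are assumptions about a typical pipeline, not the exact settings used in this study.

```python
# Sketch: N4ITK bias-field correction of one MRI volume (typical settings assumed).
import SimpleITK as sitk

def n4_correct(path_in, path_out):
    image = sitk.ReadImage(path_in, sitk.sitkFloat32)
    mask = sitk.OtsuThreshold(image, 0, 1)          # assumed: Otsu mask of the head region
    corrected = sitk.N4BiasFieldCorrection(image, mask)
    sitk.WriteImage(corrected, path_out)
```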

This work presents an improved residual network-based approach for Tumor segmentation from multi-modal 3-dimensional MRI images, using the BraTS 2020 brain Tumor dataset for performance validation. Several possible configurations were tried while experimenting with the CNN models. Table 1 shows the proposed improved ResNet system parameters utilized for training. After normalization, the stochastic gradient descent optimization method (SGDOM) manages the loss function; its updates follow the negative gradient toward the model minima. The training performance of the proposed improved ResNet and the existing CNN and FCN is described in Fig. 6.

Table 1 Training parameters of the proposed improved ResNet model
Fig. 6 Experimental outcomes for training accuracy of the proposed improved ResNet and the existing CNN and FCN

The proposed enhanced ResNet model shows a lower error rate and higher accuracy in the training phase than existing methods. The proposed improved ResNet model is validated using thirty percent of the training dataset in this experiment.

Testing results

Figure 7 represents the performance validation of the proposed improved ResNet model over 50 epochs. Experimental outcomes show that the training error rate decreases linearly and the accuracy percentage increases with each epoch. The test dataset is applied to the proposed and existing models in the testing phase to identify brain Tumor cells in MRI images. The proposed improved ResNet model is compared with other existing methods on the performance metrics for CT, ET, and WT to analyze Tumor segmentation performance. All performance measures were taken for each patient in the given dataset, and their mean values were then calculated over all patients. Figure 8 shows the experimental results of the proposed Improved ResNet Model.

Fig. 7 Experimental outcomes for training error rate of the proposed improved ResNet and the existing CNN and FCN

Fig. 8 Experimental results of the proposed Improved ResNet Model

Discussions

Brain Tumor segmentation and detection is a widely known area of research. Various deep learning models have been applied to all brain Tumor regions: the core Tumor region (CT), the enhanced Tumor region (ET), and the whole Tumor region (WT).

The proposed Improved ResNet model is based on skip links that perform identity mapping: a layer’s input is merged with the outcome of the convolution layer without using any extra model parameters. This implies that a layer in the ResNet prototype tries to learn the residual of its interconnections.

In contrast, layers in CNN and FCN (U-Net) methods learn the actual mapping. Consequently, in the proposed model the gradients can move quickly backwards, leading to faster computation than CNN and FCN models. The shortcut links in the proposed Improved ResNet model regulate the vanishing gradient issue.

Tables 2, 3, and 4 compare the proposed ResNet and the existing models (CNN and FCN) on the JC, Dice score, sensitivity, specificity, and accuracy parameters for CT, ET, and WT, respectively, on the BraTS2020 dataset.

Table 2 Comparison of Existing and proposed improved ResNet model for Core Tumor Region (CT)
Table 3 Comparison of Existing and proposed improved ResNet model for Enhanced Tumor Region (ET)
Table 4 Comparison of Existing and proposed improved ResNet model for Whole Tumor Region (WT)

According to the assessment conducted for CT, the proposed model achieves JC, Dice score, sensitivity, specificity, and accuracy of 0.658, 0.924, 0.7613, 0.835, and 0.854, respectively. Similarly, for ET the proposed model achieves 0.6328, 0.945, 0.7989, 0.926, and 0.913, and for WT it gives 0.6308, 0.864, 0.7365, 0.923, and 0.879.

These results show improvement over CNN and FCN due to the four-phase process of the proposed model. The proposed Improved ResNet model has better outcomes for all three Tumor cases (ET, CT, and WT), showing that it performs well in brain Tumor segmentation. Table 5 demonstrates that the proposed Improved ResNet model has the lowest computation time and the best PSNR and MSE. The proposed method has better MSE and PSNR results than the existing CNN and FCN methods; a lower MSE value indicates better performance. The proposed method improves MSE by 26.898% and PSNR by 21.457%, both more than 20% better than CNN and FCN.

Table 5 Experimental results of Existing and proposed improved ResNet model for Enhanced Tumor Region (ET)

Conclusion & future work

Deep Neural Networks (DNNs) are very useful for image segmentation. However, this technique encounters a vanishing gradient issue that emerges during training. To address this issue, the Improved ResNet is proposed in this research. A “connection link” inside the ResNet allows the gradient to propagate backwards to earlier layers. These links gather all the possible route details in a single place, making them accessible in a single step and reducing access time. This paper also presents a pre-processing approach in the proposed method to eliminate much irrelevant data, resulting in impressive outcomes.

The proposed Improved ResNet and the existing CNN and FCN models are implemented using TensorFlow and tested on the BraTS2020 dataset. Experimental results demonstrate the strength of the proposed method in terms of better accuracy, lower computation time and MSE, and better PSNR, DSC, and JC. A further strength of the proposed improved ResNet model is that users do not require the assistance of an expert to manually find the Tumor pixel by pixel, a complex and time-consuming operation. The proposed model tackles these issues by utilizing shortcut connection links in ResNet.

The experimental outcomes achieve better performance and remarkable results compared with conventional techniques. In the binary classification problem, accuracy and precision were examined, as was the Dice coefficient score throughout the segmentation experiment. Future research can improve the current outcomes and leverage deeper architectures to improve the overall effectiveness of the segmentation output.

Availability of data and materials

This work utilizes the online brain Tumor available dataset data from the Kaggle BraTS2020 competition. The following is the link: https://www.kaggle.com/datasets/awsaf49/brats20-dataset-training-validation (accessed on 13 March 2022).

Abbreviations

MRI:

Magnetic resonance image

DNN:

Deep Neural Networks

ResNet:

Residual Network

FCN:

Fully Convolutional Neural Network

VGG:

Visual Geometry Group

RL:

Residual learning value

CT:

Core Tumor Region

MSE:

Mean Square Error

JC:

Jaccard Coefficient

MR:

Magnetic Resonance

PET:

Positron emission tomography

TP:

True Positive

FP:

False Positive

TN:

True Negative

FN:

False Negative

WT:

Whole Tumor Region

ET:

Enhanced Tumor Region

PSNR:

Peak Signal Noise Ratio

DSC:

Dice Similarity Coefficient

SGDOM:

Stochastic Gradient Descent optimization method

RELM:

Regularized Extreme Learning Machine

References

  1. Tiwari A, Srivastava S, Pant M. Brain Tumor segmentation and classification from magnetic resonance images: Review of selected methods from 2014 to 2019. Pattern Recognition Letters. 2020;131:244–60. https://doi.org/10.1016/j.patrec.2019.11.020

  2. Munir K, Frezza F, Rizzi A. Brain Tumor segmentation using 2D-UNET convolutional neural network. Deep Learning for Cancer Diagnosis. 2021:239–48. https://doi.org/10.1007/978-981-15-6321-8_14

  3. Aher P, Lilhore U. Survey of brain Tumor image quarrying techniques. Int J Sci Eng Dev Res, ISSN. 2020:2455–631.

  4. Zhang D, Huang G, Zhang Q, Han J, Han J, Yu Y. Cross-modality deep feature learning for brain Tumor segmentation. Pattern Recogn. 2021;1(110). https://doi.org/10.1016/j.patcog.2020.107562

  5. Silva CA, Pinto A, Pereira S, Lopes A. Multi-stage deep layer aggregation for brain Tumor segmentation. In: Brainlesion: Glioma, Multiple Sclerosis, Stroke, and Traumatic Brain Injuries: 6th International Workshop, BrainLes 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, October 4, 2020, Revised Selected Papers, Part II. 2021 (pp. 179–188). Springer International Publishing. https://doi.org/10.1007/978-3-030-72087-2_16

  6. Zhou T, Canu S, Vera P, Ruan S. Feature-enhanced generation and multi-modality fusion based deep neural network for brain Tumor segmentation with missing MR modalities. Neurocomputing. 2021;27(466):102–12. https://doi.org/10.1016/j.neucom.2021.09.032.


  7. Lin F, Wu Q, Liu J, Wang D, Kong X. Path aggregation U-Net model for brain Tumor segmentation. Multimedia Tools Appl. 2021;80:22951–64. https://doi.org/10.1007/s11042-020-08795-9.


  8. Das S, Swain MK, Nayak GK, Saxena S. Brain Tumor segmentation from 3D MRI slices using cascaded convolutional neural network. Advances in Electronics, Communication, and Computing: Select Proceedings of ETAEERE 2020 2021 (pp. 119–126). Springer Singapore. https://doi.org/10.1007/978-981-15-8752-8_12

  9. Zhang Y, Lu Y, Chen W, Chang Y, Gu H, Yu B. MSMANet: a multi-scale mesh aggregation network for brain Tumor segmentation. Appl Soft Comput. 2021;1(110):107733. https://doi.org/10.1016/j.asoc.2021.107733


  10. Munir K, Frezza F, Rizzi A. Deep learning for brain Tumor segmentation. Deep Learning for Cancer Diagnosis. 2021:189–201. https://doi.org/10.1007/978-981-15-6321-8_11

  11. Vaibhavi P, Rupal K. Brain Tumor segmentation using K-means–FCM hybrid technique. In: Ambient Communications and Computer Systems: RACCCS 2017. 2018 (pp. 341–352). Springer Singapore. https://doi.org/10.1007/978-981-10-7386-1_30

  12. Sharif MI, Li JP, Amin J, Sharif A. An improved framework for brain Tumor analysis using MRI based on YOLOv2 and convolutional neural network. Complex Intell Syst. 2021;7:2023–36. https://doi.org/10.1007/s40747-021-00310-3.


  13. Saueressig C, Berkley A, Munbodh R, Singh R. A joint graph and image convolution network for automatic brain Tumor segmentation. In: Brainlesion: Glioma, Multiple Sclerosis, Stroke, and Traumatic Brain Injuries: 7th International Workshop, BrainLes 2021, Held in Conjunction with MICCAI 2021, Virtual Event, September 27, 2021, Revised Selected Papers, Part I. Cham: Springer International Publishing; 2022. p. 356–65. https://doi.org/10.1007/978-3-031-08999-2_30.


  14. Zeineldin RA, Karar ME, Coburger J, Wirtz CR, Burgert O. DeepSeg: deep neural network framework for automatic brain Tumor segmentation using magnetic resonance FLAIR images. Int J Computer-Assisted Radiol Surg. 2020;15:909–20. https://doi.org/10.1007/s11548-020-02186-z.


  15. Abd El Kader I, Xu G, Shuai Z, Saminu S, Javaid I, Salim Ahmad I. Differential deep convolutional neural network model for brain Tumor classification. Brain Sci. 2021;11(3):352. https://doi.org/10.3390/brainsci11030352.


  16. Deng W, Shi Q, Luo K, Yang Y, Ning N. Brain Tumor segmentation based on improved convolutional neural network in combination with non-quantifiable local texture feature. J Med Syst. 2019;43:1–9. https://doi.org/10.1007/s10916-019-1289-2.


  17. Bodapati JD, Shaik NS, Naralasetti V, Mundukur NB. Joint training of two-channel deep neural network for brain Tumor classification. SIViP. 2021;15(4):753–60. https://doi.org/10.1007/s11760-020-01793-2.


  18. Zhou Z, He Z, Jia Y. AFPNet: A 3D fully convolutional neural network with atrous-convolution feature pyramid for brain Tumor segmentation via MRI images. Neurocomputing. 2020;18(402):235–44. https://doi.org/10.1016/j.neucom.2020.03.097.


  19. Jiang Y, Ye M, Huang D, Lu X. AIU-Net: An Efficient Deep Convolutional Neural Network for Brain Tumor Segmentation. Math Probl Eng. 2021;4(2021):1–8. https://doi.org/10.1155/2021/7915706.


  20. Díaz-Pernas FJ, Martínez-Zarzuela M, Antón-Rodríguez M, González-Ortega D. A deep learning approach for brain Tumor classification and segmentation using a multi-scale convolutional neural network. Healthcare. 2021;9(2):153. https://doi.org/10.3390/healthcare9020153. MDPI.


  21. Saleem H, Shahid AR, Raza B. Visual interpretability in 3D brain Tumor segmentation network. Comput Biol Med. 2021;1(133):104410.https://doi.org/10.1016/j.compbiomed.2021.104410


  22. Gupta S, Gupta M. Deep learning for brain Tumor segmentation using magnetic resonance images. In: 2021 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB). 2021 (pp. 1–6). IEEE. https://doi.org/10.1109/CIBCB49929.2021.9562890

  23. Kamnitsas K, Ferrante E, Parisot S, Ledig C, Nori AV, Criminisi A, Rueckert D, Glocker B. DeepMedic for brain Tumor segmentation. In: Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries: Second International Workshop, BrainLes 2016, with the Challenges on BRATS, ISLES and mTOP 2016, Held in Conjunction with MICCAI 2016, Athens, Greece, October 17, 2016, Revised Selected Papers. 2016 (pp. 138–149). Springer International Publishing. https://doi.org/10.1007/978-3-319-55524-9_14

  24. Hao K, Lin S, Qiao J, Tu Y. A generalised pooling for brain Tumor segmentation. IEEE Access. 2021;23(9):159283–90. https://doi.org/10.1109/ACCESS.2021.3130035.


  25. Iqbal S, Ghani MU, Saba T, Rehman A. Brain Tumor segmentation in multi-spectral MRI using convolutional neural networks (CNN). Microsc Res Tech. 2018;81(4):419–27. https://doi.org/10.1002/jemt.22994.


  26. Isensee F, Jäger PF, Full PM, Vollmuth P, Maier-Hein KH. nnU-Net for brain Tumor segmentation. In: Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries: 6th International Workshop, BrainLes 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, October 4, 2020, Revised Selected Papers, Part II. 2021 (pp. 118–132). Springer International Publishing. https://doi.org/10.1007/978-3-030-72087-2_11

  27. Liu H, Li Q, Wang IC. A deep-learning model with learnable group convolution and deep supervision for brain Tumor segmentation. Math Probl Eng. 2021;10(2021):1–1. https://doi.org/10.1155/2021/6661083.


  28. Ramesh TR, Lilhore UK, Poongodi M, Simaiya S, Kaur A, Hamdi M. Predictive analysis of heart diseases with machine learning approaches. Malays J Comput Sci. 2022;31:132–48. https://doi.org/10.22452/mjcs.sp2022no1.10.


  29. Chen S, Ding C, Liu M. Dual-force convolutional neural networks for accurate brain Tumor segmentation. Pattern Recogn. 2019;1(88):90–100. https://doi.org/10.1016/j.patcog.2018.11.009.


  30. Wadhwa A, Bhardwaj A. Verma VS A review on brain Tumor segmentation of MRI images. Magn Reson Imaging. 2019;1(61):247–59. https://doi.org/10.1016/j.mri.2019.05.043.


  31. Lilhore U, Kumar S, Simaiya D, Prasad K. A Hybrid Tumor detection and classification based on machine learning. J Comput Theor Nanosci. 2020;17(6):2539–44. https://doi.org/10.1166/jctn.2020.8927.


  32. Wang Y, Peng J, Jia Z. Brain Tumor segmentation via c-dense convolutional neural network. Progress in Artificial Intelligence. 2021;10:147–56. https://doi.org/10.1007/s13748-021-00232-8.


  33. Punn NS, Agarwal S. Multi-modality encoded fusion with 3D inception U-net and decoder model for brain Tumor segmentation. Multimedia tools and applications. 2021;80(20):30305–20. https://doi.org/10.1007/s11042-020-09271-0.


  34. Havaei M, Davy A, Warde-Farley D, Biard A, Courville A, Bengio Y, Pal C, Jodoin PM, Larochelle H. Brain Tumor segmentation with deep neural networks. Med Image Anal. 2017;1(35):18–31. https://doi.org/10.1016/j.media.2016.05.004.


  35. Online Kaggle Brain Tumor dataset. BraTS2020 Dataset (Training + Validation). 2022. p. 13.


  36. Sharif MI, Li JP, Khan MA, Saleem MA. Active deep neural network features selection for segmentation and recognition of brain Tumors using MRI images. Pattern Recogn Lett. 2020;1(129):181–9. https://doi.org/10.1016/j.patrec.2019.11.019.


  37. Singh K, Lilhore U, Agrawal N. Survey on different Tumor detection methods from MR images. Int J Sci Res Comput Sci Eng Inf Technol. 2017;5:589–94.


  38. Ghassemi N, Shoeibi A, Rouhani M. Deep neural network with generative adversarial networks pre-training for brain Tumor classification based on MR images. Biomed Signal Process Control. 2020;1(57):101678.https://doi.org/10.1016/j.bspc.2019.101678


  39. Saouli R, Akil M, Kachouri R. Fully automatic brain Tumor segmentation using end-to-end incremental deep neural networks in MRI images. Comput Methods Programs Biomed. 2018;1(166):39–49. https://doi.org/10.1016/j.cmpb.2018.09.007.


  40. Simaiya S, Lilhore UK, Prasad D, Verma DK. MRI brain Tumor detection & image segmentation by hybrid hierarchical K-means clustering with FCM-based machine learning model. Ann Roman Soc Cell Biol. 2021;28:88–94.


  41. Jia Q, Shu H. Bitr-unet: a cnn-transformer combined network for MRI brain Tumor segmentation. In: Brain lesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries: 7th International Workshop, Brain Les 2021, Held in Conjunction with MICCAI 2021, Virtual Event, September 27, 2021, Revised Selected Papers, Part II. Cham: Springer International Publishing; 2022. p. 3–14. https://doi.org/10.1007/978-3-031-09002-8_1.



Acknowledgements

We pay sincere thanks to all cited researchers.

Funding

No external funding was received for this research from any international or national body.

Author information


Contributions

MA: writing and implementation of the proposed algorithm, results gathering, manuscript writing, analysis and interpretation of data. AKT: Supervision, formal analysis, validation, editing. MPS: formal analysis, critical manuscript revision, investigation, editing. AB: BraTS data set analysis, investigation, validation, writing literature—review and editing. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Anchit Bijalwan.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The corresponding author declares, on behalf of all co-authors and themselves, that there is no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Aggarwal, M., Tiwari, A.K., Sarathi, M. et al. An early detection and segmentation of Brain Tumor using Deep Neural Network. BMC Med Inform Decis Mak 23, 78 (2023). https://doi.org/10.1186/s12911-023-02174-8

