Skin lesion segmentation using deep learning algorithm with ant colony optimization
BMC Medical Informatics and Decision Making volume 24, Article number: 265 (2024)
Abstract
Background
Segmentation of skin lesions remains essential in histological diagnosis and skin cancer surveillance. Recent advances in deep learning have paved the way for substantial improvements in medical imaging. The Hybrid Residual Network (ResUNet) model, supplemented with Ant Colony Optimization (ACO), combines these advances to improve the efficiency and effectiveness of skin lesion diagnosis.
Objective
This paper seeks to evaluate the effectiveness of the Hybrid ResUNet model for skin lesion classification and to assess the impact of ACO-based optimization on its performance, bridging the gap between computational efficiency and clinical utility.
Methods
The study used a deep learning design on a complex dataset that included a variety of skin lesions. The method includes training a Hybrid ResUNet model with standard parameters and fine-tuning using ACO for hyperparameter optimization. Performance was evaluated using traditional metrics such as accuracy, dice coefficient, and Jaccard index compared with existing models such as residual network (ResNet) and U-Net.
Results
The proposed hybrid ResUNet model exhibited excellent classification accuracy, reflected in noticeable improvements across all evaluated metrics. Its ability to delineate complex lesions was particularly outstanding, improving diagnostic accuracy. Our experimental results demonstrate that the proposed Hybrid ResUNet model outperforms existing state-of-the-art methods, achieving an accuracy of 95.8%, a Dice coefficient of 93.1%, and a Jaccard index of 87.5%.
Conclusion
The integration of ACO with ResUNet in the proposed Hybrid ResUNet model significantly improves the classification of skin lesions. This integration goes beyond traditional paradigms and demonstrates a viable strategy for deploying AI-powered tools in clinical settings.
Future work
Future investigations will focus on extending the model's capabilities by using multi-modal imaging information, experimenting with alternative optimization algorithms, and assessing real-world clinical applicability. There is also promising scope for improving computational performance and exploring the model's interpretability to support broader clinical adoption.
Introduction
Skin cancer is one of the most prevalent forms of cancer globally, impacting millions of individuals each year. It encompasses various subtypes, including melanoma, basal cell carcinoma (BCC), and squamous cell carcinoma (SCC), each with distinct levels of malignancy and public health implications [1]. Among these, melanoma is the most lethal if not detected early, leading to the majority of skin cancer-related deaths despite accounting for a smaller proportion of cases compared to BCC and SCC [2]. The incidence of melanoma continues to rise worldwide, with estimates indicating over 150,000 new cases annually, resulting in approximately 48,000 deaths each year [3].
The economic burden of skin cancer extends beyond its direct impact on health, significantly straining healthcare systems. Costs associated with preventive measures, diagnosis, treatment, and long-term care contribute to substantial financial challenges. For example, in the United States alone, the annual cost of treating skin cancers exceeds $8.1 billion, underscoring the economic impact of this disease [4].
Geographically, skin cancer rates differ, with the highest rates found in countries with greater exposure to ultraviolet (UV) radiation; Australia and New Zealand report the highest rates worldwide [5]. These variations are mostly the result of differences in skin types, daily lifestyles, and the sun protection measures used by each community.
Prevention and early detection are important strategies to address global skin cancer problems. Programs to reduce UV exposure are urgent and should include sunscreen and protective clothing. Moreover, public awareness campaigns geared towards early detection through routine skin examinations and expert dermatology have effectively reduced the mortality rate due to skin cancer [6].
The global impact of skin cancer has profound implications for public health policy and healthcare systems. Continued efforts in disease prevention, early detection, and the development of alternative treatments are critical for mitigating the widespread effects of this common condition. Early detection remains paramount, as it significantly improves survival rates while reducing the need for aggressive treatments. Research emphasizes the importance of early diagnosis, which typically leads to favorable outcomes with less invasive interventions [7]. In most cases, dermatologists detect skin cancer early enough to apply minimally invasive treatment methods, which are highly efficient and successful [8].
Furthermore, AI-based diagnostics have already proven helpful in many cases. These models and algorithms, built with machine learning and deep learning techniques, can examine dermatological images with great accuracy and differentiate between benign and malignant lesions on par with dermatologists who have extensive experience in this area [9]. This capability can assist dermatologists in making more precise diagnoses and holds promise for wide screening in primary care settings, where specialized dermatological expertise may be scarce. Digital dermoscopy tools and AI-supported image analysis are breakthroughs in medicine. These devices can store and compare images over time, enabling the discovery of even the smallest changes in a patient's skin lesions that could be malignant. These technologies improve diagnostic precision and make the patient care process much more efficient in the long term. Telemedicine has also increased access to dermatological expertise, especially in rural or underserved areas. Teledermatology is a form of telemedicine that provides remote diagnosis and management of skin lesions by sharing digital images between primary care providers and dermatologists, helping to speed up diagnosis and enable early intervention [10].
Early diagnosis plays a crucial role in treating skin cancer: it improves the prognosis, lowers healthcare costs, and lessens the need for extensive treatments. Incorporating new diagnostic technologies and methodologies, especially AI and telemedicine, is essential to these processes and may ultimately transform skin cancer management globally. Progress in multispectral and hyperspectral imaging technologies has been key to the detailed analysis of skin lesions, achieved by capturing information across diverse wavelengths [11]. These imaging modalities can detect slight variations in skin coloration and blood flow that cannot be seen by the naked eye and that may signal melanoma onset. Hyperspectral imaging in particular can pick out spectral signatures specific to cancerous tissue, making it a candidate for non-invasive skin cancer diagnosis [12].
Artificial intelligence (AI) has transformed dermatological imaging by enabling the analysis of complex image datasets. Thanks to vast training data, deep learning models now match dermatologists in accuracy on several diagnostic tasks. These algorithms analyze high-resolution ultrasound (HRUS), optical coherence tomography (OCT), and other imaging modalities to identify malignancies and recommend diagnoses, substantially facilitating the diagnostic process and decreasing the rate of human error [13].
This research puts forward a hybrid model combining the ResUNet architecture with ant colony optimization (ACO). The integration takes advantage of deep learning's capability for spatial data processing and ACO's optimization of the training parameters; combined, the two yield a robust algorithm specifically designed for skin lesion segmentation. Thanks to this hybrid model, the study outperforms other methods in image segmentation accuracy. The model can precisely delineate lesion boundaries in different contexts, which greatly improves the accuracy and applicability of automated skin lesion analysis and is critical for the early detection and treatment planning of skin cancer. The use of ACO for hyperparameter optimization is one of the major improvements over conventional training procedures: the method automatically adjusts parameters such as learning rate and batch size during training, so the model can improve its performance without human intervention, and it may reduce the time and computational resources needed for training.
This study aims to design, implement, and evaluate a novel hybrid ResUNet model that incorporates the structural benefits of ResNet and U-Net architectures to improve the accuracy and efficiency of skin lesion segmentation. This includes the integration of ACO for hyperparameter tuning, aiming to optimize model performance. The study also assesses the model's robustness across various lesion types and potential for practical deployment in clinical settings. The overarching goal is to advance the field of medical image analysis and provide a tool that can aid in the early detection and treatment of skin cancer.
"Related work" section offers a literature review analyzing the gaps in current methods and existing technologies. "Proposed hybrid ResUNet model" section provides information regarding the hybrid deep learning model employed in the study, which includes ResUNet and ACO and experimental setup design. The outcomes and the discussion section exhibit the model in action and the visual results to showcase how effective the model has been. It summarizes the study's results and how they fit in with other research findings, showing how the study can contribute to clinical practice. The paper concludes with a "Conclusion and future work" section that shows the study results and offers directions for further research.
Related work
The increasing complexity and variety of the skin, with its many pigmentations and textures, still challenge current models. The problems of obtaining accurate lesion contours and of generalizing models across diverse datasets remain unsolved. This underscores the need for novel methods that increase segmentation precision while offering adaptability and efficiency.
Existing techniques in skin lesion segmentation
The field of skin lesion segmentation has undergone great change thanks to different imaging and computational techniques. These innovations have improved the diagnostic outcome of skin cancer by making diagnosis more accurate and efficient. This part of the review discusses the current advanced techniques for skin lesion segmentation.
Dermoscopy has dramatically revolutionized skin cancer diagnosis by allowing a non-invasive view of the skin surface at better resolution than the naked eye. Its mode of action reduces skin gloss and allows intricate assessment of colors and micro-structures in the lesion that cannot normally be seen. The latest innovations entail automated systems that analyze these images for asymmetry, border irregularity, color variation, and diameter, the main features of malignancy under the asymmetry, border, color, diameter (ABCD) rule [14]. Among deep learning approaches, Convolutional Neural Networks (CNNs) have become the most frequently implemented for automated analysis of dermoscopic images. CNNs can learn complex patterns in the data without manual feature extraction. U-Net, for example, is a very popular architecture designed specifically for medical image segmentation that can effectively process and segment images at different scales [15]. AI-based models effectively distinguish benign from malignant skin lesions, making them useful tools for dermatologists. Researchers are further improving the reliability and precision of skin lesion analysis by developing ensemble approaches that combine the predictions of several machine learning models. Such multimodal classifiers, building on the strengths of different algorithms, are less likely to give false negatives and have higher overall reliability [16]. AI continues to improve segmentation techniques across imaging modalities. AI algorithms can now perform automatic lesion boundary identification, which is indispensable for accurate lesion excision and management. AI can also integrate the patient's clinical data, such as risk factors and medical history, into the segmentation process, making it more informed and patient-tailored [17].
Using multi-modal imaging data, which includes dermoscopy, ultrasound, and magnetic resonance imaging (MRI), helps AI models gain a deeper understanding. This method of examination makes it possible to study the superficial characteristics, depth, and spread of the lesion, which in turn aids doctors in making a clear diagnosis and planning treatment [18]. The existing approaches used for skin lesion segmentation are varied and powerful, ranging from advanced imaging methods to complex mathematical models. These techniques have contributed to highly accurate and efficient technology for the early detection of skin cancer, underscoring the importance of technology in the medical field; they are summarized in Table 1. Researchers commonly use the International Skin Imaging Collaboration (ISIC) dataset.
Deep learning in medical images
Deep learning has changed how clinicians see and treat patients, providing powerful instruments for more accurate and faster diagnosis and treatment. This portion of the text delves into the notable role of deep learning in medical image analysis, encompassing its application across radiology, MRI, computed tomography (CT), and dermatologic imaging.
Deep learning is particularly useful in medical imaging because it can extract features from complicated datasets without manual intervention. Conventional image processing approaches frequently entail manually chosen feature extraction and engineering, which may be laborious and fail to capture all the information. CNNs and deep learning models in general are trained to learn their characteristics from data automatically, without human intervention, resulting in more reliable and comprehensive analysis [19]. Deep learning models have been shown to perform as well as or better than humans in some diagnostic tasks. In dermatology, for example, CNNs have been applied to the classification of skin lesions with accuracy comparable to that of dermatologists; these algorithms work on dermoscopic images, marking malignant signs and giving a high diagnosis rate for skin cancer [20]. The same is true for radiology, where deep learning algorithms have greatly helped identify and characterize abnormalities in MRI and CT scans, e.g., determining whether a tumor is malignant or benign and how it grows over time [21].

Neural networks can process huge volumes of medical imaging data in a fraction of a second, allowing physicians to make timely decisions. This attribute is critical in emergency medicine and surgery, where time-sensitive decisions can affect the patient's outcome. For example, deep learning is used to accelerate image reconstruction in MRI, which decreases scan times without compromising image quality [22]. These techniques are also vital to personalized medicine, where medical images are analyzed together with the patient's genetic, demographic, and clinical data. Such an integrative method supports treatment plans tailored to the individual characteristics of a particular person; for instance, deep learning models that process image data alongside patient histories and genetic profiles could be used to tailor cancer care plans, predict patient outcomes, and find the best treatment options [23].

Although deep learning provides extensive benefits, some challenges remain in its application to medical image analysis. Recurring issues such as data privacy, the need for sizable annotated training datasets, and the 'black box' nature of deep learning models are major concerns. In addition, transitioning these technologies into practice calls for thorough validation and regulatory approval to certify safety and efficacy [24]. Deep learning has transformed medical image processing, delivering higher accuracy, speed, and more personalized care; as the technology advances, the focus will shift further toward improving patient outcomes and delivering healthcare services more efficiently [25,26,27,28,29].
Table 2 synthesizes crucial information about each study, providing an at-a-glance understanding of how deep learning has been applied across different areas of medical imaging. The references are provided for further reading and validation of the information. This structured approach helps quickly compare and analyze deep learning advancements in medical imaging.
The authors detail several specific instances, ranging from broader imaging applications to more particular subfields such as dermatology and radiology, showing how deep learning is evolving to improve diagnostic accuracy and efficiency in virtually every medical field.
Proposed hybrid ResUNet model
This study's hybrid ResUNet model, meant for skin lesion analysis, can be considered a breakthrough in medical image segmentation. This part elaborates on the model design, combining the classical U-Net and ResNet concepts to improve segmentation precision and computational efficacy.
The hybrid ResUNet model combines U-Net's effective feature extraction and localization capabilities with ResNet's residual learning approach. This union enables deeper networks without the vanishing gradient problem typical of standard convolutional architectures.
Hybrid ResUNet model
ResNet is designed to address the degradation problem encountered in deep neural networks, where adding more layers leads to higher training errors [31]. The key innovation of ResNet is the introduction of residual learning through shortcut connections that allow gradients to flow directly through the network, enabling the training of very deep networks. ResNet features identity mappings that bypass one or more layers, effectively creating shortcut or skip connections. This architecture enables the network to learn residual functions, which are easier to optimize. ResNet variants such as ResNet-50, ResNet-101, and ResNet-152 are widely used for image classification, object detection, and semantic segmentation, and ResNet's ability to maintain performance with increasing depth has made it a standard in deep learning architectures.

The model starts with the Input Layer, designed to accept 128 × 128 pixel images, the size most commonly used in dermatoscopic datasets. The Encoder then applies multiple convolutional layers with filter counts growing from 64 to 1024, capturing complex features at different scales and levels of detail. A residual block accompanies each convolutional layer to propagate features and gradients efficiently, and max pooling layers are interspersed to reduce spatial dimensions and expand the receptive field. At the Bridge, a central bottleneck composed of dense convolutional layers processes the deepest compressed features, linking the encoder to the decoder. The Decoder pathway includes transposed convolutional layers for upsampling feature maps, concatenates these with outputs from the encoder to preserve high-resolution details, and applies additional convolutional layers post-upsampling to refine the maps. Finally, the Output Layer employs a 1 × 1 convolution to transform the deep feature representations into the desired output classes, distinguishing lesion from non-lesion areas. Figure 1 depicts the architecture of the Hybrid ResUNet.
The function of a residual block can be mathematically represented as in Eq. (1):

\(H\left(x\right)=F\left(x\right)+x\)

(1)
where \(x\) is the input to the residual block, \(F(x)\) is the output from the last convolutional layer within the block, and \(H(x)\) is the final output of the residual block. This formulation helps in training deeper networks by addressing the degradation problem.
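As a concrete illustration, the sketch below shows how a residual block of this form, and an encoder-decoder assembly in the spirit of Figure 1, might be written in Keras (the framework used in this study). It is a minimal sketch: the function names, filter progression, and use of batch normalization are our assumptions, not the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def residual_block(x, filters):
    """Residual block implementing H(x) = F(x) + x, with a 1x1 projection
    when the channel counts of F(x) and x differ."""
    shortcut = x
    y = layers.Conv2D(filters, 3, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    if shortcut.shape[-1] != filters:
        shortcut = layers.Conv2D(filters, 1, padding="same")(shortcut)
    y = layers.Add()([y, shortcut])          # H(x) = F(x) + x, Eq. (1)
    return layers.Activation("relu")(y)

def build_hybrid_resunet(input_shape=(128, 128, 1), filters=(64, 128, 256, 512)):
    inputs = layers.Input(shape=input_shape)
    # Encoder: residual blocks with max pooling between scales
    skips, x = [], inputs
    for f in filters:
        x = residual_block(x, f)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    # Bridge: deepest compressed representation (1024 filters here)
    x = residual_block(x, filters[-1] * 2)
    # Decoder: transposed convolutions plus skip concatenation
    for f, skip in zip(reversed(filters), reversed(skips)):
        x = layers.Conv2DTranspose(f, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skip])
        x = residual_block(x, f)
    # 1x1 convolution maps features to a lesion / non-lesion probability
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
    return Model(inputs, outputs, name="hybrid_resunet")
```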
Integration of ant colony optimization
The Hybrid ResUNet employs ACO to fine-tune the network's hyperparameters, optimizing its performance. This section describes how ACO is implemented in the model and how its operations act on the input data.
ACO is a probabilistic method for efficiently solving computational problems that can be expressed as finding good paths through graphs. An ACO algorithm imitates the foraging behavior of ants, optimizing over a graph via pheromone trails on its edges. For neural networks, ACO is used to choose preferable hyperparameters such as learning rate, batch size, and number of epochs, which are critical for model quality. The fitness function in ACO measures the quality or suitability of a particular solution in optimizing the performance of the Hybrid ResUNet model. It evaluates each solution by how much it improves the model's performance, guiding the search toward the most effective hyperparameter configurations.
The fitness function is a weighted sum of these metrics, with the Dice Coefficient and Jaccard Index being prioritized due to their relevance to segmentation quality. The function is defined as in Eq. (2):

\(Fitness={w}_{1}\cdot Accuracy+{w}_{2}\cdot Dice+{w}_{3}\cdot Jaccard\)

(2)
where \({w}_{1}\), \({w}_{2}\)​, and \({w}_{3}\)​ are the weights assigned to each metric reflect their importance in the optimization process. These weights are selected based on the model's specific goals (e.g., maximizing segmentation accuracy).
Algorithm 1 employs ACO to fine-tune hyperparameters for a given model. The algorithm begins with initial hyperparameters and pheromone levels, deploying several artificial ants to explore the hyperparameter space. Each ant selects hyperparameters based on the probabilistic influence of pheromone trails, evaluates the selected hyperparameters' performance to obtain a fitness score, and updates a fitness record accordingly. Pheromones are then updated to reflect successful hyperparameter paths, with evaporation to discourage convergence on local optima and reinforcement to focus the search on promising regions.
The notation used in Algorithm 1 is summarized in Table 3.
The pheromone update rule is crucial for guiding the search of the ants toward promising areas of the solution space. It is given in Eq. (3):

\({\tau }_{ij}\left(t+1\right)=\left(1-\rho \right)\cdot {\tau }_{ij}\left(t\right)+{\Delta \tau }_{ij}\left(t\right)\)

(3)
where:
\({\tau }_{ij}\) is the pheromone concentration on the edge \(ij\) at time \(t\),
\(\rho\) is the pheromone evaporation coefficient,
\({\Delta \tau }_{ij}(t)\) is the amount of pheromone deposited, which is typically related to the inverse of the model's error or loss.
To adapt the ACO algorithm for optimizing hyperparameters such as learning rate (LR), batch size (BS), and the number of layers (NL), we made several key adjustments. First, each ant in the ACO algorithm was configured to represent a potential set of hyperparameters, with options like LR = {0.001, 0.01, 0.1}, BS = {16, 32, 64}, and NL = {3, 5, 7}. The pheromone levels were initialized uniformly across these values to encourage exploration. During solution construction, ants would probabilistically select LR, BS, and NL values based on current pheromone levels and historical performance. Each set of hyperparameters was then evaluated using a fitness function, which involved training the model and measuring performance metrics such as accuracy and the Dice coefficient. Based on these results, the pheromone levels were updated, with better-performing combinations receiving more pheromone, increasing their likelihood of selection in subsequent iterations. This iterative process continued until the algorithm converged on optimal LR, BS, and NL values, thereby enhancing the model’s overall performance.
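A minimal sketch of this discrete search is given below, assuming the candidate values just listed. Here `train_and_evaluate` is a hypothetical callback that trains the model with one configuration and returns the fitness score of Eq. (2), and the colony settings (number of ants, iterations, evaporation rate) are illustrative.

```python
import random

# Candidate values from the text; train_and_evaluate is a hypothetical
# callback that trains the model once and returns the Eq. (2) fitness.
SPACE = {"lr": [0.001, 0.01, 0.1], "batch": [16, 32, 64], "layers": [3, 5, 7]}

def aco_search(train_and_evaluate, n_ants=10, n_iters=20, rho=0.3, q=1.0):
    # Uniform initial pheromone over every candidate value
    tau = {k: [1.0] * len(v) for k, v in SPACE.items()}
    best, best_fit = None, float("-inf")
    for _ in range(n_iters):
        trails = []
        for _ in range(n_ants):
            # Each ant picks one value per hyperparameter with
            # probability proportional to the pheromone level
            idx = {k: random.choices(range(len(v)), weights=tau[k])[0]
                   for k, v in SPACE.items()}
            params = {k: SPACE[k][i] for k, i in idx.items()}
            fit = train_and_evaluate(params)
            trails.append((idx, fit))
            if fit > best_fit:
                best, best_fit = params, fit
        # Evaporation (Eq. (3)) discourages premature convergence ...
        for k in tau:
            tau[k] = [(1 - rho) * t for t in tau[k]]
        # ... and deposition reinforces well-performing choices
        for idx, fit in trails:
            for k, i in idx.items():
                tau[k][i] += q * fit
    return best, best_fit
```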
Figure 2 depicts the iterative process of selecting initial hyperparameters, evaluating model fitness, updating hyperparameters based on fitness, and modifying pheromone trails until the optimization criteria are met, leading to the finalization of model training.
By incorporating ACO, the training process becomes adaptive, which may improve generalization and, consequently, segmentation of unseen data. The ACO optimization enhances the model and helps us better understand the role each hyperparameter plays in the model's performance, which supports building more robust models.
Data preparation and image processing
Well-prepared and processed imaging data are critical to ensure that the hybrid ResUNet model works properly. This part of the Methodology outlines the data and image preprocessing that ensure the data are well formatted and enhanced for accurate training and analysis. Image preparation in medical imaging comprises several vital steps to standardize the images for model input: image capture, labeling, preprocessing, augmentation, and normalization.
Algorithm 2 presents the sequence of steps to process the raw image data and prepare it for training a machine learning model. The steps include resizing, grayscale conversion, normalization, augmentation, and dataset splitting, standard procedures in machine-learning pipelines for image data.
The notations used in Algorithm 2 are summarized in Table 4.
Normalization is crucial for deep learning models as it ensures that the input features have similar data scales, which helps the model learn more effectively. Eq. (4) for normalization of 8-bit pixel intensities is:

\({I}_{norm}=\frac{I}{255}\)

(4)

where:

\(I\) is the original pixel value (0–255),

\({I}_{norm}\) is the normalized pixel value in the range [0, 1].
In image preparation for network training, all images are first resized to a consistent dimension, for example 128 × 128 pixels, to enable batch processing during training, as shown in Fig. 3. Color images are usually converted to grayscale to minimize input complexity by reducing dimensionality to 2D. Moreover, manual or semi-automatic annotation assigns a tag to each pixel, marking lesion or non-lesion areas. Normalization standardizes pixel values across images, which helps the model train faster and better.
Important augmentation techniques include rotation, scaling, and horizontal flipping. These improve the model's ability to generalize to new data by expanding the dataset with realistic variations. Lastly, the dataset is split into training, validation, and testing sets, usually 70% for training, 15% for validation, and 15% for testing. These preprocessing steps form the core of a high-quality, diverse dataset, which is crucial for achieving high accuracy and robustness in medical image segmentation with the hybrid ResUNet model. Careful data preparation improves the model's performance and generalization across diverse imaging conditions and patient groups.
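The sketch below illustrates these preprocessing steps (resizing, grayscale conversion, Eq. (4) normalization, augmentation, and the 70/15/15 split) with OpenCV and NumPy, the libraries named in the training environment. The rotation and scaling ranges and the flip probability are illustrative assumptions.

```python
import cv2
import numpy as np

def preprocess(paths, size=(128, 128)):
    """Resize, convert to grayscale, and normalize each image (Eq. (4))."""
    images = []
    for p in paths:
        img = cv2.imread(p)                              # BGR uint8
        img = cv2.resize(img, size)
        img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)      # 2D grayscale
        images.append(img.astype(np.float32) / 255.0)    # I_norm = I / 255
    return np.expand_dims(np.stack(images), -1)          # (N, 128, 128, 1)

def augment(img, rng):
    """Random horizontal flip, rotation, and scaling of a 2D image."""
    if rng.random() < 0.5:
        img = np.fliplr(img)
    h, w = img.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2),
                                rng.uniform(-20, 20),   # rotation angle
                                rng.uniform(0.9, 1.1))  # scale factor
    return cv2.warpAffine(img, m, (w, h))

def split_dataset(x, y, rng, frac=(0.70, 0.15, 0.15)):
    """Shuffle, then split into training / validation / test subsets."""
    idx = rng.permutation(len(x))
    a = int(frac[0] * len(x))
    b = a + int(frac[1] * len(x))
    return ((x[idx[:a]], y[idx[:a]]),      # 70% training
            (x[idx[a:b]], y[idx[a:b]]),    # 15% validation
            (x[idx[b:]], y[idx[b:]]))      # 15% test
```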
Model training and parameter tuning
Training a hybrid ResUNet model and fine-tuning its parameters are key to achieving a model that performs well for skin lesion segmentation. Here, the approach to model training is presented, with the optimization process, parameter tuning techniques, and performance metrics that help to lead the training.
Like any other deep learning model, the hybrid ResUNet is trained by feeding it pre-processed images and their corresponding labels and iteratively adjusting its weights to minimize the loss function. This procedure must be meticulous to ensure correctness and generalizability.
Algorithm 3 includes the process of backpropagation for weight updates, validation set performance evaluation, hyperparameter adjustment with potential decay over epochs, and early stopping based on performance criteria.
The notation used in Algorithm 3 is summarized in Table 5.
Dice coefficient loss is typically used as a loss function for segmentation tasks and is very efficient, especially for imbalanced classes. The binary Dice loss is defined as in Eq. (5):

\({L}_{Dice}=1-\frac{2\left|X\cap Y\right|+\epsilon }{\left|X\right|+\left|Y\right|+\epsilon }\)

(5)

where:

\(X\) is the predicted set of pixels,

\(Y\) is the ground truth,

\(\cap\) is the intersection of two sets,

\(\epsilon\) is a small constant to prevent division by zero.
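A common TensorFlow implementation of this loss, consistent with Eq. (5), is sketched below; setting the smoothing constant \(\epsilon\) to \(10^{-6}\) is our assumption.

```python
import tensorflow as tf

def dice_loss(y_true, y_pred, eps=1e-6):
    """Soft Dice loss, Eq. (5): 1 - (2|X ∩ Y| + eps) / (|X| + |Y| + eps)."""
    y_true = tf.cast(y_true, tf.float32)
    # Sum over the spatial and channel axes, keeping the batch axis
    intersection = tf.reduce_sum(y_true * y_pred, axis=[1, 2, 3])
    totals = (tf.reduce_sum(y_true, axis=[1, 2, 3]) +
              tf.reduce_sum(y_pred, axis=[1, 2, 3]))
    dice = (2.0 * intersection + eps) / (totals + eps)
    return 1.0 - tf.reduce_mean(dice)
```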
Figure 4 provides a visual guide through the sequential steps of a machine learning training cycle, from model initialization and hyperparameter setting to the iterative process of training, weight updating, and validation. It highlights decision points for hyperparameter adjustments and the criteria for saving model checkpoints based on performance improvements.
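In Keras, this training cycle maps naturally onto callbacks: checkpointing on improvement, learning-rate decay on plateau, and early stopping. A minimal sketch under that assumption is given below; the patience values and optimizer settings are illustrative, `build_hybrid_resunet` and `dice_loss` refer to the earlier sketches, and `x_train`/`y_train` and `x_val`/`y_val` are assumed to come from the split described above.

```python
from tensorflow.keras import callbacks, optimizers

model = build_hybrid_resunet()
model.compile(optimizer=optimizers.Adam(learning_rate=1e-3),
              loss=dice_loss, metrics=["accuracy"])

cbs = [
    # Save a checkpoint only when validation loss improves
    callbacks.ModelCheckpoint("best_model.keras", monitor="val_loss",
                              save_best_only=True),
    # Decay the learning rate when validation loss plateaus
    callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=3),
    # Stop training once validation loss stops improving
    callbacks.EarlyStopping(monitor="val_loss", patience=6,
                            restore_best_weights=True),
]

history = model.fit(x_train, y_train,
                    validation_data=(x_val, y_val),
                    epochs=20, batch_size=32, callbacks=cbs)
```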
Dataset description and preparation
The realization and assessment of the hybrid ResUNet model involve establishing a thorough experimental setup. This begins with a clear explanation and preparation of the dataset used in the study. This portion illuminates the features of the dataset, the preprocessing steps used to make the data trainable, and the reasons for selecting these steps.
Dataset description
The ISIC 2018 dataset is a large and diverse collection of dermoscopic images compiled by the International Skin Imaging Collaboration (ISIC). It is widely recognized as a benchmark dataset for developing and evaluating algorithms for skin lesion analysis, particularly for tasks such as lesion segmentation, classification, and detection.
The ISIC 2018 dataset comprises 10,015 high-resolution dermoscopic images of skin lesions, including common types such as melanoma, nevus, and seborrheic keratosis. Each image is accompanied by an expert-annotated mask delineating the lesion boundaries, providing ground truth for segmentation tasks. The dataset includes images from a diverse range of patients, encompassing various skin types, lesion sizes, and appearances, which is essential for training robust models that generalize well to real-world scenarios. The images are captured under standardized conditions to ensure consistent quality and are reviewed by dermatology experts, ensuring the accuracy and reliability of the annotations. The ISIC 2018 dataset is particularly relevant to our study as it provides a comprehensive and challenging benchmark for evaluating the performance of the proposed Hybrid ResUNet model in skin lesion segmentation: its large size and diversity allow rigorous testing of the model's ability to generalize across different lesion types and patient demographics. We trained and tested the hybrid ResUNet model on this dataset [32], one of the most widely used in dermatology image analysis for diagnosing and segmenting skin lesions. The images are strategically divided into three subsets: 70% are allocated to the training set, providing a robust and diverse dataset for the model to learn from; 15% are designated for the validation set, allowing fine-tuning of hyperparameters and assessment of model performance during training; and the remaining 15% are reserved for the test set, ensuring an unbiased evaluation of the model's ability to generalize to new, unseen data. The high-resolution RGB images offer extensive textural and color information, essential for accurate lesion segmentation.
Data preparation
The preparation of the dataset involves several key steps to ensure that the images are suitable for processing by the deep learning model. Table 6 outlines these steps:
Figure 5 details the steps for processing image data, from initial loading to saving the preprocessed dataset, including resizing, grayscale conversion, normalization, data augmentation, and splitting into training, validation, and test sets.
Resizing the original images, which come in different dimensions, to a common size is required so the neural network can process them in batches. Although color information is discarded, grayscale conversion simplifies the model without significant performance loss, especially when texture and shape matter more than color. These steps are conventional practices aimed at improving model performance and robustness, particularly in medical images, which are known to be affected by factors such as lighting and camera setup.
This careful preparation guarantees that the dataset is correctly cleaned and arranged so the hybrid ResUNet model can predict with high precision and accuracy.
Training environment and tools
An optimized training infrastructure is key to effectively training the ResUNet model. This part concerns the hardware and software applications involved in the training processes, which are chosen to maximize efficiency and model performance.
In the study, the hardware and software configurations are tailored to meet the demands of deep learning tasks, focusing on efficient data processing and model development. The chosen hardware includes NVIDIA Tesla V100 GPUs, renowned for their formidable computing performance, which is crucial for the intensive computations required to train deep learning models. Complementing the GPUs, Intel Xeon processors handle preprocessing and other tasks that are less intensive on the GPU, such as memory management. Additionally, the systems have at least 64 GB of RAM to facilitate smooth data manipulation and accommodate large datasets without constant data transfer to and from disk storage.
On the software side, TensorFlow, augmented by its high-level API, Keras, is the primary framework used due to its comprehensive library, user-friendly interface, and flexibility for designing sophisticated models like the hybrid ResUNet. The study employs popular Python libraries for data preparation and augmentation, with NumPy used for numerical data manipulation and OpenCV for advanced image processing tasks. TensorFlow's ImageDataGenerator is used explicitly for real-time data augmentation, which is key to enhancing the model's robustness by simulating various imaging conditions. Version control is managed through Git, which is instrumental in tracking code changes and facilitating collaborative model development, ensuring the entire development process is well-documented and the experiments are reproducible.
Evaluation metrics
Several evaluation metrics are used to assess the effectiveness of the hybrid ResUNet model in segmenting skin lesions. These metrics provide insights into different aspects of model performance, including accuracy, precision, and the ability to handle class imbalances.
Accuracy
Accuracy measures the overall correctness of the model in classifying pixels. It is calculated as the ratio of correctly predicted pixels to the total pixels.
Sensitivity (Recall)
Sensitivity indicates the model's ability to identify positive samples (lesion pixels) correctly.
Specificity
Specificity measures the model's ability to correctly identify negatives (non-lesion pixels).
Dice Coefficient (Dice Similarity Coefficient—DSC)
The Dice Coefficient is a statistical tool that measures the similarity between the predicted segmentation and the ground truth. It is beneficial for data with imbalanced classes:

\(DSC=\frac{2\left|X\cap Y\right|}{\left|X\right|+\left|Y\right|}\)

where \(X\) is the predicted set of pixels and \(Y\) is the actual set of pixels for the lesion.
Jaccard Index (Intersection over Union—IoU)
The Jaccard Index measures the overlap between the predicted segmentation and the actual data:

\(IoU=\frac{\left|X\cap Y\right|}{\left|X\cup Y\right|}\)

Like the Dice coefficient, it shows how well the segmented area matches the ground truth; \(\cup\) represents the union of two sets.
These metrics collectively provide a comprehensive understanding of the model's performance, highlighting its strengths and areas for improvement in accurately segmenting skin lesions. Using multiple metrics ensures a balanced evaluation, considering both the model's precision in identifying lesions and its ability to generalize across various types of skin images.
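For binary masks, all of these metrics reduce to counts of true/false positives and negatives. A NumPy sketch under that assumption follows; the 0.5 threshold is illustrative.

```python
import numpy as np

def segmentation_metrics(pred, truth, thresh=0.5):
    """Pixel-level metrics from a predicted probability map and a
    binary ground-truth mask."""
    p = pred >= thresh
    t = truth.astype(bool)
    tp = np.sum(p & t)
    tn = np.sum(~p & ~t)
    fp = np.sum(p & ~t)
    fn = np.sum(~p & t)
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),        # recall over lesion pixels
        "specificity": tn / (tn + fp),        # recall over background
        "dice":        2 * tp / (2 * tp + fp + fn),
        "jaccard":     tp / (tp + fp + fn),   # intersection over union
    }
```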
Polygon area metric
The Polygon Area Metric (PAM) is computed by plotting key evaluation metrics on a radar chart and calculating the area of the polygon formed [33]. The metrics might include sensitivity, specificity, accuracy, precision, F1-score, etc. The formula for calculating the area of the polygon (PAM) is:

\(PAM=\frac{1}{2}\mathit{sin}\left(\frac{2\pi }{n}\right)\sum_{i=1}^{n}{M}_{i}{M}_{i+1},\quad {M}_{n+1}={M}_{1}\)

where:

\(n\) is the number of metrics used,

\({M}_{i}\) and \({M}_{i+1}\) are the values of the consecutive metrics plotted on the radar chart,

the angle between consecutive metrics is assumed to be \(\frac{2\pi }{n}\).
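Under these assumptions (equal angular spacing and wrap-around \({M}_{n+1}={M}_{1}\)), the polygon area can be computed as in the following sketch; the example metric values are illustrative only.

```python
import math

def polygon_area_metric(metrics):
    """Area of the polygon formed by n metric values placed on a radar
    chart at equal angular spacing 2*pi/n, wrapping M_{n+1} = M_1."""
    n = len(metrics)
    return 0.5 * math.sin(2 * math.pi / n) * sum(
        metrics[i] * metrics[(i + 1) % n] for i in range(n))

# Illustrative values: sensitivity, specificity, accuracy, precision, F1
print(polygon_area_metric([0.94, 0.96, 0.958, 0.93, 0.93]))
```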
Results and discussion
This section delves into the outcomes of applying the hybrid ResUNet model to skin lesion segmentation. We unravel the model’s performance here, highlighting its success in accurately delineating lesion boundaries against the benchmark of current state-of-the-art methods.
Performance of the proposed model
The hybrid ResUNet model was evaluated on the ISIC 2018 dataset, focusing on key metrics to assess its efficacy in skin lesion segmentation. Below, we detail the model's performance across various metrics and scenarios.
Table 7 provides a comprehensive overview of the model’s performance across different statistical metrics, demonstrating its high accuracy and effectiveness in segmentation tasks.
Figure 6 illustrates the ROC curve, highlighting the model's discrimination capability between the lesion and non-lesion classes.
Figure 7 displays the model's training and validation accuracy and loss across 20 epochs. The accuracy curves show convergence between training and validation performance over time, and the error rate decreases with subsequent epochs, a key indicator of learning efficacy and model fit. Over the 20 epochs, the training loss declines steeply from 0.42 to 0.01, indicating effective learning, while the validation loss decreases consistently from 0.35 to 0.12, reflecting good generalization without overfitting.
Table 8 breaks down the model's sensitivity and specificity by lesion type, showcasing its robustness across various skin lesions.
Figure 8 illustrates three confusion matrices representing the Hybrid ResUNet model's performance in classifying skin lesions: Melanoma, Nevus, and Seborrheic. Each matrix provides the counts of true positives, true negatives, false positives, and false negatives, reflecting the model's sensitivity and specificity in detecting each lesion type.
Figure 9 provides visual examples of the segmentation results, comparing model predictions with the ground truth annotations.
Table 9 shows the model's performance at different image resolutions, illustrating the impact of resolution on accuracy and the Dice coefficient.
Figure 10 presents heatmaps of feature importance, providing insights into which regions of the images are most critical for the model’s predictions.
Table 10 evaluates the impact of different data augmentation techniques on the model's accuracy and Dice coefficient, highlighting how each method improves model robustness.
Table 11 details the computational aspects of model training and inference, including training time, inference speed, and memory usage, demonstrating the model's efficiency and feasibility for practical applications.
These results collectively validate the effectiveness of the hybrid ResUNet model in skin lesion segmentation, with the figures and tables providing clear, cross-referenced evidence of its superior performance and operational efficiency.
We conducted formal statistical significance tests to validate the robustness of the observed improvements in the proposed Hybrid ResUNet model. Specifically, paired t-tests were applied to compare our model's performance against baseline and state-of-the-art models across key metrics, including accuracy, Dice coefficient, and Jaccard index. Additionally, we computed 95% confidence intervals (CIs) for these metrics to assess the precision and reliability of our results.
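A sketch of how such a test might be computed with SciPy is given below; the idea of pairing per-fold (or per-run) scores of the two models is our assumption about the evaluation protocol, and all names are illustrative.

```python
import numpy as np
from scipy import stats

def compare_models(scores_ours, scores_base, alpha=0.05):
    """Paired t-test between matched evaluation scores (e.g. per-fold
    Dice coefficients) plus a 95% CI on our model's mean score."""
    ours = np.asarray(scores_ours, dtype=float)
    base = np.asarray(scores_base, dtype=float)
    t_stat, p_value = stats.ttest_rel(ours, base)   # paired t-test
    # 95% confidence interval for the mean of our model's metric
    ci = stats.t.interval(1 - alpha, len(ours) - 1,
                          loc=ours.mean(), scale=stats.sem(ours))
    return t_stat, p_value, ci
```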
Table 12 presents the statistical significance test results comparing the proposed Hybrid ResUNet model with the baseline model, reporting mean values of key performance metrics, 95% confidence intervals (CIs), and p-values from paired t-tests. A p-value below 0.05 indicates statistical significance. The p-values obtained for all metrics are below 0.01, indicating that the differences in performance between the Hybrid ResUNet and the baseline model are statistically significant. The 95% confidence intervals for the Hybrid ResUNet consistently show higher performance metrics, further confirming the robustness and reliability of the improvements.
Comparison with existing state-of-the-art methods
This section compares the hybrid ResUNet model's performance with other state-of-the-art (SOTA) methods in skin lesion segmentation. This comparative analysis highlights the proposed model's advancements and efficiency over existing techniques.
Table 13 provides a detailed comparison of the proposed hybrid ResUNet model with various SOTA methods utilized in skin lesion segmentation. Each method is evaluated based on five critical metrics: accuracy, sensitivity, specificity, Dice coefficient, and Jaccard index.
The hybrid ResUNet model shows the highest accuracy (95.8%) among the compared models, demonstrating its ability to identify lesion areas accurately in a variety of complex images. Sensitivity and specificity are particularly important in medical imaging, where missing a positive case (low sensitivity) or falsely flagging a negative one (low specificity) can have serious consequences; the hybrid ResUNet achieves balanced performance on both, allowing reliable lesion detection with few false positives. The Dice coefficient and Jaccard index measure the overlap between the predicted segmentation and the ground truth, with higher values indicating better segmentation quality. The hybrid ResUNet scores higher than the other models on both, indicating that it captures true lesion boundaries better, even in difficult cases.
Comparative analysis shows that the hybrid ResUNet outperforms traditional architectures such as U-Net and SegNet and holds advantages over recent developments such as DeepLabV3+ and transfer-learning models built on pre-trained networks such as VGG19 and ResNet-50. The residual connections of ResNet, combined with the U-Net architecture, increase the hybrid ResUNet's capacity to capture spatial hierarchies in image data, resulting in more accurate segmentation. This balance of depth (via ResNet) and spatial resolution (via U-Net) is a decisive factor in its high performance.
This comparison highlights the potential of the hybrid ResUNet as an improved tool for medical imaging, especially for the accurate and reliable segmentation of skin lesions, which is important for early diagnosis and treatment planning in dermatology.
Conclusion and future work
This study presents a novel application of the Hybrid ResUNet model for skin lesion classification, incorporating the ACO method for hyperparameter tuning. Results show that the model performs better than traditional ResNet and U-Net architectures, demonstrating the potential of combining deep learning algorithms with intelligent optimization methods and providing promising directions for automated image analysis in medical applications. Future research will focus on several key areas to further enhance the performance and application of the Hybrid ResUNet model. Efforts will be made to integrate more imaging data, which may improve the detection of skin lesions and the accuracy of examination. Exploring other optimization algorithms alongside ACO, such as genetic algorithms or particle swarm optimization, could further improve hyperparameter selection and model robustness. Another direction is to deliver practical, user-friendly benefits to healthcare professionals: the model should be deployed and tested in real-world clinical settings, and extending it to other medical imaging tasks, e.g., segmentation of cancer or pathology images, could expand its impact. Continued improvement of the model's computational efficiency will also be crucial, particularly for enabling its use on mobile devices and in regions with limited computing resources. Finally, investigating the interpretability of the model's decision-making process could enhance trust in and transparency of the AI system among its end users, paving the way for adoption in sensitive medical fields.
Availability of data and materials
The data that support the findings of this study are openly available in an online repository at: https://challenge.isic-archive.com/data/#2018.
References
Concepcion J, et al. Trends of cancer screenings, diagnoses, and mortalities during the COVID-19 pandemic: implications and future recommendations. Am Surgeon™. 2023;89(6):2276–83.
Abdalla BMZ, Abdalla CMZ. Epidemiology of skin cancer. In: Oncodermatology: an evidence-based, multidisciplinary approach to best practices. Springer; 2023. p. 29–35. https://www.mdpi.com/1999-4923/16/2/223.
Freeman K, et al. Algorithm based smartphone apps to assess risk of skin cancer in adults: systematic review of diagnostic accuracy studies. BMJ. 2020;368:m127.
McFerran E, Donaldson S, Dolan O, Lawler M. Skin in the game: the cost consequences of skin cancer diagnosis, treatment and care in Northern Ireland. J Cancer Policy. 2024;39:100468.
Yuan J, Li X, Yu S. Global, regional, and national incidence trend analysis of malignant skin melanoma between 1990 and 2019, and projections until 2034. Cancer Control. 2024;31:10732748241227340.
Reyes-Marcelino G, et al. School-based interventions to improve sun-safe knowledge, attitudes and behaviors in childhood and adolescence: a systematic review. Prev Med. 2021;146: 106459.
Vizdoaga V, Lozan O, Bețiu M. Causes of late detection of skin cancer. Norwegian J Dev Int Sci. 2021(74–1):19–25. https://cyberleninka.ru/article/n/causes-of-late-detection-of-skin-cancer/viewer.
Malvehy J, Pellacani G. Dermoscopy, confocal microscopy and other non-invasive tools for the diagnosis of non-melanoma skin cancers and other skin conditions. Acta Dermato-Venereologica. 2017;97:22–30.
Suleman M, et al. Smart MobiNet: a deep learning approach for accurate skin cancer diagnosis. https://www.techscience.com/cmc/v77n3/55032.
Trettel A, Eissing L, Augustin M. Telemedicine in dermatology: findings and experiences worldwide–a systematic literature review. J Eur Acad Dermatol Venereol. 2018;32(2):215–24.
Ilișanu M-A, Moldoveanu F, Moldoveanu A. Multispectral imaging for skin diseases assessment—state of the art and perspectives. Sensors. 2023;23(8): 3888.
Fink C, Haenssle H. Non-invasive tools for the diagnosis of cutaneous melanoma. Skin Res Technol. 2017;23(3):261–71.
Soare C, Cozma EC, Celarel AM, Rosca AM, Lupu M, Voiculescu VM. Digitally enhanced methods for the diagnosis and monitoring of treatment responses in actinic keratoses: a new avenue in personalized skin care. Cancers. 2024;16(3): 484.
Lallas A, et al. Accuracy of dermoscopic criteria for the diagnosis of melanoma in situ. JAMA Dermatology. 2018;154(4):414–9.
Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation. In: Medical image computing and computer-assisted intervention–MICCAI 2015: 18th international conference, Munich, Germany, October 5–9, 2015, proceedings, part III 18. Springer; 2015. p. 234–241. https://link.springer.com/chapter/10.1007/978-3-319-24574-4_28.
Cordes D, Yang Z, Zhuang X, Sreenivasan K, Mishra V, Hua LH. A new algebraic method for quantitative proton density mapping using multi-channel coil data. Med Image Anal. 2017;40:154–71.
Winkler JK, et al. Assessment of diagnostic performance of dermatologists cooperating with a convolutional neural network in a prospective clinical study: human with machine. JAMA Dermatology. 2023;159(6):621–7.
Kuo KM, Talley PC, Chang C-S. The accuracy of artificial intelligence used for non-melanoma skin cancer diagnoses: a meta-analysis. BMC Med Inf Decis Mak. 2023;23(1):138.
Rana M, Bhushan M. Machine learning and deep learning approach for medical image analysis: diagnosis to detection. Multimedia Tools Appl. 2023;82(17):26731–69.
Iqbal J. Dermatologist-level classification of skin cancer with deep neural networks. 2021.
Jones C, Castro DC, De Sousa Ribeiro F, Oktay O, McCradden M, Glocker B. A causal perspective on dataset bias in machine learning for medical imaging. Nat Mach Intell. 2024:1–9. https://arxiv.org/abs/2307.16526.
Du T, et al. Adaptive convolutional neural networks for accelerating magnetic resonance imaging via k-space data interpolation. Med Image Anal. 2021;72: 102098.
De Matos J, Ataky STM, de Souza Britto A Jr, Soares de Oliveira LE, Lameiras Koerich A. Machine learning methods for histopathological image analysis: a review. Electronics. 2021;10(5):562.
Strzelecki M, Kociołek M, Strąkowska M, Kozłowski M, Grzybowski A, Szczypiński PM. Artificial intelligence in the detection of skin cancer: state of the art. Clin Dermatol. 2024;42:280–95.
Hafhouf B, Zitouni A, Megherbi AC, Sbaa S. A modified U-Net for skin lesion segmentation, in 2020 1st International Conference on Communications, Control Systems and Signal Processing (CCSSP). El-Oued city: IEEE; 2020. pp. 225–228. https://www.univ-eloued.dz/CCSSP2020/.
Azad R, Asadi-Aghbolaghi M, Fathy M, Escalera S. Attention deeplabv3+: Multi-level context attention mechanism for skin lesion segmentation, in European conference on computer vision, 2020: Springer, pp. 251–266. https://dl.acm.org/doi/10.1007/978-3-030-66415-2_16.
Han Q, et al. HWA-SegNet: multi-channel skin lesion image segmentation network with hierarchical analysis and weight adjustment. Comput Biol Med. 2023;152: 106343.
Jasil SG, Ulagamuthalvi V. Deep learning architecture using transfer learning for classification of skin lesions. J Ambient Intell Humaniz Comput. 2021:1–8. https://link.springer.com/article/10.1007/s12652-021-03062-7.
Islam W, Jones M, Faiz R, Sadeghipour N, Qiu Y, Zheng B. Improving performance of breast lesion classification using a ResNet50 model optimized with a novel attention mechanism. Tomography. 2022;8(5):2411–25.
Abunadi I, Senan EM. Deep learning and machine learning techniques of diagnosis dermoscopy images for early detection of skin diseases. Electronics. 2021;10(24): 3158.
Fang W, Yu Z, Chen Y, Huang T, Masquelier T, Tian Y. Deep residual learning in spiking neural networks. Adv Neural Inf Process Syst. 2021;34:21056–69.
Codella N et al. Skin lesion analysis toward melanoma detection 2018: a challenge hosted by the international skin imaging collaboration (isic). arXiv preprint arXiv:1902.03368, 2019.
Aydemir O. A new performance evaluation metric for classifiers: polygon area metric. J Classif. 2021;38:16–26.
Acknowledgements
The authors extend their appreciation to Taif University, Saudi Arabia, for supporting this work through project number (TU-DSPP-2024-139).
Conflict of interest
The authors declare no conflict of interest regarding the presented work and results.
Funding
This research was funded by Taif University, Taif, Saudi Arabia, Project No. (TU-DSPP-2024-139).
Author information
Contributions
The authors confirm their contributions to the paper as follows: conceptualization, N.A. and A.I.; investigation, F.A.A. and Q.H.N.; writing, original draft preparation, N.S. and A.I.; experiments, K.D.A.; review and editing, Q.H.N. and K.D.A.; supervision, A.I.; funding, F.A.A. All authors have read and agreed to the published version of the manuscript. All authors reviewed the results and approved the final version of the manuscript.
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Sarwar, N., Irshad, A., Naith, Q.H. et al. Skin lesion segmentation using deep learning algorithm with ant colony optimization. BMC Med Inform Decis Mak 24, 265 (2024). https://doi.org/10.1186/s12911-024-02686-x