Work title | Research task | Model | Advantages and disadvantages |
---|---|---|---|
[15] Efficient deep features selections and classification for flower species recognition | Flower species recognition | AlexNet and VGG16 | The research explored the similarity and intra-class variability among flower classes. Automatic flower recognition and classification were effectively realized by using pre-trained neural networks for feature extraction, with good performance on the Flower17 and Flower102 datasets. However, the study compared only two pre-trained networks, AlexNet and VGG16, and did not fully explore the feature complementarity between different pre-trained networks, leaving the set of experimental models incomplete |
[16] Transfer learning with pre-trained deep convolutional neural networks for the automatic assessment of liver steatosis in ultrasound images | Automatic assessment of liver steatosis in ultrasound images | Inception-v3 and VGG-16 | The study constructed a dataset of 629 liver ultrasound images covering two classes, normal and liver steatosis, evaluated two pre-trained convolutional neural network models with fine-tuning, and obtained satisfactory results. However, the set of experimental models remains incomplete |
[17] Few-shot hypercolumn-based mitochondria segmentation in cardiac and outer hair cells in focused ion beam-scanning electron microscopy (FIB-SEM) data | Few-shot hypercolumn-based mitochondria segmentation | VGG-16 | This research uses the convolutional features of a pre-trained deep multi-layer convolutional neural network (VGG-16) to realize a few-shot automatic segmentation method for mitochondria in electron microscopy images. The proposed method is shown to provide competitive performance even with little training data. However, only VGG-16 was used as the feature extraction model, so the set of experimental models is not comprehensive |
[18] ResFeats: Residual network based features for underwater image classification | Underwater image classification | ResNet-50 | The study explored how to use pre-trained deep networks for classification and transfer learning on underwater images, and convincingly verified that combining residual features (ResFeats) from different layers can generate a powerful image descriptor. However, the complementarity of features across the levels of other pre-trained neural network models remains to be explored |
[19] Can pre-trained convolutional neural networks be directly used as a feature extractor for video-based neonatal sleep and wake classification | Video-based neonatal sleep and wake classification | VGG16, VGG19, InceptionV3, GoogLeNet, ResNet and AlexNet | The research also uses pre-trained convolutional neural networks (CNNs) as feature extractors and compares the classification performance of multiple pre-trained models. However, using AlexNet on Fluke (RGB) video frames achieved accuracy, sensitivity, and specificity of only 65.3%, 69.8%, and 61.0%, respectively. There is still considerable room for improvement, suggesting that a dedicated neural network trained on neonatal data is needed |
[20] CovH2SD: A COVID-19 detection approach based on Harris Hawks optimization and stacked deep learning | COVID-19 detection | ResNet50, ResNet101, VGG16, VGG19, Xception, MobileNetV1, MobileNetV2, DenseNet121 and DenseNet169 | The research focused on COVID-19 detection, using deep learning and pre-trained models to extract features from CT images. The experimental process was thorough and produced good results. However, subsequent research could usefully extend the experiments to other types of medical images |
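The works above share one core recipe: freeze a pre-trained network, treat its activations as fixed features, and train a lightweight classifier on top. The sketch below illustrates that recipe with NumPy only; the frozen random projection is an illustrative stand-in for a real pre-trained backbone such as VGG16 or ResNet-50, and the synthetic two-class data and nearest-centroid classifier are likewise assumptions for demonstration, not any surveyed paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "feature extractor": a random projection plus ReLU, standing in
# for a pre-trained CNN backbone (e.g. VGG16) whose weights are not updated.
W = rng.normal(size=(64 * 64, 128))

def extract_features(images):
    """Flatten 64x64 images and project them with the frozen weights."""
    flat = images.reshape(len(images), -1)
    return np.maximum(flat @ W, 0.0)  # ReLU, as in CNN feature maps

# Synthetic two-class data: "bright" vs "dark" images.
bright = rng.uniform(0.6, 1.0, size=(20, 64, 64))
dark = rng.uniform(0.0, 0.4, size=(20, 64, 64))
X = extract_features(np.concatenate([bright, dark]))
y = np.array([0] * 20 + [1] * 20)

# Lightweight classifier trained on the frozen features: nearest centroid.
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(images):
    feats = extract_features(images)
    dists = np.linalg.norm(feats[:, None, :] - centroids[None], axis=2)
    return dists.argmin(axis=1)

# Held-out samples: five bright, then five dark.
test = np.concatenate([rng.uniform(0.6, 1.0, size=(5, 64, 64)),
                       rng.uniform(0.0, 0.4, size=(5, 64, 64))])
print(predict(test))  # expect class 0 for the bright images, 1 for the dark
```

Only the classifier head is fitted; the extractor stays fixed, which is what lets the surveyed methods work with small datasets such as the 629-image liver set in [16] or the few-shot setting in [17].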