ORIGINAL ARTICLE
Year: 2022 | Volume: 47 | Issue: 4 | Page: 315-321
Magnetic resonance imaging image-based segmentation of brain tumor using the modified transfer learning method
Sandeep Singh1, Benoy Kumar Singh2, Anuj Kumar3
1 Department of Physics, GLA University, Mathura, Uttar Pradesh; Department of Radiation Oncology, Lady Hardinge Medical College and Associated Hospitals, New Delhi, India
2 Department of Physics, GLA University, Mathura, Uttar Pradesh, India
3 Department of Radiotherapy, SN Medical College, Agra, Uttar Pradesh, India
Date of Submission: 14-Jun-2022
Date of Decision: 07-Oct-2022
Date of Acceptance: 09-Oct-2022
Date of Web Publication: 10-Jan-2023
Correspondence Address: Mr. Sandeep Singh, Department of Radiation Oncology, Lady Hardinge Medical College, New Delhi - 110 001, India
Source of Support: None, Conflict of Interest: None
DOI: 10.4103/jmp.jmp_52_22
Abstract
Purpose: The goal of this study was to improve overall brain tumor segmentation (BraTS) accuracy. A form of convolutional neural network called three-dimensional (3D) U-Net was used to segment different tumor regions on 3D brain magnetic resonance imaging images using a transfer learning technique. Materials and Methods: The dataset was obtained from the multimodal BraTS challenges. The total number of studies was 2240, drawn from the BraTS 2018, BraTS 2019, BraTS 2020, and BraTS 2021 challenges; each study had five series: T1, contrast-enhanced T1, FLAIR, T2, and a segmented mask file (seg), all in Neuroimaging Informatics Technology Initiative (NIfTI) format. The proposed method employs a 3D U-Net that was trained separately on each of the four datasets by transferring weights across them. Results: The overall training accuracy, validation accuracy, mean dice coefficient, and mean intersection over union achieved were 99.35%, 98.93%, 0.9875, and 0.8738, respectively. Conclusion: The proposed method for tumor segmentation outperforms the existing methods.
Keywords: Convolutional neural networks, deep learning, transfer learning, three-dimensional image processing
How to cite this article: Singh S, Singh BK, Kumar A. Magnetic resonance imaging image-based segmentation of brain tumor using the modified transfer learning method. J Med Phys 2022;47:315-21
How to cite this URL: Singh S, Singh BK, Kumar A. Magnetic resonance imaging image-based segmentation of brain tumor using the modified transfer learning method. J Med Phys [serial online] 2022 [cited 2023 Mar 24];47:315-21. Available from: https://www.jmp.org.in/text.asp?2022/47/4/315/367424
Introduction
A radiologist[1] interprets the huge quantity of images produced each day for diagnosis through visual processing, which takes time and is highly susceptible to human error. As a result, automatic segmentation using artificial intelligence and deep learning is now used to reduce this inaccuracy. Segmentation[2] is a common method for medical image analysis, i.e., it partitions an image into regions of comparable features such as gray level, contrast, and pixel value. In the context of medical images, the major goals of segmentation are to:
- Separate different parts of the region of interest (ROI), i.e., tumors and normal structures
- Investigate the anatomical structures
- Measure the volume of the ROI after any treatment, such as chemotherapy or radiation therapy.
However, segmentation using only one type of image is a difficult task, since no single imaging modality can offer all of the information.[3] Combining information from multimodality imaging can therefore improve segmentation accuracy and reliability.[4],[5] Deep learning and artificial intelligence show great potential for the identification and segmentation of medical images using magnetic resonance imaging (MRI) because they simplify the process of automating imaging-based diagnosis. Using the brain tumor segmentation (BraTS)[6] dataset, Zhou et al.[7] employed four modalities of MRI images (T1-weighted, T2-weighted, contrast-enhanced [CE] T1-weighted, and fluid attenuation inversion recovery [FLAIR]) for multi-modal image segmentation; to achieve tumor segmentation, they deployed three convolutional neural networks (CNNs) in a cascade. Zeineldin et al.[8] employed DeepSeg, a fully automated CNN approach based on an ensemble of two encoder-decoder methods; to improve detection of the whole tumor and its subregions, they used all accessible MRI modalities (T1, T2, T1ce, and T2-FLAIR), together with region-based training and extensive data augmentation. Roy Choudhury et al.[9] used DeepLabv3+ with Xception[10] as the backbone network for BraTS in multimodal MRI images. They used T1-weighted, T2-weighted, CE-T1-weighted, and FLAIR images to segment the whole tumor, tumor core, and enhancing tumor (ET), similar to Wang et al.,[11] and employed two-dimensional orthogonal slices derived from the three-dimensional (3D) volume as input instead of the entire volume. In this study, using a publicly available dataset from the multimodal BraTS challenge, we developed a new approach for multi-modal segmentation of brain tumors. We use several BraTS datasets to improve the neural network's accuracy by transferring the trained weights across them.
The lack of sufficient datasets and resources to train a model is a fundamental issue in deep learning and machine learning. The model's performance suffers from the lack of data, and training a machine learning model requires either a long time or a high-end machine with plenty of graphics memory. Transfer learning is therefore quite beneficial in this situation. Transfer learning, also known as knowledge transfer,[12] is a strategy for reusing a previously learned model to address a new problem. This saves time by not having to retrain the model for similar tasks, and a dataset from another domain can also be used to train the model, which can then be reused for our purposes.
Materials and Methods
High-grade gliomas (HGG), low-grade gliomas (LGG), and other forms of brain cancers have been observed in humans.[13] These tumors come in a variety of shapes and sizes, and their characteristics vary from country to country.[14] As a result, training a model only on a dataset from a single region can result in poor overall performance. The major goal of this study was to improve the overall performance of the image segmentation model on various datasets from various regions and for various types of tumors without relying on human assistance. This form of segmentation has been described as federated segmentation.[15] A model that has been trained on a variety of datasets and tumor types can outperform a model that has only been trained on a single dataset. Our database of 2240 studies contained tumors of both the HGG and LGG types.
Data collection
A publicly available multi-institutional dataset comprising BraTS 2018,[7],[16],[17] BraTS 2019,[7],[16],[17] BraTS 2020,[7],[16],[17] and BraTS 2021,[7],[16],[18] all with 3D multi-modal images, was used for this brain tumor segmentation task. The dataset was released by the Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, and downloaded from the CBICA image processing portal. It includes four types of MRI images (T1-weighted, T2-weighted, CE-T1-weighted, and FLAIR) and one segmented file, all in NIfTI format. Non-enhancing tumor core (NETC), enhancing tumor core (ETC), and peritumoral edema were the tumor sections labeled on the segmented mask file. There were 2240 MRI studies in the dataset. We used the weight-carrier transfer learning method for segmentation and evaluated accuracy based on several parameters, such as the Jaccard coefficient or intersection over union (IOU) score, dice score, and pixel accuracy. The code was run on Google Colaboratory Pro[19] with Python 3.7, which offers 25 GB of RAM and a Tesla P100 GPU with 16 GB of graphics memory. Numerous deep-learning network designs are available for medical image segmentation. We employed 3D U-Net[20] in our study because, compared to other segmentation architectures such as VGG16, VGG19, and Inception, 3D U-Net shows superior performance in qualitative metrics such as the Jaccard coefficient, dice coefficient, and pixel accuracy, and it is well suited to complex medical MRI image segmentation. A total of 100 MRI studies were reserved for testing. Of the remainder, 75% and 25% were used for training and validation, respectively, amounting to 1605 studies for training and 535 studies for validation.
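The article does not include code; the following Python sketch merely illustrates one way such data loading and a 75%/25% split could be set up. The directory layout and file names (t1.nii.gz, seg.nii.gz, etc.) are assumptions for illustration and do not match the official BraTS naming exactly.

```python
import glob
import nibabel as nib  # common library for reading NIfTI files
from sklearn.model_selection import train_test_split

# Hypothetical layout: one folder per study containing the four modalities
# and the segmentation mask, all as NIfTI files.
study_dirs = sorted(glob.glob('BraTS2021/*'))

def load_study(study_dir):
    """Load the four MRI modalities and the segmentation mask of one study."""
    modalities = ['t1', 't1ce', 't2', 'flair']
    images = [nib.load(f'{study_dir}/{m}.nii.gz').get_fdata() for m in modalities]
    mask = nib.load(f'{study_dir}/seg.nii.gz').get_fdata()
    return images, mask

# 75%/25% train/validation split, as described in the text.
train_dirs, val_dirs = train_test_split(study_dirs, test_size=0.25, random_state=42)
```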
Data processing
The dataset images were 240-by-240-by-155 pixels in size, which was too large for the system to handle, so we processed them to remove slices containing mostly background pixels. After processing, the images used as input and output had final dimensions of 128-by-128-by-128, as shown in [Figure 1]. First, we performed the segmentation task using all four modalities (T1-weighted, T2-weighted, CE-T1-weighted, and FLAIR). The four modalities were stacked together along the last (channel) axis, which makes the data a four-channel input. The input layer of the 3D U-Net was therefore changed to accept the appropriate number of channels, i.e., four (128-by-128-by-128-by-4). The architecture was designed to accept input in channels-last format (i.e., IMG_HEIGHT, IMG_WIDTH, IMG_DEPTH, IMG_CHANNELS). In the output layer of the CNN, the activation function was changed from rectified linear unit (ReLU) to softmax, because ReLU outputs the input directly if it is positive and zero otherwise; it is therefore suited to single-label problems, where the output is either one or zero. For multi-class labels, softmax is used in the output layer of the neural network. Softmax[21] is a mathematical function that converts a vector of numbers into a vector of probabilities, where the probability of each value is proportional to the relative scale of that value in the vector.
Figure 1: Left (image before processing) and right (image after processing); size decreased to 128-by-128-by-128
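As an illustration of this preprocessing, the sketch below crops the 240 × 240 × 155 volumes to 128 × 128 × 128 and stacks the four modalities channels-last. The crop offsets are illustrative assumptions, since the exact slice ranges are not given in the text.

```python
import numpy as np

def preprocess(images, mask):
    """Crop background-heavy borders and stack modalities channels-last.

    `images` is a list of four 240 x 240 x 155 arrays (T1, T1ce, T2, FLAIR)
    and `mask` the matching segmentation. The crop offsets below are
    illustrative assumptions chosen to yield 128 x 128 x 128 volumes.
    """
    volume = np.stack(images, axis=-1)          # shape (240, 240, 155, 4)
    volume = volume[56:184, 56:184, 13:141, :]  # shape (128, 128, 128, 4)
    mask = mask[56:184, 56:184, 13:141]         # shape (128, 128, 128)
    return volume, mask
```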
Modified transfer learning method
A bridge connects the encoder and decoder subnetworks of the original U-Net.[22] These networks have several stages (depths), each with multiple layers. Each encoder step consists of two convolutional layers with ReLU[23] activation, followed by a 2-by-2 max-pooling layer; the decoder path mirrors this in a transposed fashion. The Adaptive Moment Estimation (ADAM)[24] optimizer was used for training. We employed a loss function that combined dice loss[25] and focal loss, together with a learning rate of 0.00003, to obtain good results after a few training cycles. To avoid memory difficulties, a batch size of 5 images was used, and the maximum number of epochs was 100.
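A minimal TensorFlow/Keras sketch of such a network and training configuration is shown below. It assumes one encoder and one decoder stage for brevity (the actual network has more stages), and a placeholder loss is used here because the combined dice and focal loss is sketched separately under Loss functions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_unet3d(input_shape=(128, 128, 128, 4), n_classes=4, filters=16):
    """Minimal 3D U-Net sketch: one encoder step, a bridge, one decoder step."""
    inputs = layers.Input(input_shape)

    # Encoder step: two 3D convolutions with ReLU, then max pooling.
    c1 = layers.Conv3D(filters, 3, activation='relu', padding='same')(inputs)
    c1 = layers.Conv3D(filters, 3, activation='relu', padding='same')(c1)
    p1 = layers.MaxPooling3D(pool_size=2)(c1)

    # Bridge between the encoder and decoder subnetworks.
    b = layers.Conv3D(filters * 2, 3, activation='relu', padding='same')(p1)
    b = layers.Conv3D(filters * 2, 3, activation='relu', padding='same')(b)

    # Decoder step: transposed convolution, skip connection, two convolutions.
    u1 = layers.Conv3DTranspose(filters, 2, strides=2, padding='same')(b)
    u1 = layers.concatenate([u1, c1])
    c2 = layers.Conv3D(filters, 3, activation='relu', padding='same')(u1)
    c2 = layers.Conv3D(filters, 3, activation='relu', padding='same')(c2)

    # Softmax output over the segmentation classes (channels-last format).
    outputs = layers.Conv3D(n_classes, 1, activation='softmax')(c2)
    return Model(inputs, outputs)

model = build_unet3d()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5),
              # Placeholder loss; the study combines dice and categorical
              # focal loss (see the Loss functions section).
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```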
First, we created a 3D U-Net model from scratch, complete with encoder and decoder parts. The model was then trained on the BraTS 2018 dataset, which had been preprocessed to make it smaller and fit into system memory. The model architecture and weights can be saved in a single file using the TensorFlow library,[26] which was used in this work; this file can then be used to create a new model with the same architecture and the already-trained weights. As shown in [Figure 2], the trained model, including its architecture and trained weights w1ij (where i indexes the neurons in the preceding layer and j indexes the neurons in the next layer of the CNN), was preserved. These trained weights, w1ij, and the architecture were then used to create a new model, which was trained on the BraTS 2019 dataset, and the model's weights, w2ij, were again saved after training. The process was repeated for the next two datasets, BraTS 2020 and BraTS 2021. After training on all four datasets, the final model was saved and used to predict the test dataset. This strategy enables the model to learn more effectively on varied datasets while retaining prior knowledge of the task, resulting in improved accuracy and other segmentation performance indicators.
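A sketch of this weight-carrier loop is shown below. It reuses the build_unet3d function from the earlier sketch and assumes a hypothetical load_brats_arrays helper that returns the preprocessed image and mask arrays for one BraTS release.

```python
import tensorflow as tf

datasets = ['BraTS2018', 'BraTS2019', 'BraTS2020', 'BraTS2021']
model, prev_name = None, None

for name in datasets:
    # Hypothetical loader returning preprocessed arrays for one release.
    X_train, y_train, X_val, y_val = load_brats_arrays(name)

    if model is None:
        model = build_unet3d()  # fresh model for the first dataset
    else:
        # Recreate the model from the architecture and weights saved after
        # the previous dataset (compile=False so a loss can be re-attached).
        model = tf.keras.models.load_model(f'{prev_name}_model.h5', compile=False)

    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5),
                  loss='categorical_crossentropy',  # stand-in for dice + focal loss
                  metrics=['accuracy'])
    model.fit(X_train, y_train, validation_data=(X_val, y_val),
              batch_size=5, epochs=100)

    model.save(f'{name}_model.h5')  # weights carried to the next round
    prev_name = name
```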
Results and Discussion
Metrics for evaluation
In our research, we used a variety of segmentation performance metrics,[27] as stated in equations (1–4). The IOU score, also known as the Jaccard coefficient, is the most widely used statistic for image segmentation model performance, illustrated in [Figure 3]. It is useful when statistically precise measurements with a penalty for false negatives (FNs) are needed. The percentage of correctly detected pixels for each class (i.e., NETC and ETC) is called accuracy. The weighted dice similarity coefficient (DSC) is the dice of each class weighted by the number of pixels in that class; when images have disproportionately sized classes, the weighted DSC is employed to lessen the influence of errors in the small classes on the overall quality score. The DSC or F1 score is defined as twice the number of elements common to both sets (i.e., the ground truth and the predicted mask) divided by the sum of the number of elements in each set, illustrated in [Figure 4]. The fraction of pixels in an image that are correctly categorized is known as pixel accuracy.
Figure 3: Illustration of IOU score: area of overlap/area of union. IOU: Intersection over union
Figure 4: Illustration of dice similarity coefficient: 2 × overlap/total number of pixels
The IOU score for two sets A and B is calculated as follows:
IOU(A, B) = |A ∩ B|/|A ∪ B| (1a)
where |A| represents the cardinality of set A. In terms of true positives (TP), false positives (FP), and false negatives (FN), the IOU score can also be expressed as:
IOU = TP/(TP + FN + FP) (1b)
The DSC or F1 score for two given sets A and B is expressed as:
DSC(A, B) = 2|A ∩ B|/(|A| + |B|) (2)
where |A| and |B| are the cardinalities of the two sets (i.e., the number of elements in each set).
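As an illustrative sketch (not the authors' code), these two metrics can be computed for binary masks with NumPy as follows.

```python
import numpy as np

def iou_score(y_true, y_pred):
    """Equations (1a)/(1b): |A ∩ B| / |A ∪ B| for binary masks."""
    y_true, y_pred = y_true.astype(bool), y_pred.astype(bool)
    intersection = np.logical_and(y_true, y_pred).sum()
    union = np.logical_or(y_true, y_pred).sum()
    return intersection / union if union else 1.0

def dice_coefficient(y_true, y_pred):
    """Equation (2): 2|A ∩ B| / (|A| + |B|) for binary masks."""
    y_true, y_pred = y_true.astype(bool), y_pred.astype(bool)
    intersection = np.logical_and(y_true, y_pred).sum()
    total = y_true.sum() + y_pred.sum()
    return 2.0 * intersection / total if total else 1.0
```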
Loss functions
Specialized loss functions are typically used where there is a data imbalance problem[28] (the background is also labeled and occupies a much larger portion of the image than the ROI). Because the region of interest in medical images is very small compared to the background, there is a high probability that the learning process will become stuck in a local minimum of the loss and produce a model whose predictions are heavily biased toward the background; in this scenario, the area of interest, or tumor, will either be partially segmented or missed. Dice loss[25] was therefore used to keep this issue in check. However, as noted by Zhang et al.,[29] dice loss has a number of drawbacks: (a) it is insensitive to the distance between nonoverlapping regions, (b) it is insensitive to the precise contour of segmented regions, because the contours have only a minor impact on the overall volume, and (c) it has a large dynamic sensitivity to small lesions compared to large lesions. Therefore, we used two different loss functions,[29],[30] the first being dice loss and the second a total loss,[29] which is a combination of dice loss and categorical focal loss. [Figure 5] shows segmentation results on different test images with their corresponding ground truth.
Figure 5: Segmentation results on different test images, showing (from top to bottom) the input image, the original segmented tumor or ground truth, and the predicted or segmented tumor
The dice loss was calculated as follows:
Dice loss = 1 − DSC = 1 − 2|A ∩ B|/(|A| + |B|) (3)
and the total loss was
Total loss = Dice loss + 2 × categorical focal loss (CFL) (4)
where CFL is expressed as:
CFL(A, B) = −A · α · (1 − B)^γ · log(B) (5)
where A is the ground truth, B is the predicted probability of the ground truth class, α is a float or integer (default value = 0.25), and γ is a float or integer (default value = 2.0).
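A minimal TensorFlow sketch of equations (3)–(5) is given below; the small smoothing constant is an assumption added for numerical stability and is not stated in the text.

```python
import tensorflow as tf

def dice_loss(y_true, y_pred, smooth=1e-6):
    """Equation (3): 1 - DSC, computed over all voxels and classes."""
    y_true_f = tf.reshape(y_true, [-1])
    y_pred_f = tf.reshape(y_pred, [-1])
    intersection = tf.reduce_sum(y_true_f * y_pred_f)
    dsc = (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true_f) + tf.reduce_sum(y_pred_f) + smooth)
    return 1.0 - dsc

def categorical_focal_loss(y_true, y_pred, alpha=0.25, gamma=2.0):
    """Equation (5): CFL = -A * alpha * (1 - B)^gamma * log(B)."""
    y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0 - 1e-7)
    cfl = -y_true * alpha * tf.pow(1.0 - y_pred, gamma) * tf.math.log(y_pred)
    return tf.reduce_mean(tf.reduce_sum(cfl, axis=-1))

def total_loss(y_true, y_pred):
    """Equation (4): dice loss plus twice the categorical focal loss."""
    return dice_loss(y_true, y_pred) + 2.0 * categorical_focal_loss(y_true, y_pred)
```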
Prediction of different test images
[Table 1] shows the results for some of the test images using the model outlined in the Modified transfer learning method section. As can be seen, the model's accuracy was >99%, which is far higher than existing state-of-the-art frameworks. The Jaccard coefficient or mean IOU score was also above 0.855, and the mean pixel accuracy was greater than or equivalent to that of the existing frameworks.
Comparison after training on every dataset
We also examined the performance of the different segmentation parameters after training on each year's dataset, as given in [Table 2]. The model's accuracy increased from 95.9% to 99.3%. The mean IOU score, dice coefficient, NETC-weighted dice, ETC-weighted dice, and mean pixel accuracy improved by 17%, 2.4%, 11.2%, 14.8%, and 3.2%, respectively.
Table 2: Comparison of segmentation parameters after every training step
Comparison of different loss functions
ADAM has been reported to be an effective optimizer for tumor segmentation tasks.[27] Using ADAM as the optimizer, we compared the outcomes of the dice parameters for the two alternative loss functions described in the Loss functions section, as presented in [Table 3]. In this investigation, the dice loss function outperformed the total loss function.
Table 3: Comparison of segmentation parameters with different loss functions
Comparison with existing work
We also compared the outcomes of our method to those of other methods. [Table 4] shows that the proposed method outperforms the others on every parameter. This could be due to two factors: first, the amount of data used to train the model, and second, the fact that the same model was trained on multiple datasets.
Conclusion
Due to the complexity of medical images, segmenting them is a difficult task. The proposed method effectively segments the tumor while requiring far less time than human experts. When compared to other state-of-the-art frameworks, this method achieved an accuracy of 99.0% or higher. The amount of time it takes to train the model and the amount of memory space it requires could both be reduced further. Another extension of this work would be to train the same model on different organs and tumor types (e.g., kidney, liver, and lung). We also intend to convert the segmented tumor into a 3D DICOM structure format so that radiation therapy contouring can be done faster.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
References
1. | de Azevedo-Marques PM, Ferreira JR Jr. Medical image analyst: A radiology career focused on comprehensive quantitative imaging analytics to improve healthcare. Acad Radiol 2022;29:170. |
2. | Sharma N, Aggarwal LM. Automated medical image segmentation techniques. J Med Phys 2010;35:3-14. |
3. | Ranjbarzadeh R, Bagherian Kasgari A, Jafarzadeh Ghoushchi S, Anari S, Naseri M, Bendechache M. Brain tumor segmentation based on deep learning and an attention mechanism using MRI multi-modalities brain images. Sci Rep 2021;11:10930. |
4. | Islam KT, Wijewickrema S, O'Leary S. A deep learning framework for segmenting brain tumors using MRI and synthetically generated CT images. Sensors (Basel) 2022;22:523. |
5. | Lee D, Lee J, Ko J, Yoon J, Ryu K, Nam Y. Deep learning in MR image processing. Investig Magn Reson Imaging 2019;23:81. |
6. | Menze BH, Jakab A, Bauer S, Kalpathy-Cramer J, Farahani K, Kirby J, et al. The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Trans Med Imaging 2015;34:1993-2024. |
7. | Zhou C, Ding C, Lu Z, Wang X, Tao D. One-pass multi-task convolutional neural networks for efficient brain tumor segmentation. In: Frangi AF, Schnabel JA, Davatzikos C, Alberola-López C, Fichtinger G, editors. Medical Image Computing and Computer Assisted Intervention – MICCAI 2018, (Lecture Notes in Computer Science). Vol. 11072. Cham: Springer International Publishing; 2018. p. 637-45. Available from: http://link.springer.com/10.1007/978-3-030-00931-1_73. [Last accessed on 2022 Jun 14]. |
8. | Zeineldin RA, Karar ME, Mathis-Ullrich F, Burgert O. Ensemble CNN Networks for GBM Tumors Segmentation Using Multi-parametric MRI; 2021. Available from: https://arxiv.org/abs/2112.06554. [Last accessed on 2022 Jun 14]. |
9. | Roy Choudhury A, Vanguri R, Jambawalikar SR, Kumar P. Segmentation of brain tumors using DeepLabv3+. In: Crimi A, Bakas S, Kuijf H, Keyvan F, Reyes M, van Walsum T, editors. Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, (Lecture Notes in Computer Science). Vol. 11384. Cham: Springer International Publishing; 2019. p. 154-67. Available from: http://link.springer.com/10.1007/978-3-030-11726-9_14. [Last accessed on 2022 Jun 14]. |
10. | Chollet F. Xception: Deep Learning with Depthwise Separable Convolutions. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, HI: IEEE; 2017. p. 1800-7. Available from: http://ieeexplore.ieee.org/document/8099678/. [Last accessed on 2022 Jun 14]. |
11. | Wang G, Li W, Ourselin S, Vercauteren T. Automatic brain tumor segmentation based on cascaded convolutional neural networks with uncertainty estimation. Front Comput Neurosci 2019;13:56. |
12. | Pan SJ, Yang Q. A survey on transfer learning. IEEE Trans Knowl Data Eng 2010;22:1345-59. |
13. | Polly FP, Shil SK, Hossain MA, Ayman A, Jang YM. Detection and classification of HGG and LGG brain tumor using machine learning. In: 2018 International Conference on Information Networking (ICOIN). Chiang Mai, Thailand: IEEE; 2018. p. 813-7. Available from: http://ieeexplore.ieee.org/document/8343231/. [Last accessed on 2022 Jun 14]. |
14. | Khazaei Z, Goodarzi E, Borhaninejad V, Iranmanesh F, Mirshekarpour H, Mirzaei B, et al. The association between incidence and mortality of brain cancer and human development index (HDI): An ecological study. BMC Public Health 2020;20:1696. |
15. | Bakas S, Sheller M, Pati S, Edwards B, Anthony Reina G, Baid U, et al. Federated Tumor Segmentation; 02 March, 2021. Available from: https://zenodo.org/record/4573128. [Last accessed on 2022 Jun 14]. |
16. | Bakas S, Akbari H, Sotiras A, Bilello M, Rozycki M, Kirby JS, et al. Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features. Sci Data 2017;4:170117. |
17. | Bakas S, Reyes M, Jakab A, Bauer S, Rempfler M, Crimi A, et al. Identifying the best machine learning algorithms for brain tumor segmentation, progression assessment, and overall survival prediction in the BRATS challenge. arXiv; 2019;1811.02629. Available from: http://arxiv.org/abs/1811.02629. [Last accessed on 2022 Oct 19]. |
18. | Baid U, Ghodasara S, Mohan S, Bilello M, Calabrese E, Colak E, et al. The RSNA-ASNR-MICCAI BraTS 2021 benchmark on brain tumor segmentation and radiogenomic classification. arXiv 2021; 2107.02314. Available from: http://arxiv.org/abs/2107.02314. [Last accessed on 2022 Jun 14]. |
19. | Google Colaboratory. Available from: https://colab.research.google.com/. |
20. | Çiçek Ö, Abdulkadir A, Lienkamp SS, Brox T, Ronneberger O. 3D U-Net: Learning dense volumetric segmentation from sparse annotation. arXiv 2016;1606.06650. Available from: http://arxiv.org/abs/1606.06650. [Last accessed on 2022 Jun 14]. |
21. | Nwankpa C, Ijomah W, Gachagan A, Marshall S. Activation Functions: Comparison of Trends in Practice and Research for Deep Learning; 2018. Available from: https://arxiv.org/abs/1811.03378. [Last accessed on 2022 Jun 14]. |
22. | Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation. In: Navab N, Hornegger J, Wells WM, Frangi AF, editors. Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. Cham: Springer International Publishing; 2015. p. 234-41. Available from: http://link.springer.com/10.1007/978-3-319-24574-4_28. [Last accessed on 2022 Jun 14]. |
23. | Nair V, Hinton GE. Rectified linear units improve restricted Boltzmann machines. In: Proceedings of the 27th International Conference on International Conference on Machine Learning. Madison, WI, USA: Omnipress; 2010. p. 807-14. (ICML'10). Available from: https://www.cs.toronto.edu/~fritz/absps/reluICML.pdf. [Last accessed on 2022 Jun 14]. |
24. | Kingma DP, Ba J. Adam: A method for stochastic optimization. arXiv 2014;1412.6980. Available from: http://arxiv.org/abs/1412.6980. |
25. | Sudre CH, Li W, Vercauteren T, Ourselin S, Jorge Cardoso M. Generalised dice overlap as a deep learning loss function for highly unbalanced segmentations. Deep Learn Med Image Anal Multimodal Learn Clin Decis Support (2017) 2017;2017:240-8. |
26. | Abadi M, Barham P, Chen J, Chen Z, Davis A, Dean J, et al. TensorFlow: A system for large-scale machine learning. arXiv; 2016; 1605.08695. Available from: http://arxiv.org/abs/1605.08695. [Last accessed on 2022 Oct 19]. |
27. | Csurka G, Larlus D, Perronnin F. What is a good evaluation measure for semantic segmentation? In: Proceedings of the British Machine Vision Conference 2013. Bristol: British Machine Vision Association; 2013. p. 32.1-11. Available from: http://www.bmva.org/bmvc/2013/Papers/paper0032/index.html. [Last accessed on 2022 Jun 14]. |
28. | Li X, Sun X, Meng Y, Liang J, Wu F, Li J. Dice loss for data-imbalanced NLP tasks. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics; 2020. p. 465-76. Available from: https://aclanthology.org/2020.acl-main.45. [Last accessed on 2022 Jul 28]. |
29. | Zhang Y, Liu S, Li C, Wang J. Rethinking the dice loss for deep learning lesion segmentation in medical images. J Shanghai Jiaotong Univ (Sci) 2021;26:93-102. |
30. | Furtado P. Testing segmentation popular loss and variations in three multiclass medical imaging problems. J Imaging 2021;7:16. |
31. | Arif M, Ajesh F, Shamsudheen S, Geman O, Izdrui D, Vicoveanu D. Brain tumor detection and classification by MRI using biologically inspired orthogonal wavelet transform and deep learning techniques. J Healthc Eng 2022;2022:2693621. |
32. | Guo Z, Li X, Huang H, Guo N, Li Q. Deep learning-based image segmentation on multimodal medical imaging. IEEE Trans Radiat Plasma Med Sci 2019;3:162-9. |
33. | Sun J, Peng Y, Guo Y, Li D. Segmentation of the multimodal brain tumor image used the multi-pathway architecture method based on 3D FCN. Neurocomputing 2021;423:34-45. |
34. | Amin J, Sharif M, Yasmin M, Saba T, Anjum MA, Fernandes SL. A new approach for brain tumor segmentation and classification based on score level fusion using transfer learning. J Med Syst 2019;43:326. |
35. | Zhao X, Wu Y, Song G, Li Z, Zhang Y, Fan Y. A deep learning model integrating FCNNs and CRFs for brain tumor segmentation. Med Image Anal 2018;43:98-111. |
36. | |