Journal of Medical Physics


 
ORIGINAL ARTICLE
Year : 2022  |  Volume : 47  |  Issue : 1  |  Page : 40-49
 

Segmentation of organs and tumor within brain magnetic resonance images using K-nearest neighbor classification


S A Yoganathan1, Rui Zhang2

1 Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana, USA
2 Department of Physics and Astronomy, Louisiana State University; Department of Radiation Oncology, Mary Bird Perkins Cancer Center, Baton Rouge, Louisiana, USA

Date of Submission: 17-Jun-2021
Date of Decision: 24-Oct-2021
Date of Acceptance: 11-Dec-2021
Date of Web Publication: 31-Mar-2022

Correspondence Address:
Dr. Rui Zhang
Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana
USA

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/jmp.jmp_87_21


 

Abstract

Purpose: To fully exploit the benefits of magnetic resonance imaging (MRI) for radiotherapy, it is desirable to develop segmentation methods that delineate patients' MRI images quickly and accurately. The purpose of this work was to develop a semi-automatic method to segment organs and tumor within the brain on standard T1- and T2-weighted MRI images.

Methods and Materials: Twelve brain cancer patients were retrospectively included in this study, and a simple rigid registration was used to align all the images to the same spatial coordinates. Regions of interest were created for organ and tumor segmentation. The K-nearest neighbor (KNN) classification algorithm was used to characterize the knowledge of previous segmentations using 15 image features (T1 and T2 image intensities, 4 Gabor-filtered images, 6 image gradients, and 3 Cartesian coordinates), and the trained models were used to predict organ and tumor contours. Dice similarity coefficient (DSC), normalized surface dice, sensitivity, specificity, and Hausdorff distance were used to evaluate segmentation performance.

Results: Our semi-automatic segmentations matched the ground truths closely. The mean DSC value ranged from 0.49 (optic chiasm) to 0.89 (right eye) for organ segmentations and was 0.87 for tumor segmentation. The overall performance of our method is comparable or superior to that of previous work, and the accuracy of our semi-automatic segmentation is generally better for large-volume objects.

Conclusion: The proposed KNN method can accurately segment organs and tumor using standard brain MRI images, provides fast and accurate image processing and planning tools, and paves the way for clinical implementation of MRI-guided radiotherapy and adaptive radiotherapy.


Keywords: Brain cancer, K-nearest neighbor, machine learning, magnetic resonance imaging, radiotherapy, segmentation


How to cite this article:
Yoganathan S A, Zhang R. Segmentation of organs and tumor within brain magnetic resonance images using K-nearest neighbor classification. J Med Phys 2022;47:40-9

How to cite this URL:
Yoganathan S A, Zhang R. Segmentation of organs and tumor within brain magnetic resonance images using K-nearest neighbor classification. J Med Phys [serial online] 2022 [cited 2022 May 18];47:40-9. Available from: https://www.jmp.org.in/text.asp?2022/47/1/40/341435



Introduction


Magnetic resonance imaging (MRI) offers superior soft-tissue contrast compared with computed tomography (CT) and is often an indispensable imaging modality for brain radiotherapy planning. Because MRI does not provide electron density information, CT images are used to define the attenuation characteristics while MRI images are used for soft-tissue contouring, and the CT and corresponding MRI images are usually registered to combine their complementary benefits. With the development of the integrated MRI-linear accelerator (MRI-linac),[1] MRI-guided radiotherapy is emerging as a highly promising technique because the MRI-linac offers high-quality, real-time anatomic and physiologic imaging, which allows treatment monitoring, tracking, online adaptive radiotherapy (ART), and tumor response assessment throughout the treatment course. Although MRI-based tracking and online ART are exciting, it is quite challenging to handle large numbers of daily images, especially the organ and tumor delineations, which are usually done manually by experienced dosimetrists or physicians. Manual segmentation is very time-consuming, is subject to inter-observer variability, and can be the bottleneck for online tracking and ART. To fully exploit the benefits of MRI guidance, it is desirable to develop segmentation methods that delineate daily MRI images quickly and accurately.

Most previous auto-segmentation work related to radiotherapy dealt with CT images only,[2],[3],[4],[5],[6],[7],[8],[9] while most MRI-based auto-segmentation studies[10],[11],[12],[13],[14],[15],[16],[17],[18] were not related to radiotherapy but instead focused on classifying brain tissues or structures, such as gray matter, white matter, cerebrospinal fluid, thalamus, and ventricles, for neuroimaging purposes, because brain MRI has been the standard tool for the diagnosis and treatment of mental illness. Auto-segmentation of organs at risk (OARs) on MRI for radiotherapy is understudied; possible reasons are that small and narrow organs are much more challenging to segment than large tissues and structures, and that CT has been the default choice for OAR segmentation in radiotherapy. However, a patient who receives MRI-based radiotherapy, and especially online ART, will only have daily MRI images, so fast and accurate segmentation of OARs on MRI is critical.

In this study, we developed a segmentation approach based on the K-nearest neighbor (KNN) machine learning algorithm. We chose the KNN algorithm because it has only two hyperparameters, the K value and the distance metric, which are very easy to tune;[19] it is also fast and simple, robust to noise and missing values in the data, and works well for multiclass problems such as multiple-tissue segmentation.[20] Multiple investigators have used KNN for MRI segmentation. For example, Anbeek et al. used KNN in a series of studies[21],[22],[23],[24] to segment white matter lesions, white matter, central gray matter, cortical gray matter, cerebrospinal fluid, ventricles, and multiple sclerosis lesions in cranial MRI, and showed that KNN-based segmentation is an automatic and accurate approach applicable to standard clinical MRI. Mazzara et al.[25] compared a semi-automated KNN method with a fully automatic knowledge-guided method for gross tumor volume segmentation on MRI images (T1, proton density weighted, and T2) of glioma patients; the semi-automated KNN method required manual selection of a region of interest (ROI) on each MRI slice for training, whereas the knowledge-guided method required no manual intervention, and they found that the KNN method performed better (average accuracy 56% ± 6%) than the knowledge-guided method (52% ± 7%). Steenwijk et al.[26] improved KNN classification of white matter lesions in MRI by optimizing intensity normalization and using spatial tissue type priors, which showed excellent performance. Most of these previous studies used the KNN algorithm for brain tissue classification unrelated to radiotherapy, while using KNN to segment OARs for radiotherapy remains underinvestigated.

The purpose of this work is to develop a KNN machine learning method to segment OARs and tumor within the brain using standard MRI sequences, i.e., T1 and T2 images. Gabor filter-derived features were used in our work to improve the performance of KNN model segmentation. Evaluation of our segmentation results and comparison with previous studies were also performed.


Materials and Methods


Image data

MRI data of 12 brain cancer patients were anonymized[27] and included in this study. The MRI data consisted of T1- and T2-weighted images acquired on a 1.5 Tesla Philips Intera scanner using three-dimensional (3D) gradient echo sequences with the following acquisition parameters: TE/TR = 3.414/7.33 ms, flip angle = 8°, voxel size 0.983 × 0.983 × 1.1 mm3, field of view 236 mm × 236 mm × 158.4 mm, and pixel bandwidth 241 Hz/pixel.

MRI images in Digital Imaging and Communications in Medicine format were converted to Neuroimaging Informatics Technology Initiative format (.nii) using open-source image analysis software (3D SLICER, version 4.9, Slicer Community, USA).[28] MRI bias correction was applied using the N4ITK MRI bias correction module available within 3D SLICER with the following parameters: a BSpline order of 3, BSpline grid resolutions of (1, 1, 1), a shrink factor of 4, a maximum of 100 iterations at each of the 3 resolution levels, and a convergence threshold of 0.0001. The MRI images were resampled to a 1 mm × 1 mm × 1 mm voxel size, and the T1 and T2 MRI images were rigidly aligned. MRI intensity variation across patients was standardized by a normalization process, which consisted of matching the intensity histograms of all patient images to the histogram of a randomly selected T1/T2 image.
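This preprocessing can be sketched in Python with SimpleITK (the authors used 3D SLICER; the Otsu foreground mask, the histogram-matching settings, and the file paths below are our assumptions, while the N4 parameters and the 1-mm resampling follow the text):

```python
import SimpleITK as sitk

t1 = sitk.ReadImage("t1.nii", sitk.sitkFloat32)  # hypothetical path

# --- N4 bias correction (iterations/threshold/shrink factor from the text) ---
shrunk = sitk.Shrink(t1, [4, 4, 4])              # shrink factor of 4 for speed
mask = sitk.OtsuThreshold(shrunk, 0, 1)          # assumed foreground mask
n4 = sitk.N4BiasFieldCorrectionImageFilter()
n4.SetMaximumNumberOfIterations([100, 100, 100]) # 100 iterations x 3 levels
n4.SetConvergenceThreshold(1e-4)
n4.Execute(shrunk, mask)                         # fit the bias field on the shrunk image
# Apply the estimated bias field at full resolution (SimpleITK >= 2.1)
t1 = t1 / sitk.Exp(n4.GetLogBiasFieldAsImage(t1))

# --- Resample to 1 x 1 x 1 mm voxels ---
def resample_isotropic(img, new_spacing=(1.0, 1.0, 1.0)):
    size = [int(round(sz * sp / ns)) for sz, sp, ns in
            zip(img.GetSize(), img.GetSpacing(), new_spacing)]
    return sitk.Resample(img, size, sitk.Transform(), sitk.sitkLinear,
                         img.GetOrigin(), new_spacing, img.GetDirection(),
                         0.0, img.GetPixelID())

t1 = resample_isotropic(t1)

# --- Intensity standardization: match histogram to a chosen reference image ---
ref = resample_isotropic(sitk.ReadImage("reference_t1.nii", sitk.sitkFloat32))
matcher = sitk.HistogramMatchingImageFilter()
matcher.SetNumberOfHistogramLevels(256)  # assumed values
matcher.SetNumberOfMatchPoints(15)
matcher.ThresholdAtMeanIntensityOn()
t1 = matcher.Execute(t1, ref)
```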

Feature extraction

In addition to the original T1 and T2 image intensity values, the following image features were derived to train the KNN models: local energy and mean amplitude images based on Gabor filters, and gradient images (Gx, Gy, Gz). Because including spatial information has been shown to improve learning algorithm accuracy,[24],[29] Cartesian coordinates originating from the center of the whole-brain T1/T2 image were also included. In summary, a total of 15 image features were used for training: 6 features for each of the T1 and T2 MRI images (the original intensity, the Gabor-based local energy and mean amplitude images, and 3 gradients) plus the 3 Cartesian coordinates.

There are certain advantages to using the Gabor filter in MRI segmentation: for example, the Gaussian kernel in the Gabor filter smooths the noise in MRI, and the filter also enables accurate extraction of edge features.[30] A Gabor filter extracts multiple narrow frequency and orientation signals from textured images. A two-dimensional (2D) Gabor filter in the spatial domain is defined as a Gaussian kernel function modulated by a sinusoidal wave and can be written mathematically as follows:[31]

$$g(x, y; f, \theta, \phi, \sigma, \gamma) = \exp\!\left(-\frac{x'^2 + \gamma^2 y'^2}{2\sigma^2}\right)\exp\!\big(j\,(2\pi f x' + \phi)\big) \tag{1}$$

$$x' = x\cos\theta + y\sin\theta \tag{2}$$

$$y' = -x\sin\theta + y\cos\theta \tag{3}$$

where f is the sinusoidal wave frequency, γ is the spatial aspect ratio which specifies the ellipticity of the support of the Gabor function, σ is the standard deviation of the Gaussian envelope, ϕ is the phase offset, and θ represents the direction of the normal to the parallel stripes of the Gabor function.

Using the method in the literature,[31],[32] we calculated a total of 40 2D Gabor filters [Figure 1] with 5 scales and 8 orientations for a 3 × 3 pixel window. The Gabor filter is a frequency- and orientation-selective filter with a Gaussian envelope: the scale channels capture specific bands of frequency components and scale the magnitude of the Gaussian envelope, while the orientation channels extract directional features from the MRI images.
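A minimal NumPy sketch of such a filter bank follows; only the 3 × 3 window and the 5 × 8 = 40 scale/orientation grid come from the text, while the frequency spacing and the values of σ, γ, and ϕ are our assumptions:

```python
import numpy as np

def gabor_kernel(f, theta, sigma=1.0, gamma=0.5, phi=0.0, half=1):
    """Real part of the 2D Gabor filter of Eq. (1) on a (2*half+1)^2 window (3x3 here)."""
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates, Eqs. (2)-(3)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * f * xr + phi)

# 40 filters: 5 scales x 8 orientations (the octave frequency spacing is an assumption)
bank = [gabor_kernel(f=0.4 / 2**s, theta=k * np.pi / 8)
        for s in range(5) for k in range(8)]
```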
Figure 1: Gabor filters used in this study. A total of 40 two-dimensional filters were calculated with 5 scales and 8 orientations for a 3 × 3 pixel window. The colors are used to show the differences in scale and orientation



The original T1 and T2 MRI images were convolved with each Gabor filter (real part), which resulted in 40 different representations for each MRI image. These 40 response images were then converted to feature images, the local energy (ψ) and the mean amplitude (A):[33]

$$\psi = \sum_{i=1}^{40}\left(I \otimes G_i\right)^2 \tag{4}$$

$$A = \frac{1}{40}\sum_{i=1}^{40}\left|I \otimes G_i\right| \tag{5}$$

where I is the MRI image, G_i is the i-th Gabor filter, and the symbol ⊗ represents the convolution operation.

Directional gradient images of the T1 and T2 images along the x-axis (right-left), y-axis (anterior-posterior), and z-axis (inferior-superior) were also created using the Sobel gradient operator.[34] Furthermore, the three Cartesian coordinates x, y, and z were defined with their origin at the center of the whole-brain T1/T2 image. Because the T1 and T2 images were registered, they shared the same spatial coordinates.
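The feature images can be sketched as follows with SciPy; the slice-wise application of the 2D Gabor kernels and the boundary handling are our assumptions:

```python
import numpy as np
from scipy import ndimage

def gabor_features(slice2d, bank):
    """Local energy (psi) and mean amplitude (A) over the 40 Gabor responses of one slice."""
    responses = np.stack([ndimage.convolve(slice2d, k, mode="nearest") for k in bank])
    local_energy = np.sum(responses**2, axis=0)           # Eq. (4)
    mean_amplitude = np.mean(np.abs(responses), axis=0)   # Eq. (5)
    return local_energy, mean_amplitude

def gradient_features(volume3d):
    """Directional Sobel gradients Gx, Gy, Gz of a 3D volume."""
    return [ndimage.sobel(volume3d, axis=a) for a in (2, 1, 0)]  # x, y, z

def coordinate_features(shape):
    """Cartesian coordinate volumes with the origin at the image center."""
    grids = np.indices(shape).astype(float)
    for a, n in enumerate(shape):
        grids[a] -= (n - 1) / 2.0
    return grids  # z, y, x coordinate volumes
```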

Organ and tumor segmentations

The reference class labels for the right eye, right lens, right optic nerve, left eye, left lens, left optic nerve, brain stem, optic chiasm, and tumor had previously been contoured manually on all training patients by dosimetrists in our clinic. To improve the efficiency and accuracy of the KNN models, a 3D ROI was used to extract, for training, the image region containing only a particular organ or the tumor plus a certain margin. The generation of the ROI for each OAR was automated using an atlas-based approach: the MRI images of a reference patient were selected, and individual ROIs were manually created with a 2-cm margin around the organs. The training patients' and any test patient's MRI images were affinely registered with the reference patient, and the ROIs were transferred from the reference patient to the training patients and the test patient. The ROI for the tumor was selected manually because the tumor position differed for each patient; it was created by visual observation as a region enclosing the tumor with an approximately 2-cm external margin.

The aforementioned 15 image features were used as predictor variables in the KNN models. Eight separate KNN binary models were trained to contour each OAR inside its ROI, and one KNN binary model was trained to contour the tumor inside the manually selected ROI. Only features extracted within the ROI were used for training and prediction, which reduced the computational complexity and improved the KNN model performance. The KNN prediction classifies each voxel within the ROI (as in semantic segmentation), and the final predicted segmentation patch of an OAR is reshaped to match the full 3D MRI image. The KNN classifier was trained in MATLAB (MathWorks, Natick, MA, USA). Our initial evaluation of various K values and distance metrics showed that a K value of 50 and the Euclidean distance were the best-suited parameters for this segmentation study. The workflows for OAR and tumor segmentations are shown in [Figure 2].
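The classifier itself was trained in MATLAB; an equivalent scikit-learn sketch with the chosen hyperparameters (K = 50, Euclidean distance) is shown below, with random data standing in for the 15-feature voxel vectors:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
train_X = rng.normal(size=(5000, 15))         # 15 features per voxel inside the ROI
train_y = (train_X[:, 0] > 0).astype(int)     # stand-in binary organ labels
test_X = rng.normal(size=(24 * 24 * 24, 15))  # voxels of a hypothetical 24^3 test ROI

knn = KNeighborsClassifier(n_neighbors=50, metric="euclidean")  # K = 50, Euclidean
knn.fit(train_X, train_y)
mask = knn.predict(test_X).reshape(24, 24, 24)  # per-voxel labels reshaped to the ROI
```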
Figure 2: Workflows for (a) organs at risk and (b) tumor segmentations



The overall architecture of our segmentation method is as follows: to bring all the images into a common coordinate system, the T1 and T2 MRI images of all patients are registered with the corresponding images of a randomly selected reference patient, and the ROIs of individual OARs are transferred from the reference patient to the other patients. The image region within an ROI is input into the OAR-specific KNN model, which predicts the segmentation of that OAR. Finally, the predicted individual OARs are combined to form a 3D segmentation matrix. A similar process is followed for tumor segmentation of a test patient, except that the ROI is created manually instead of using the atlas-based approach.
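A SimpleITK sketch of this ROI transfer is given below; the text specifies only an affine registration, so the metric, optimizer, and their settings are our assumptions:

```python
import SimpleITK as sitk

def transfer_roi(patient_t1, reference_t1, roi_reference):
    """Affinely register the reference patient to a patient and warp the ROI mask over."""
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=2.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInitialTransform(sitk.CenteredTransformInitializer(
        patient_t1, reference_t1, sitk.AffineTransform(3),
        sitk.CenteredTransformInitializerFilter.GEOMETRY))
    reg.SetInterpolator(sitk.sitkLinear)
    # With fixed = patient and moving = reference, the resulting transform maps
    # patient coordinates to reference coordinates, which is exactly what
    # Resample needs to pull the reference ROI onto the patient grid.
    tx = reg.Execute(patient_t1, reference_t1)
    return sitk.Resample(roi_reference, patient_t1, tx,
                         sitk.sitkNearestNeighbor, 0)
```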

Evaluation

The performance of the trained models was evaluated using a leave-one-out cross-validation approach: N − 1 patients (training set) were used to train the KNN model, segmentations were predicted for the one remaining patient (validation), and this procedure was repeated for all possible combinations of training and validation data.
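Schematically, with hypothetical per-patient feature and label arrays:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def leave_one_out(features_by_patient, labels_by_patient):
    """Train on N-1 patients, predict the held-out one, for every choice of hold-out."""
    predictions = []
    n = len(features_by_patient)
    for i in range(n):
        train_X = np.vstack([features_by_patient[j] for j in range(n) if j != i])
        train_y = np.concatenate([labels_by_patient[j] for j in range(n) if j != i])
        knn = KNeighborsClassifier(n_neighbors=50, metric="euclidean")
        knn.fit(train_X, train_y)
        predictions.append(knn.predict(features_by_patient[i]))
    return predictions
```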

The dice similarity coefficient (DSC)[35] and normalized surface dice (NSD)[36] were used to evaluate the accuracy of the segmentation results:

$$\mathrm{DSC} = \frac{2\left|V_t \cap V_p\right|}{\left|V_t\right| + \left|V_p\right|} \tag{6}$$

where V_t and V_p are the true and predicted segmented volumes.

$$\mathrm{NSD} = \frac{\left|S_t \cap B_p^{(\tau)}\right| + \left|S_p \cap B_t^{(\tau)}\right|}{\left|S_t\right| + \left|S_p\right|} \tag{7}$$

where S_t and S_p are the surfaces of the true and predicted segmentations, and B_t^{(τ)} and B_p^{(τ)} are the border regions of the true and predicted segmentation surfaces at a tolerance τ. The tolerance τ was chosen as 1 mm for small-volume OARs such as the lenses, optic nerves, and optic chiasm, and 3 mm for the remaining larger volumes; for the tumor, a tolerance of 3 mm was used. DSC and NSD values range from 0 to 1, and a higher value indicates better segmentation performance.
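Both metrics can be computed from binary masks; the NSD below is a voxel-based approximation built on Euclidean distance transforms, a sketch rather than the exact surface formulation of Nikolov et al.:[36]

```python
import numpy as np
from scipy import ndimage

def dsc(vt, vp):
    """Dice similarity coefficient of two binary masks, Eq. (6)."""
    vt, vp = vt.astype(bool), vp.astype(bool)
    return 2.0 * np.logical_and(vt, vp).sum() / (vt.sum() + vp.sum())

def nsd(vt, vp, tau, spacing=(1.0, 1.0, 1.0)):
    """Normalized surface dice at tolerance tau (mm), voxel-based approximation of Eq. (7)."""
    st = vt.astype(bool) & ~ndimage.binary_erosion(vt.astype(bool))  # true surface voxels
    sp = vp.astype(bool) & ~ndimage.binary_erosion(vp.astype(bool))  # predicted surface voxels
    d_to_st = ndimage.distance_transform_edt(~st, sampling=spacing)  # mm to true surface
    d_to_sp = ndimage.distance_transform_edt(~sp, sampling=spacing)  # mm to predicted surface
    overlap = (d_to_sp[st] <= tau).sum() + (d_to_st[sp] <= tau).sum()
    return overlap / (st.sum() + sp.sum())
```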

Sensitivity and specificity[37] were also used for evaluation:

$$\mathrm{Sensitivity} = \frac{TP}{TP + FN} \tag{8}$$

$$\mathrm{Specificity} = \frac{TN}{TN + FP} \tag{9}$$

where TP represents the true positives (the intersection of the predicted and reference segments), FN represents the false negatives (parts of the reference segment not covered by the predicted segment), TN represents the true negatives (pixels correctly detected as background), and FP represents the false positives (parts of the predicted segment not covered by the reference segment).
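Expressed directly on binary masks, a minimal sketch:

```python
import numpy as np

def sensitivity_specificity(ref, pred):
    """Eqs. (8)-(9) from reference and predicted binary masks."""
    ref, pred = ref.astype(bool), pred.astype(bool)
    tp = (ref & pred).sum()     # overlap of reference and prediction
    fn = (ref & ~pred).sum()    # reference voxels missed by the prediction
    tn = (~ref & ~pred).sum()   # background correctly left unsegmented
    fp = (~ref & pred).sum()    # predicted voxels outside the reference
    return tp / (tp + fn), tn / (tn + fp)
```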

We also used the Hausdorff distance (HD) to measure the boundary similarity between the true and predicted segmentations. It quantifies the maximum distance from a point in X (predicted) to the nearest point in Y (true):

$$\mathrm{HD} = \max\{h(X, Y),\, h(Y, X)\}, \qquad h(X, Y) = \max_{x \in X}\,\min_{y \in Y}\,\lVert x - y \rVert \tag{10}$$
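A sketch using SciPy's directed Hausdorff distance, with voxel indices converted to millimeters via the voxel spacing (an assumption):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(pred_mask, true_mask, spacing=(1.0, 1.0, 1.0)):
    """Symmetric Hausdorff distance of Eq. (10) between two binary masks, in mm."""
    x = np.argwhere(pred_mask) * np.asarray(spacing)  # predicted foreground points (mm)
    y = np.argwhere(true_mask) * np.asarray(spacing)  # true foreground points (mm)
    return max(directed_hausdorff(x, y)[0], directed_hausdorff(y, x)[0])
```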




Results


[Figure 3] shows the comparison of organ segmentations with the ground truth for a typical patient. [Table 1] shows DSC, NSD, and sensitivity values, and [Table 2] shows HD values for the organ segmentations. Specificity values are always 1 and are not included in the tables. Our method generated slightly poorer results for small organs such as the eye lenses, optic chiasm, and optic nerves.
Figure 3: Comparison of segmentation of organs at risk (eyes, eye lens, optic nerves, optic chiasm, and brain stem) on different slices for patient number 8. Top row: Original magnetic resonance imaging. Bottom row: Segmented magnetic resonance imaging with solid lines representing the ground truth and dashed lines representing K-nearest neighbor predictions

Table 1: Dice similarity coefficient, normalized surface dice, and sensitivity values for organs at risk segmentations

Table 2: Hausdorff distance (mm) for organs at risk segmentations



[Figure 4] and [Figure 5] show the comparison of tumor segmentation in the axial, sagittal, and coronal planes for the best and worst cases, and [Table 3] shows DSC, NSD, sensitivity, and HD values for tumor segmentation. Specificity values are always 1 and are not included in the table.
Figure 4: Comparison of segmentation of tumor on different planes for patient number 9. Top row: Original magnetic resonance imaging. Bottom row: Segmented magnetic resonance imaging with solid red lines representing the ground truth and green dashed lines representing K-nearest neighbor predictions

Figure 5: Comparison of segmentation of tumor on different planes for patient number 2. Top row: Original magnetic resonance imaging. Bottom row: Segmented magnetic resonance imaging with solid red lines representing the ground truth and green dashed lines representing K-nearest neighbor predictions

Table 3: Dice similarity coefficient, normalized surface dice, sensitivity, and Hausdorff distance (mm) values for tumor segmentation



[Table 4] compares our study with previous automatic MRI segmentation studies using brain tumor patients.
Table 4: Comparison of dice similarity coefficient values in current work with previous studies for segmentation of organs at risks and tumor within the brain



Training the KNN models takes approximately 40 min for the organs and 15 min for the tumor on a laptop with a 2.5 GHz Intel i5 processor and 8 GB of random-access memory (Intel Corp., Santa Clara, CA, USA), and the training is required only once. Automatic segmentation of a new patient takes approximately 6 min (4 min for organs and 2 min for tumor).


Discussion


In this work, we present a machine learning method to segment OARs and tumor in standard T1 and T2 brain MRI images. A simple rigid registration was used to align all the library images to the same spatial coordinates. The KNN models were used to characterize the knowledge of previous segmentations, and the trained models were used to predict OARs and tumor contours.

Deeley et al.[10] compared the performance of automatic segmentation of OARs with human experts using T1 MRI images of 20 high-grade glioma patients and found that the differences were less than 5%. However, the segmentations of smaller tubular structures such as the chiasm and optic nerves showed higher variation and were challenging for both the experts and the automatic segmentation methods. Isambert et al.[38] also compared automatic with manual segmentation using T1 MRI images of 11 brain patients. They observed excellent segmentation accuracy for OAR volumes >7 cc; for example, DSC values for larger OARs such as the eyes, brain stem, and cerebellum were >0.8, while they were <0.41 for smaller structures such as the optic nerves, optic chiasm, and pituitary gland. Similarly, our results showed that the accuracy of segmentation was generally better for large-volume objects, and we achieved slightly better segmentation accuracy for small-volume objects than other studies: the mean DSC was 0.49 ± 0.17 for the optic chiasm, 0.75 ± 0.09 for the right optic nerve, and 0.73 ± 0.17 for the left optic nerve in our study, while it was lower than 0.41 for these organs in Isambert et al.[38] and around 0.4–0.5 in Deeley et al.[10]

For some OARs (especially the optic chiasm and brain stem) and tumors, the DSC and HD were relatively poorer than for the others [Table 1], [Table 2] and [Table 3], which is mainly attributed to the considerable variation in shape, size, and location of these OARs and tumors. Furthermore, their boundaries were generally unclear and irregular, with discontinuities, adding significant challenges to auto-segmentation. In addition, MRI scans acquired with clinical scanners show wide inter-patient variations due to the heterogeneity in MRI protocols and differences in tissue contrast, which depend on MRI field strength and vendor.[29]

Compared to previous studies, our research has multiple strengths. First, it develops a semi-automated segmentation method using the simple KNN learning algorithm; our method avoids manual contouring of new patients, reduces uncertainties, and facilitates online plan adaptation. Second, it does not require multiple or special MRI sequences and needs only standard T1- and T2-weighted MRI, which makes our method easy to implement and avoids the issues associated with specialized sequences, such as limited availability, increased scan time, patient movement, and cost. Third, unlike previous studies,[10],[38] it does not require deformable registration between training and new patients, which avoids the uncertainties and errors possibly associated with image registration.

Our study has some limitations. First, the ROI for tumor segmentation was created manually due to patient specificity. However, only a rough estimate of the tumor location is required, and creating the ROI takes only a few seconds, especially with prior knowledge of the patient's cancer condition. Second, all of the patients included in our study had a localized tumor, and all were treated with conventional fractionated radiotherapy. Multiple metastatic brain tumors are very common in stereotactic radiotherapy/radiosurgery treatments, and a separate study is required to explore the feasibility of segmenting multiple tumors automatically. Third, our current study is preliminary work that used handcrafted image features and the simple KNN algorithm. Even though KNN requires less training time than deep learning approaches, the prediction process is usually slower: unlike deep learning models, KNN does not learn weights during training, is a memory-based classifier, and requires the entire training dataset for prediction. Note that a laptop with a low-level configuration was used in this study; we expect that our method would be much faster with graphics processing unit (GPU) computing, considering that GPU-based high-performance computing has been increasingly used in radiotherapy.[41] Future studies should explore alternative algorithms, such as deep learning, for fully automatic segmentation in the brain. However, we think MRI segmentation based on a simple learning method like KNN is still quite valuable because it is simple, easy to implement, requires minimal computing resources, and is very easy to tune. In addition, KNN is a good algorithm to start with for understanding the issues posed by learning algorithms for automatic segmentation, and it can serve as a baseline for comparison with more sophisticated learning algorithms. Finally, the dataset used in our study is smaller than in most other studies in the literature, but our OAR and tumor segmentation results are comparable or superior to others [Table 4], because we carefully evaluated various features and selected the best ones for this work, and because we used ROIs to assist learning. In future work, data augmentation such as random rotation, translation, and scaling may also be applied to the training data, which would increase the amount of training data.[42]


Conclusions


In this paper, we presented a semi-automatic segmentation approach based on a KNN model to segment organs (brain stem, optic chiasm, optic nerves, eye lenses, and eye globes) and tumor on standard T1 and T2 brain MRI images. The overall performance of our method is superior to that of previous work. It provides fast and accurate image processing and planning tools and is one step forward for MRI-guided radiotherapy.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.

 
References

1. Lagendijk JJ, Raaymakers BW, Raaijmakers AJ, Overweg J, Brown KJ, Kerkhof EM, et al. MRI/linac integration. Radiother Oncol 2008;86:25-9.
2. Fortunati V, Verhaart RF, van der Lijn F, Niessen WJ, Veenland JF, Paulides MM, et al. Tissue segmentation of head and neck CT images for treatment planning: A multiatlas approach combined with intensity modeling. Med Phys 2013;40:071905.
3. Mattiucci GC, Boldrini L, Chiloiro G, D'Agostino GR, Chiesa S, De Rose F, et al. Automatic delineation for replanning in nasopharynx radiotherapy: What is the agreement among experts to be considered as benchmark? Acta Oncol 2013;52:1417-22.
4. Harrigan RL, Panda S, Asman AJ, Nelson KM, Chaganti S, DeLisi MP, et al. Robust optic nerve segmentation on clinically acquired computed tomography. J Med Imaging (Bellingham) 2014;1:034006.
5. Sharp G, Fritscher KD, Pekar V, Peroni M, Shusharina N, Veeraraghavan H, et al. Vision 20/20: Perspectives on automated image segmentation for radiotherapy. Med Phys 2014;41:050902.
6. Walker GV, Awan M, Tao R, Koay EJ, Boehling NS, Grant JD, et al. Prospective randomized double-blind study of atlas-based organ-at-risk autosegmentation-assisted radiation planning in head and neck cancer. Radiother Oncol 2014;112:321-5.
7. Ibragimov B, Xing L. Segmentation of organs-at-risks in head and neck CT images using convolutional neural networks. Med Phys 2017;44:547-57.
8. Meillan N, Bibault JE, Vautier J, Daveau-Bergerault C, Kreps S, Tournat H, et al. Automatic intracranial segmentation: Is the clinician still needed? Technol Cancer Res Treat 2018;17:1533034617748839.
9. Ren X, Xiang L, Nie D, Shao Y, Zhang H, Shen D, et al. Interleaved 3D-CNNs for joint segmentation of small-volume structures in head and neck CT images. Med Phys 2018;45:2063-75.
10. Deeley MA, Chen A, Datteri R, Noble JH, Cmelak AJ, Donnelly EF, et al. Comparison of manual and automatic segmentation methods for brain structures in the presence of space-occupying lesions: A multi-expert study. Phys Med Biol 2011;56:4557-77.
11. Egger J, Zukić D, Bauer MH, Kuhnt D, Carl B, Freisleben B, et al. A comparison of two human brain tumor segmentation methods for MRI data. arXiv 2011;arXiv:1102.2382.
12. Demirhan A, Toru M, Guler I. Segmentation of tumor and edema along with healthy tissues of brain using wavelets and neural networks. IEEE J Biomed Health Inform 2015;19:1451-8.
13. Deng M, Yu R, Wang L, Shi F, Yap PT, Shen D, et al. Learning-based 3T brain MRI segmentation with guidance from 7T MRI labeling. Med Phys 2016;43:6588-97.
14. González-Villà S, Oliver A, Valverde S, Wang L, Zwiggelaar R, Lladó X. A review on brain structures segmentation in magnetic resonance imaging. Artif Intell Med 2016;73:45-69.
15. Liu Y, Stojadinovic S, Hrycushko B, Wardak Z, Lu W, Yan Y, et al. Automatic metastatic brain tumor segmentation for stereotactic radiosurgery applications. Phys Med Biol 2016;61:8440-61.
16. Kong Y, Chen X, Wu J, Zhang P, Chen Y, Shu H. Automatic brain tissue segmentation based on graph filter. BMC Med Imaging 2018;18:9.
17. Mahbod A, Chowdhury M, Smeby O, Wang C. Automatic brain segmentation using artificial neural networks with shape context. Pattern Recognit Lett 2018;101:74-9.
18. Narayana PA, Coronado I, Sujit SJ, Wolinsky JS, Lublin FD, Gabr RE. Deep-learning-based neural tissue segmentation of MRI in multiple sclerosis: Effect of training set size. J Magn Reson Imaging 2020;51:1487-96.
19. Hastie T, Tibshirani R, Friedman J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. 2nd ed. New York, NY: Springer; 2016.
20. Wu XD, Kumar V, Quinlan JR, Ghosh J, Yang Q, Motoda H, et al. Top 10 algorithms in data mining. Knowl Inf Syst 2008;14:1-37.
21. Anbeek P, Vincken KL, van Osch MJ, Bisschops RH, van der Grond J. Probabilistic segmentation of white matter lesions in MR imaging. Neuroimage 2004;21:1037-44.
22. Anbeek P, Vincken KL, van Bochove GS, van Osch MJ, van der Grond J. Probabilistic segmentation of brain tissue in MR imaging. Neuroimage 2005;27:795-804.
23. Anbeek P, Vincken KL, Groenendaal F, Koeman A, van Osch MJ, van der Grond J. Probabilistic brain tissue segmentation in neonatal magnetic resonance imaging. Pediatr Res 2008;63:158-63.
24. Anbeek P, Vincken KL, Viergever MA. Automated MS-lesion segmentation by k-nearest neighbor classification. The MIDAS Journal – MS Lesion Segmentation (MICCAI 2008 Workshop); 2008.
25. Mazzara GP, Velthuizen RP, Pearlman JL, Greenberg HM, Wagner H. Brain tumor target volume determination for radiation treatment planning through automated MRI segmentation. Int J Radiat Oncol Biol Phys 2004;59:300-12.
26. Steenwijk MD, Pouwels PJ, Daams M, van Dalen JW, Caan MW, Richard E, et al. Accurate white matter lesion segmentation by k nearest neighbor classification with tissue type priors (kNN-TTPs). Neuroimage Clin 2013;3:462-9.
27. Newhauser W, Jones T, Swerdloff S, Newhauser W, Cilia M, Carver R, et al. Anonymization of DICOM electronic medical records for radiation therapy. Comput Biol Med 2014;53:134-40.
28. Fedorov A, Beichel R, Kalpathy-Cramer J, Finet J, Fillion-Robin JC, Pujol S, et al. 3D Slicer as an image computing platform for the quantitative imaging network. Magn Reson Imaging 2012;30:1323-41.
29. Johansson A, Garpebring A, Karlsson M, Asklund T, Nyholm T. Improved quality of computed tomography substitute derived from magnetic resonance (MR) data by incorporation of spatial information – Potential application for MR-only radiotherapy and attenuation correction in positron emission tomography. Acta Oncol 2013;52:1369-73.
30. Yang X, Wu N, Cheng G, Zhou Z, Yu DS, Beitler JJ, et al. Automated segmentation of the parotid gland based on atlas registration and machine learning: A longitudinal MRI study in head-and-neck radiation therapy. Int J Radiat Oncol Biol Phys 2014;90:1225-33.
31. Haghighat M, Zonouz S, Abdel-Mottaleb M. CloudID: Trustworthy cloud-based and cross-enterprise biometric identification. Expert Syst Appl 2015;42:7905-16.
32. Kyrki V, Kamarainen JK, Kalviainen H. Simple Gabor feature space for invariant object recognition. Pattern Recognit Lett 2004;25:311-8.
33. Kuse M, Wang YF, Kalasannavar V, Khan M, Rajpoot N. Local isotropic phase symmetry measure for detection of beta cells and lymphocytes. J Pathol Inform 2011;2:S2.
34. Sobel I. An isotropic 3×3 gradient operator. In: Freeman H, editor. Machine Vision for Three-Dimensional Scenes. New York, NY: Academic Press; 1990. p. 376-9.
35. Dice LR. Measures of the amount of ecologic association between species. Ecology 1945;26:297-302.
36. Nikolov S, Blackwell S, Zverovitch A, Mendes R, Livne M, Fauw J. Deep learning to achieve clinically applicable segmentation of head and neck anatomy for radiotherapy. arXiv 2018;arXiv:1809.04430.
37. García-Lorenzo D, Francis S, Narayanan S, Arnold DL, Collins DL. Review of automatic segmentation methods of multiple sclerosis white matter lesions on conventional magnetic resonance imaging. Med Image Anal 2013;17:1-18.
38. Isambert A, Dhermain F, Bidault F, Commowick O, Bondiau PY, Malandain G, et al. Evaluation of an atlas-based automatic segmentation software for the delineation of brain organs at risk in a radiation therapy clinical context. Radiother Oncol 2008;87:93-9.
39. Agn M, Munck Af Rosenschöld P, Puonti O, Lundemann MJ, Mancini L, Papadaki A, et al. A modality-adaptive method for segmenting brain tumors and organs-at-risk in radiation therapy planning. Med Image Anal 2019;54:220-37.
40. Havaei M, Jodoin PM, Larochelle H. Efficient interactive brain tumor segmentation as within-brain kNN classification. In: 22nd International Conference on Pattern Recognition; 2014. p. 556-61.
41. Jia X, Ziegenhein P, Jiang SB. GPU-based high-performance computing for radiation therapy. Phys Med Biol 2014;59:R151-82.
42. Seo H, Khuzani MB, Vasudevan V, Huang C, Ren HY, Xiao RX, et al. Machine learning techniques for biomedical image segmentation: An overview of technical aspects and introduction to state-of-art applications. Med Phys 2020;47:E148-67.

