

ORIGINAL ARTICLE 



Ahead of print publication

Multimodality medical image fusion using cross-bilateral filter and neuro-fuzzy approach
Harmeet Kaur^{1}, Satish Kumar^{2}, Kuljinder Singh Behgal^{3}, Yagiyadeep Sharma^{3}
^{1} Department of Computer Science and Applications (DCSA), Panjab University, Chandigarh, India ^{2} Department of Computer Science and Applications (DCSA), Panjab University Regional Centre, Hoshiarpur, Punjab, India ^{3} Department of Radiotherapy, Behgal Institute of IT and Radiation Technology, Mohali, Punjab, India
Date of Submission: 18-Jan-2021
Date of Decision: 12-Jul-2021
Date of Acceptance: 01-Oct-2021
Date of Web Publication: 02-Dec-2021
Correspondence Address: Harmeet Kaur, Department of Computer Science and Applications (DCSA), Panjab University, Chandigarh, India
Source of Support: None. Conflict of Interest: None.
DOI: 10.4103/jmp.JMP_14_21
Abstract   
Context: The proposed technique uses the edge-preserving capability of the cross-bilateral filter (CBF) and the adaptive neuro-fuzzy inference system (ANFIS), an artificial intelligence technique, to fuse multimodality medical images. Aims: The aim is to present complementary information in a single image, as each medical imaging modality captures a different domain of information. Settings and Design: First, the multimodality medical images are decomposed using the CBF by tuning its radiometric and geometric sigma parameters, producing a CBF component and a detail component. The detail component is fed to the ANFIS for fusion. In parallel, the subbands obtained from the discrete wavelet transform (DWT) are fused using the average rule, and a reconstruction step gives the final image. Subjects and Methods: ANFIS trains Sugeno systems using neuro-adaptive learning, and the fuzzy inference system in the ANFIS defines the fuzzy rules for fusion. The bior2.2 wavelet is used to decompose the source images. Statistical Analysis Used: The performance is verified on the Harvard database with five cases, and the results are compared using conventional metrics, objective metrics, and visual inspection. The metric values are visualized as column charts. Results: In Case 1, better results are obtained for all conventional metrics except average gradient (AG) and spatial frequency (SF), and the preferred objective metric values are also achieved. In Case 2, all metrics except AG, mutual information, fusion symmetry, and SF attain the best values among all methods. In Cases 3, 4, and 5, all the metrics achieve the desired values. Conclusions: Experiments show that conventional, objective, and visual evaluation gives the best results for Cases 1, 3, 4, and 5.
Keywords: Adaptive neuro-fuzzy inference system, biorthogonal wavelet, cross-bilateral filter, fusion, medical images
How to cite this URL: Kaur H, Kumar S, Behgal KS, Sharma Y. Multimodality medical image fusion using cross-bilateral filter and neuro-fuzzy approach. J Med Phys [Epub ahead of print] [cited 2022 Jan 25]. Available from: https://www.jmp.org.in/preprintarticle.asp?id=331674
Introduction   
Researchers are digging deeper into fusing multimodality medical images, as a large proportion of the world's population is diagnosed with one or another type of cancer, and remarkable work is needed to save precious lives. The main motive of image fusion is to integrate multimodality images into a single fused image that represents the best information from each modality. Positron emission tomography (PET), computerized tomography (CT), magnetic resonance imaging with T1 weighting (MRI-T1), and MRI with T2 weighting (MRI-T2) are the widely used medical imaging modalities. The aim is to contribute the relevant information from each modality to the resultant image, which aids the radiologist in figuring out the tumor area for radiotherapy planning as well as treatment. [Figure 1] shows the delineation of the suspected volume, in which different volumes are marked with different colors. These regions are defined in the International Atomic Energy Agency^{[1]} report on ICRU 50/62/83: with the technological shift from the plane (XY axis) to the volume (three-dimensional [3D]) domain, the particular volumes need to be defined accurately. The treated volume is the tissue volume planned to receive the dose selected by the radiation oncologist; the related volumes include the planning target volume, the irradiated volume, the clinical target volume, and the gross tumor volume (GTV).
The diverse modalities provide different types of information. For example, a CT scan provides tumor information along with additional information such as blood vessels, inflammation, shell, and edema; MRI gives soft-tissue and contrast information; and functional information is provided by the PET/single-photon emission computerized tomography (SPECT) scan.
The fusion process is a stepwise process in which the first step is to acquire the medical images to be fused. Before fusion, the images can either be fed directly to the fusion system or be decomposed so that fusion performance is enhanced. Decomposition extracts information from the source images into subbands, for which various techniques have been implemented. The strategies for embedding this useful information in the final output are called fusion algorithms, and the literature offers broad overviews of the available methods.^{[2],[3]} The cross-bilateral filter (CBF) has been used to fetch the detail from each input, after which weights are assigned to formulate the final fused image.^{[4]} The authors used conventional as well as objective measures to validate the proposed method. Du et al. implemented reflectance- and illumination-based decomposition.^{[5]} Two color-based methods, Retinex-based and gray world, are discussed:^{[5]} the former decomposes an image into reflectance and illumination images, whereas the latter addresses the color-constancy issue. The human retina-inspired model^{[6]} preserves spectral features with minimum spatial distortion and performed well when compared with hue-intensity-saturation, discrete wavelet transform (DWT), wavelet-based sharpening, and wavelet-à-trous-transform methods. Image fusion based on convolutional neural network (IFCNN) is proposed in Zhang et al.'s study,^{[7]} with two convolution layers to extract image features, followed by the relevant fusion rule. The method is fully convolutional, allowing it to be trained end to end, and the output confirms better performance in maintaining data and color information. The nonsubsampled contourlet transform (NSCT) is used with fuzzy entropy^{[8]} to improve the visual inspection of the tumor area: after decomposing the input, the low-frequency components are fused by calculating the fuzzy entropy value, and the high-frequency components are fused using regional energy; the average gradient (AG), standard deviation (SD), and edge metrics are better than those of other methods. Image local features are extracted from the input images and combined with fuzzy logic,^{[9]} with per-pixel weights calculated to combine the source images according to the weight factor, yielding better results. The basic principle of ANFIS, its workflow, and the parameter settings for nonlinear functions are explained by Walia et al.^{[10]} An attempt is made^{[11]} to provide a decision support system for diagnosing Alzheimer's disease using a neural network; the fuzzified data set is used in a hybrid neuro-fuzzy system, which proved more precise than the old manual system. A method based on the Berkeley wavelet transform (BWT) and support vector machine (SVM)^{[12]} is proposed to segment brain MR images so as to precisely distinguish tumor cells from healthy cells. Compared with ANFIS, backpropagation, and KNN classifiers, the proposed method achieved good results in terms of sensitivity, specificity, and accuracy. A fusion algorithm to increase the segmentability of echocardiography features using pixel-level principal component analysis and DWT techniques is discussed in Mazaheri et al.'s study,^{[13]} and the proposed method is able to reduce noise and artifacts.
A fuzzy-adaptive reduced pulse-coupled neural network (RPCNN) is employed^{[14]} with multiscale geometric analysis, where the fuzzy membership values act as the linking strength of the RPCNN's neurons. The drawbacks of other fusion techniques, such as reduced contrast and missing details, are properly managed by the proposed method. CT and MRI images are fused using an iterative neuro-fuzzy approach (INFA)^{[15]} and a lifting wavelet transform with NFA; the INFA results are better in terms of metrics and visual information. An adaptive neuro-fuzzy inference system is employed to fuse PET and MRI images by Kavitha et al.,^{[16]} where the source images are decomposed using a shift-invariant wavelet and then fused using ANFIS. The metrics entropy, AG, average, SD, mean square error, and peak signal-to-noise ratio (PSNR) are calculated, and the results agree on both visual and mathematical grounds. Two-scale image decomposition with sparse representation is done,^{[17]} including contrast enhancement, spatial gradient-based edge detection, and then splitting the image into base and detail layers. Hu et al.^{[18]} used a pixel-level multiscale directional bilateral filter with a focus on multisensor images; its ability to preserve edges and directional information resulted in good performance in terms of both visual quality and performance metrics. Experimental outcomes are compared with conventional methods on infrared and medical images: the multiscale directional bilateral filter outperformed DWT, shearlet wavelet transform, dual-tree complex wavelet transform (DTCWT), NSCT, and the multiscale bilateral filter (MBF), with visual information fidelity (VIF) of 72.01% on multisensor images, and it also gives better values for medical images. A VIF of 77.05% is achieved on CT and MRI images, and for MRI-T1 and MRI-T2 images, the highest Q_{E} (edge information) of 50.02% is obtained. A pixel-level fuzzy-based fusion scheme with minimum-sum-mean of maximum (MIN-SUM-MOM) is presented^{[19]} and shown to be better than the minimum-maximum-centroid (MIN-MAX-Centroid) algorithm. A hybrid approach integrating the advantages of NSCT, RPCNN, and fuzzy logic is depicted,^{[14]} and this naïve fusion scheme has shown its worth with higher spatial resolution and a smaller difference from the original image; higher entropy and SD values are achieved with the new algorithm.
Wavelet transforms have been widely used in the past and are explored as the state of the art by many authors. PET and CT images are transformed^{[20]} using the two-dimensional DWT, followed by a weighted average of the approximate coefficients. Shah and Merchant^{[21]} used a weighted average of pixels derived from eigenvalues in the wavelet domain, obtaining sharpened resultant images; apart from conventional metrics, the Petrovic and Xydeas image fusion metric is used for evaluation. Haribabu et al.^{[22]} decomposed PET images into intensity, hue, and saturation components, which were then fed to the DWT for further decomposition into low- and high-frequency components. The low-frequency components were fused with the average rule, and for the high-frequency components, spatial frequency (SF) was considered with an 8×8 window. The performance is better than that of the PCA fusion scheme, with outcomes of 62.2149, 3.0617, and 3.4886 for PSNR, entropy, and SD, respectively. Spatial features are preserved^{[23]} using the difference in red and blue components by applying YCbCr, and DWT is applied for image fusion using pulse-coupled neural networks (PCNNs). The proposed method shows fewer spatial distortions compared with contourlet, curvelet, and DWT methods; discrepancy and AG values using the proposed technique are 3.8931 and 5.1807, respectively. A cascade combination of the stationary wavelet transform and the nonsubsampled wavelet transform is depicted^{[24]} with a focus on preserving the spectral and spatial features of the source images; the entropy, SD, fusion factor (FF), Q (edge strength), SSIM (structural similarity), EME (measure of enhancement), and PSNR of the proposed method are 6.9414, 74.1232, 4.2770, 0.8588, 0.8867, 24.7221, and 39.5643, respectively. The integer wavelet transform is used to decompose the images,^{[25]} and neuro-fuzzy fusion is then applied to the wavelet coefficients; entropy, fusion symmetry (FS), and FF are calculated, achieving higher entropy and FF values than DWT-based neuro-fuzzy fusion. Rajkumar et al.^{[15]} introduced the INFA fusion method, and an NFA with lifting wavelet transform was also implemented on CT and MRI images. For evaluation, a comparison with existing methods, i.e., DWT and the average method, was made on six datasets of brain images; the new methods outperformed the existing ones, with normalized correlation coefficient (CC), entropy, and structural similarity index values of 0.9526, 6.5104, and 0.6574, respectively, using INFA.
The literature survey shows that authors have used many decomposition and fusion methods, such as DWT, IFCNN, IHS, NSCT, BWT with SVM, DTCWT, fuzzy-adaptive RPCNN, MBF, PCNN, and CBF. The DWT has the drawback of shift variance, and artifacts are introduced that cause redundancy; it is also unable to perform well at edges, and the contrast is reduced. Although the redundant DWT (RDWT) is translation invariant, it provides redundant information. The DTCWT is shift invariant and has directional sensitivity, and its reconstruction property is also perfect, but it is computationally expensive and demands substantial memory. CBF has been used in the past for fusion; however, to investigate its features further, CBF is used in this work for decomposing the images, which is a prefusion requirement. On applying CBF, an image is decomposed into two components, namely the CBF component and the detail component; subtracting the CBF component from the original image gives the detail component, which is used for further processing. The detail component of each modality is given as input to the ANFIS for fusion. While many researchers have proposed methods to improve fusion performance, in Tang et al.'s study,^{[26]} the authors built a multimodal medical image fusion database, applied fusion algorithms to it, and then assessed the quality of the output.
In this paper, the edge-preserving capability of the CBF is explored to enhance multimodality medical image fusion results. The proposed work verifies the devised technique and compares it with the techniques available in the MATLAB toolbox. The purpose is to enhance the fusion results, which helps the oncologist outline the tumor area more precisely than the existing techniques allow.
Subjects and Methods   
The proposed method takes different-modality images A and B of the same organ as input. It differs from other methods in that the CBF is able to target the edges: one image is used to shape the kernel weights, which are then applied to the second image. In parallel with these calculations, the biorthogonal wavelet (bior2.2) transform is applied to the source images A and B, giving approximation and detail components. The fuzzy inference system and the average rule are applied to the decomposed parts for fusion. The block diagram of the proposed scheme is shown in [Figure 2] for two input images A and B.
Cross-bilateral filter
Decomposition is performed taking both scale and orientation into consideration. High-pass and low-pass filters give a complete representation of the image in the decomposed parts, i.e., the source images are decomposed into subbands, each containing low- and high-frequency components. Low-frequency components, denoting smooth regions, are passed by the low-pass filter, whereas high-frequency components, denoting edges, are passed by the high-pass filter; smoothing here is the convolution of the image with a uniform/Gaussian kernel. The CBF accomplishes edge-preserved smoothing by modifying the kernel based on the local content, which is impossible to achieve with a fixed Gaussian kernel. Using cross-bilateral filter-based decomposition, the detail coefficients are obtained.
The CBF component is calculated, as depicted in [Figure 2], for each input image (A_{CBF} and B_{CBF}) while adjusting the radiometric sigma and the geometric sigma; a Euclidean distance calculation takes the neighboring pixels into account. When these CBF components are subtracted from their respective original images, the detail components are obtained:
A_{DETAIL} = A − A_{CBF}
B_{DETAIL} = B − B_{CBF}
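As an illustration, the following is a minimal NumPy sketch of this cross-bilateral filtering step, assuming single-channel images; the function and variable names are ours, and the paper's own implementation is in MATLAB. The kernel combines a geometric (spatial) Gaussian with a radiometric Gaussian computed on the guide image, and is applied to the target image:

```python
import numpy as np

def cross_bilateral_filter(guide, target, sigma_s=1.8, sigma_r=25.0, ksize=5):
    """Filter `target` with kernel weights shaped by `guide` (cross-bilateral).

    Default parameters follow the settings reported later in the paper
    (spatial sigma 1.8, radiometric sigma 25, 5x5 kernel).
    """
    guide = guide.astype(np.float64)
    target = target.astype(np.float64)
    r = ksize // 2
    # Geometric (spatial) Gaussian over the kernel window.
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    g_spatial = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma_s ** 2))
    pad_g = np.pad(guide, r, mode='reflect')
    pad_t = np.pad(target, r, mode='reflect')
    out = np.zeros_like(target)
    H, W = target.shape
    for i in range(H):
        for j in range(W):
            win_g = pad_g[i:i + ksize, j:j + ksize]
            win_t = pad_t[i:i + ksize, j:j + ksize]
            # Radiometric weights come from the *guide* image's intensity
            # differences; this is what makes the filter "cross" bilateral.
            g_range = np.exp(-(win_g - guide[i, j]) ** 2 / (2.0 * sigma_r ** 2))
            w = g_spatial * g_range
            out[i, j] = np.sum(w * win_t) / np.sum(w)
    return out

# Detail components as residuals, per the equations above:
# a_detail = A - cross_bilateral_filter(B, A)
# b_detail = B - cross_bilateral_filter(A, B)
```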
These detail components are then further decomposed using wavelets. Among the wavelet families in the literature, Daubechies wavelets are the extremal-phase wavelets, with up to 15 vanishing moments to choose from, while Haar (db1) is the only orthogonal wavelet with linear phase. Biorthogonal wavelets of different orders were implemented, and bior2.2 is finally used in this paper. The high-frequency components calculated by bior2.2 act as input to the ANFIS for fusion, as they carry most of the salient information.
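A single-level bior2.2 decomposition can be written, for example, with PyWavelets (an assumption on our part; the paper works in MATLAB's toolbox environment):

```python
import pywt

# Single-level 2-D DWT of a detail component (a_detail from the
# previous sketch) with the bior2.2 wavelet. pywt.dwt2 returns the
# approximation subband and the three high-frequency subbands
# (horizontal, vertical, diagonal).
approx, (horiz, vert, diag) = pywt.dwt2(a_detail, 'bior2.2')

# The high-frequency subbands feed the ANFIS fusion stage; the
# approximation subbands of the two modalities are fused with the
# average rule, and pywt.idwt2 reconstructs the fused image.
```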
Adaptive neuro-fuzzy inference system-based fusion
ANFIS is a class of adaptive networks that are functionally equivalent to fuzzy inference systems (FISs), in which the parameters of a Takagi–Sugeno system are tuned using a hybrid learning method: least-squares and backpropagation gradient-descent methods are used in combination to model the training data set. The ANFIS used in this study has two inputs with five membership functions for each input, a set of 25 rules, and a single output, i.e., the fused image. Neural networks are used with fuzzy logic so that the neurons adjust the membership functions; apart from improving the performance of the inference system, this reduces development time, as the tuning proceeds automatically.
The advantage of using ANFIS is that, given training and testing data, the system can be trained. Hence, after the selection of the membership function type and the number of membership functions, the rules are defined by the ANFIS, and [Figure 3] shows the rule surface viewer, where the corresponding output is defined for each combination of inputs. The performance of the designed system can be checked with the training as well as the testing data [Figure 4]a and [Figure 4]b, and the graphical depiction allows tracing the performance in a user-friendly manner. In the proposed work, the neuro-fuzzy inference system for fusion, shown in [Figure 5], has 2 input nodes, 5 membership functions for each input node, 25 nodes signifying the 25 rules, defuzzification nodes, and a single output node; a rule-evaluation sketch is given after the figure captions.
Figure 4: (a) The performance of the adaptive neuro-fuzzy inference system with testing data and (b) with training data, respectively
Figure 5: Structure of the adaptive neuro-fuzzy inference system model, depicting 2 input nodes, 5 membership functions for each input node, 25 nodes signifying 25 rules, defuzzification nodes, and a single output node
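To make the rule structure concrete, the sketch below evaluates a zero-order Sugeno system with the same 2-input, 5-membership, 25-rule shape. The Gaussian membership parameters and rule consequents here are illustrative placeholders; in ANFIS they are learned by the hybrid least-squares/backpropagation procedure:

```python
import numpy as np

def gauss_mf(x, c, s):
    """Gaussian membership function."""
    return np.exp(-((x - c) ** 2) / (2.0 * s ** 2))

def sugeno_fuse(a, b, centers, sigma, consequents):
    """Zero-order Sugeno inference over a 5x5 rule grid (25 rules).

    `a` and `b` are corresponding detail coefficients from the two
    modalities, assumed normalized to [0, 1].
    """
    mu_a = gauss_mf(a, centers, sigma)          # 5 memberships for input A
    mu_b = gauss_mf(b, centers, sigma)          # 5 memberships for input B
    w = np.outer(mu_a, mu_b)                    # 25 rule firing strengths
    return np.sum(w * consequents) / np.sum(w)  # weighted-average defuzzification

# Five membership functions per input, evenly spread over [0, 1]:
centers = np.linspace(0.0, 1.0, 5)
sigma = 0.15
# Illustrative consequent for rule (i, j): favor the stronger detail.
consequents = np.maximum.outer(centers, centers)
fused = sugeno_fuse(0.3, 0.8, centers, sigma, consequents)
```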
Evaluation parameters
To verify the output obtained after fusion, evaluation is done by computing metrics and comparing their values. At the same time, since the image is ultimately presented to the radiation oncologist for planning and treatment, he/she should also be satisfied with the output image visually. Hence, the evaluation is categorized^{[4]} into evaluation using conventional metrics, evaluation using objective metrics, and subjective evaluation.
Evaluation using conventional metrics
Various conventional metrics are available in the literature for evaluating fusion results.^{[4],[26],[27],[28],[29]} The metrics frequently used to verify results are briefly discussed below; a code sketch of them follows the list.
Average pixel intensity (API) computes the mean of the pixel values as an index of contrast:
API = (1/(M × N)) Σ_{i=1}^{M} Σ_{j=1}^{N} f(i, j),
where f(i, j) is the pixel value at location (i, j) of an M × N image.
Average gradient (AG) is calculated to find the clarity and sharpness of the fused image:
AG = (1/(M × N)) Σ_{i} Σ_{j} sqrt([(f(i, j) − f(i − 1, j))² + (f(i, j) − f(i, j − 1))²]/2).
Entropy gives the probability-based amount of information present in the image and is calculated by the formula:
E = −Σ_{k=0}^{L−1} p_k log_2 p_k,
where p_k is the probability of intensity value k among the L gray levels.
Mutual information (MIF), also called cross entropy, gives the overall mutual information between the source images and the fused image. MI_{AF} and MI_{BF} are first calculated from the joint histograms of (A, F) and (B, F), and their sum gives the final value:
MI = MI_{AF} + MI_{BF}.
Fusion symmetry (FS) compares the symmetry of the final fused image with respect to the two input images:
FS = 2 − |MI_{AF}/(MI_{AF} + MI_{BF}) − 0.5|.
Correlation coefficient (CC) measures the correlation between the output and the input images; r_{AF} and r_{BF} signify the relevance of A and B with respect to F:
CC = (r_{AF} + r_{BF})/2.
Spatial frequency (SF) is calculated to find the region-wise information level from the row and column frequencies (RF and CF):
RF = sqrt((1/(M × N)) Σ_{i} Σ_{j} (f(i, j) − f(i, j − 1))²), CF = sqrt((1/(M × N)) Σ_{i} Σ_{j} (f(i, j) − f(i − 1, j))²), SF = sqrt(RF² + CF²).
Root-mean-square error (RMSE) measures the error between the reconstructed image and the original image:
RMSE = sqrt((1/(M × N)) Σ_{i} Σ_{j} (I_{source}(i, j) − I_{fused}(i, j))²),
where M and N are the image dimensions, I_{source} is the original image, and I_{fused} is the output image.
PSNR is based on the RMSE and is calculated as:
PSNR = 20 × log_{10}(L/RMSE),
where L is the maximum gray level (255 for 8-bit images).
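A compact NumPy sketch of these conventional metrics follows; this is our illustrative implementation, assuming 8-bit grayscale images:

```python
import numpy as np

def api(img):
    """Average pixel intensity: mean of all pixel values."""
    return float(np.mean(img))

def entropy(img, levels=256):
    """Shannon entropy of the gray-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def mutual_information(x, y, bins=256):
    """MI between two images, computed from their joint histogram."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def spatial_frequency(img):
    """SF from the row and column frequencies RF and CF."""
    f = img.astype(np.float64)
    rf = np.sqrt(np.mean((f[:, 1:] - f[:, :-1]) ** 2))  # row frequency
    cf = np.sqrt(np.mean((f[1:, :] - f[:-1, :]) ** 2))  # column frequency
    return float(np.hypot(rf, cf))

def rmse(src, fused):
    """Root-mean-square error between source and fused images."""
    d = src.astype(np.float64) - fused.astype(np.float64)
    return float(np.sqrt(np.mean(d ** 2)))

def psnr(src, fused, peak=255.0):
    """RMSE-based peak signal-to-noise ratio for 8-bit images."""
    return float(20.0 * np.log10(peak / rmse(src, fused)))

# MIF and FS from the pairwise mutual information values:
# mi = mutual_information(a, f) + mutual_information(b, f)
# fs = 2 - abs(mutual_information(a, f) / mi - 0.5)
```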
Evaluation using objective metrics
Gradient information-based performance measurement is done in the objective evaluation, with three factors:
Total fusion performance, Q^{AB/F}, measures the information transferred from the original images to the fused image.
Fusion loss, L^{AB/F}, measures the loss of information due to fusion.
Fusion artifact, N^{AB/F}, measures the undesired artifacts added to the image by fusion.
The sum of the above three factors should be one, i.e.,
Q^{AB/F} + L^{AB/F} + N^{AB/F} = 1
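A simplified sketch of the gradient-based total fusion performance, in the spirit of Xydeas and Petrović's model, is shown below. The Sobel-based edge model and the sigmoid constants are the commonly cited ones, and this re-implementation is ours rather than the authors' exact code:

```python
import numpy as np
from scipy.ndimage import sobel

def edge_strength_orientation(img):
    """Sobel edge strength g and orientation alpha in (-pi/2, pi/2)."""
    f = img.astype(np.float64)
    sx = sobel(f, axis=1)
    sy = sobel(f, axis=0)
    g = np.hypot(sx, sy)
    alpha = np.arctan(np.divide(sy, sx, out=np.zeros_like(sy), where=sx != 0))
    return g, alpha

def q_abf(a, b, f):
    """Simplified Q^{AB/F}: edge information transferred from A and B to F."""
    def preservation(src, fus):
        g_s, al_s = edge_strength_orientation(src)
        g_f, al_f = edge_strength_orientation(fus)
        # Relative edge strength (smaller over larger) and orientation match.
        with np.errstate(divide='ignore', invalid='ignore'):
            G = np.where(g_s > g_f, g_f / g_s, g_s / g_f)
        G = np.nan_to_num(G)
        A = np.abs(np.abs(al_s - al_f) - np.pi / 2) / (np.pi / 2)
        # Sigmoidal perceptual model with the commonly cited constants.
        Qg = 0.9994 / (1.0 + np.exp(-15.0 * (G - 0.5)))
        Qa = 0.9879 / (1.0 + np.exp(-22.0 * (A - 0.8)))
        return Qg * Qa, g_s          # weight each pixel by its edge strength
    q_a, w_a = preservation(a, f)
    q_b, w_b = preservation(b, f)
    return float(np.sum(q_a * w_a + q_b * w_b) / np.sum(w_a + w_b))
```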
Subjective evaluation
The subjective evaluation is done on the treatment planning system (TPS) by the oncologist, in which the fused image is verified against the input images to confirm that the information from both modalities is available in the output image. The tumor volume is compared by contouring the tumor area in the input modalities and then in the fused image. A good fusion output should present the precise tumor volume while retaining the capability to spare the organs at risk.
Results   
The open-source images from the Harvard database^{[30]} were taken to perform fusion, as the proposed method is a new technique whose viability, demonstrated in this paper, turns out to be better than that of the existing methods. The proposed method can be extended to real-time images as well; however, to operate on real images, a registration phase must be performed first. Diagnostically, 1-cm cuts are sufficient, but for image fusion for treatment planning in radiation oncology, 1-mm cuts are required. Keeping these details in view, the following five cases were recommended by the radiation oncologist and the medical physicist:
Case 1: Sarcoma, with fusion of the CT modality with the MRI modality
Case 2: Metastatic adenocarcinoma, with fusion of the MR with gadolinium contrast medium (MR-GAD) modality with the MRI-T2 modality
Case 3: Meningioma, with fusion of the CT modality with the MRI-T2 modality
Case 4: Meningioma, with fusion of the CT modality with the MR-GAD modality
Case 5: Astrocytoma, with fusion of the MR-GAD modality with the PET-fluorodeoxyglucose (FDG) modality.
In Case 1, the patient was a 22-year-old man admitted for resection of Ewing's sarcoma. On examination, he was inattentive and confused and had a right homonymous hemianopia, a left inferior quadrantanopia, right lower-extremity hyperreflexia, and a right extensor plantar response. CT and MRI modalities are available in this case.
Case 2 is metastatic carcinoma of the colon in a 62-year-old man suffering a first seizure, a tonic-clonic convulsion with focal onset. There was a history of carcinoma of the colon, with recent metastasis to the liver and lung. MR images show a lesion involving the right second frontal convolution and another in the cerebellum, near the fourth ventricle, also visible on the sagittal image. The low signal of the frontal lesion on T2-weighted images is remarkable, since metastases are usually associated with high signal. MR-GAD and MRI-T2 images are available for fusion.
Case 3 is a meningioma of a 75-year-old man with CT and MRI-T2 modalities available; the CT is fused with the MRI-T2.
Case 4 is a meningioma of a 75-year-old man with CT and MR-GAD modalities available, with fusion of the CT with the MR-GAD to add precision to the planning and treatment.
Case 5 is a 53-year-old patient with a grand mal seizure in whom brain biopsy revealed a Grade IV astrocytoma. The MR-GAD and PET images with the FDG radiotracer are available and are fused to figure out the tumor volume.
The image fused by the proposed method is compared with the different methods available in MATLAB. The comparison of fusion algorithms based on conventional metrics for Cases 1-5 is given in [Table 1], and the comparison based on objective metrics is given in [Table 2]. The initial decomposition wavelet used in each method is bior2.2 at level 1. Linear image fusion is used with parameter value = 0.4. The proposed method used CBF with the following parameters: spatial sigma = 1.8, radiometric sigma = 25, kernel size = 5, and covariance window size = 11. A sketch of the assembled pipeline with these settings follows [Table 2].
Table 1: Comparison of fusion algorithms based on conventional metrics for Case 1-Case 5
 Table 2: Comparison of fusion algorithms based on objective metrics for Case 1–Case 5
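The following end-to-end sketch wires the earlier snippets together under our reading of the block diagram, with the reported parameter values. The covariance-window weighting of the original CBF fusion scheme is not modeled here, and coefficient normalization for the Sugeno stage is glossed over:

```python
import numpy as np
import pywt

def fuse_proposed(img_a, img_b):
    """End-to-end sketch: CBF detail extraction, bior2.2 DWT, average
    rule on approximations, Sugeno-style fusion of high frequencies.
    Relies on cross_bilateral_filter, sugeno_fuse, centers, sigma,
    and consequents from the earlier sketches."""
    # 1. CBF decomposition with the reported settings (spatial sigma 1.8,
    #    radiometric sigma 25, 5x5 kernel); detail = image - CBF output.
    a_detail = img_a - cross_bilateral_filter(img_b, img_a, 1.8, 25.0, 5)
    b_detail = img_b - cross_bilateral_filter(img_a, img_b, 1.8, 25.0, 5)
    # 2. Level-1 bior2.2 decomposition of each detail component.
    cA_a, det_a = pywt.dwt2(a_detail, 'bior2.2')
    cA_b, det_b = pywt.dwt2(b_detail, 'bior2.2')
    # 3. Average rule on the approximation subbands ...
    cA_f = (cA_a + cA_b) / 2.0
    # ... and the rule-based Sugeno fusion, applied coefficient-wise, on
    # the three high-frequency subbands (normalization to [0, 1] omitted).
    fuse_pt = np.vectorize(lambda x, y: sugeno_fuse(x, y, centers, sigma, consequents))
    det_f = tuple(fuse_pt(da, db) for da, db in zip(det_a, det_b))
    # 4. Inverse DWT reconstructs the fused image.
    return pywt.idwt2((cA_f, det_f), 'bior2.2')
```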
A higher value of each metric, except RMSE, L^{AB/F}, and N^{AB/F}, indicates a better fusion outcome. The higher values are shown in bold; for RMSE, L^{AB/F}, and N^{AB/F}, the lower values obtained are shown in bold.
The quantitative comparison shows that for Case 1, the proposed method achieved better API, entropy, MIF, FS, and CC, whereas for AG and SF, the CBF method performed well. This implies that the fused output has better pixel intensity, and its entropy is also better; the MIF has increased, and the symmetry of the information is higher than with the other methods. The proposed method is also capable of improving the correlation between the input and output images. Moreover, in terms of objective measures, more information is transferred from the input images to the output image, the loss of information is lower, and fewer artifacts are added compared with the other methods, as shown in [Figure 6]a, [Figure 6]b, [Figure 6]c, [Figure 6]d, [Figure 6]e, [Figure 6]f, [Figure 6]g, [Figure 6]h. It is clear from [Figure 6]h that the image obtained from the proposed method is more informative in terms of MIF, correlation, entropy, and symmetry with respect to the sources. This case achieved the highest PSNR, with a value of 38.4048.
For Case 2, the proposed method is able to improve only the pixel intensity, the entropy of the fused output, and the correlation between the inputs and the output. The linear fusion method performed better in terms of MIF and FS, and the CBF performed well in terms of AG and SF. Looking at the objective metrics, the proposed method performs well in terms of minimum addition of artifacts, while the CBF performs well in terms of transfer of information, which has increased with minimum loss. [Figure 7]a, [Figure 7]b, [Figure 7]c, [Figure 7]d, [Figure 7]e, [Figure 7]f, [Figure 7]g, [Figure 7]h presents the inputs as well as the outputs of the different methods, and [Figure 7]h shows that intensity, entropy, and correlation are better, but MIF and symmetry are not, for which the linear fusion method performed better. The image also verifies that a negligible amount of artifact is transferred to the fused image. The highest entropy, with a value of 1.6241, the highest CC, with a value of 0.9176, and the lowest N^{AB/F}, with a value of 0.0001, are attained.
For Case 3, the proposed method achieves better API, AG, entropy, MIF, symmetry of the fused image, cross correlation, SF, and PSNR, along with a minimum RMSE, as shown in [Figure 8]a, [Figure 8]b, [Figure 8]c, [Figure 8]d, [Figure 8]e, [Figure 8]f, [Figure 8]g, [Figure 8]h. The fused image is informative, as it has more transfer of information with less loss of information and addition of artifacts. For this case, the lowest RMSE, with a value of 31.0082, the lowest L^{AB/F}, with a value of 0.0969, and the highest Q^{AB/F}, with a value of 0.8826, are obtained.
For Case 4, the values of the metrics API, AG, entropy, MIF, FS, CC, SF, and PSNR are the highest for the proposed method among all methods. The new method was also capable of transferring maximum information from the source images to the fused image (Q^{AB/F} = 0.869), as shown in [Figure 9]a, [Figure 9]b, [Figure 9]c, [Figure 9]d, [Figure 9]e, [Figure 9]f, [Figure 9]g, [Figure 9]h. The information loss, L^{AB/F} = 0.1410, and the addition of artifacts, N^{AB/F} = 0.0021, are also negligible. In this case, the highest FS, with a value of 1.9999, and the highest SF, with a value of 35.4682, are achieved.
Similarly, for Case 5, [Figure 10]a, [Figure 10]b, [Figure 10]c, [Figure 10]d, [Figure 10]e, [Figure 10]f, [Figure 10]g, [Figure 10]h shows the input images as well as the fused output. PET-FDG gives the functional activity of the tissues under study: using the radioactive material, the radioactivity inside the body is captured to study abnormalities. The conventional metrics show acceptable values, and the objective metrics also show the values expected of a good fusion algorithm. In this case, the highest API, with a value of 7.1181, AG, with a value of 18.2626, MIF, with a value of 0.9893, and FS, with a value of 1.9999, are achieved. For the subjective evaluation, the output images were visually inspected by the oncologist. All the cases were presented to the expert, as shown in [Figure 11], [Figure 12], [Figure 13], [Figure 14], [Figure 15], and the following remarks were obtained:
Figure 11: Case 1 showing computerized tomography image, magnetic resonance imaging image, and the fused output, respectively
Figure 12: Case 2 showing magnetic resonance imaging-gadolinium contrast medium image, magnetic resonance-T2 image, and the fused output, respectively
Figure 13: Case 3 showing computerized tomography image, magnetic resonance-T2 image, and the fused output, respectively
Figure 14: Case 4 showing computerized tomography image, magnetic resonance imaging-gadolinium contrast medium image, and the fused output, respectively
Figure 15: Case 5 showing magnetic resonance imaging-gadolinium contrast medium image, positron emission tomography-fluorodeoxyglucose image with missed tumor areas, and the fused output, respectively
For Case 1, the CT image shows a particular length of the tumor (red arrow), while the MRI image shows many artifacts, is less sensitive than CT, and shows a longer tumor (yellow arrow). Once the modalities are fused, one is actually able to see the actual site of the tumor and what the planned target volume will be, and this precision in turn will save a lot of normal tissue that might otherwise be over-irradiated. Thus, the fused output reduces the size of the planned target volume and, finally, the overall dose to the normal organs.
For Case 2, as displayed in [Figure 7] and [Figure 12], contouring on MR-GAD displays a larger treatment volume (in red) compared with MRI-T2 (in yellow). After fusion, we can observe a significant difference in treatment volume. The radiotherapy treatment should be executed as per the MRI-T2 image, which reflects the actual size of the tumor; fusion has thus increased the precision of the radiotherapy treatment.
For Case 3, as shown in [Figure 8] and [Figure 13], MRI-T2 shows the midline shift well and demarcates the tumor (in yellow) but does not differentiate it from the edema, whereas CT does not show the midline shift but is able to present the basic tumor outline (in yellow) and the tumor area (in red). On fusing the two modalities, the resultant image precisely defines the tumor volume (in red) along with the midline shift. This also signifies that CT-based treatment may not perform well on its own, and hence fusing CT with other modalities for more information is needed.
Similarly, for Case 4, as visible in [Figure 9] and [Figure 14], CT does not show the midline shift but presents the outline of the tumor area (in yellow), while MR-GAD displays the midline shift and the tumor volume, although the variation from the edema is unclear. The tumor extends along the perisylvian fissure anteriorly in the MR-GAD image. On fusing the two modalities, the resultant image carries the desirable properties of both modalities, i.e., the midline shift as well as the tumor volume.
For Case 5, as displayed in [Figure 10] and [Figure 15], the MR-GAD modality captures the tumor present in the organ. In PET, the metabolic information shows that the cancerous area is visible but is not completely delineated, and hence a part of the tumor volume is missed. In the fused image, however, the tumor volume is precisely delineated and the healthy tissues are spared from radiation.
On the other hand, if the radiologist witnesses additional information in the volume lying between the two contours, the radiotherapy can be executed on the bigger volume. Uncertainty in the patient's position in the treatment room will also advocate the use of the outer contouring volume for the radiotherapy treatment, and the radiation oncologist may suggest treating the bigger volume if involvement of the lymph nodes is seen in the volume lying between the two contours. The statistical analysis of the metrics is also done using graphs to envision the performance of the fusion methods in each case [Figure 16],[Figure 17],[Figure 18],[Figure 19],[Figure 20],[Figure 21],[Figure 22],[Figure 23],[Figure 24],[Figure 25],[Figure 26],[Figure 27].
Discussion   
A new technique for multimodality medical image fusion is proposed using CBF and ANFIS. The proposed fusion method gives a precise tumor area with preservation of edges and negligible loss of information. The output is more informative in terms of edge information, information transfer, and PSNR, as quantified by the different metrics discussed in the results. The doctors also confirmed that the abnormality is viewed more clearly, and the addition of artifacts is minimal in the proposed algorithm's output when compared with the mentioned state-of-the-art methods.
The fusion output can play a vital role in delivering an optimum dose to the tumor so that underdose and overdose conditions are avoided. In radiotherapy practice, a dose above 107% of the recommended dose is called a hot spot, or overdose, and a dose below 95% of the suggested amount is called a cold spot, or underdose. The dose quantity directly affects the surrounding organs: a cold spot may lead to relapse/recurrence of the tumor, whereas a hot spot may disturb the functioning of the targeted organ.
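These tolerance limits amount to a simple ratio check, sketched below for illustration (the function name and interface are ours):

```python
def classify_dose(delivered_gy, prescribed_gy):
    """ICRU-style tolerance check: >107% of the prescribed dose is a
    hot spot (overdose); <95% is a cold spot (underdose)."""
    ratio = delivered_gy / prescribed_gy
    if ratio > 1.07:
        return 'hot spot (overdose)'
    if ratio < 0.95:
        return 'cold spot (underdose)'
    return 'within tolerance'

# Example: a 65 Gy point dose against a 60 Gy prescription (~108%)
# is flagged as a hot spot.
print(classify_dose(65.0, 60.0))
```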
The proposed method performs fusion well by adjusting only two parameters, which control the size and the contrast of the features to be preserved. The noniterative nature of the CBF avoids the cumulative effect over several iterations, which may otherwise mislead the fusion process. To conclude, the proposed method is more efficient than the mentioned fusion methods. The devised fusion method can be integrated with the TPS, and integrating the fusion technique with the radiation output parameters will improve the treatment planning. The present study applied fusion to slices; in the future, the work can be extended over a surface in the 3D domain instead of a single slice, i.e., the proposed method can be applied to voxels to create 3D volumes. The database used is open source and was retrieved from Harvard University.^{[30]}
Acknowledgment
We are very thankful to the radiation oncologists and the medical physicist/radiological safety officer at Behgal Institute of IT and Radiation Technology for evaluating the results visually and comparing the output image with the input images.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
References   
1.  
2. Huang B, Yang F, Yin M, Mo X, Zhong C. A review of multimodal medical image fusion techniques. Comput Math Methods Med 2020;2020:1-16.
3. El-Gamal FE, Elmogy M, Atwan A. Current trends in medical image registration and fusion. Egypt Inform J 2016;17:99-124.
4. Kumar BK. Image fusion based on pixel significance using cross bilateral filter. SIViP 2015;9:1193-204.
5. Du J, Li W, Tan H. Intrinsic image decomposition-based grey and pseudocolor medical image fusion. IEEE Access 2019;7:56443-56.
6. Sabalan D, Hassan G. MRI and PET images fusion based on human retina model. J Zhejiang Univ Sci A 2007;8:1624-32.
7. Zhang Y, Liu Y, Sun P, Yan H, Zhao X, Zhang L. IFCNN: A general image fusion framework based on convolutional neural network. Inf Fusion 2020;54:99-118.
8. Wang K, Cai K. Improving medical image fusion method using fuzzy entropy and nonsubsampling contourlet transform. Int J Imaging Syst Technol 2021;31:204-14.
9. Javed U, Riaz MM, Ghafoor A, Ali SS, Cheema TA. MRI and PET image fusion using fuzzy logic and image local features. ScientificWorldJournal 2014;2014:1-8.
10. Walia N, Singh H, Sharma A. ANFIS: Adaptive neuro-fuzzy inference system - A survey. Int J Comput Appl 2015;123:32-8.
11. Obi JC, Imainivan AA. Decision support system for the intelligent identification of Alzheimer using neuro fuzzy logic. Int J Soft Comput 2011;2:25-38.
12. Bahadure NB, Ray AK, Thethi HP. Image analysis for MRI based brain tumor detection and feature extraction using biologically inspired BWT and SVM. Int J Biomed Imaging 2017;2017:1-12.
13. Mazaheri S, Sulaiman PS, Wirza R, Dimon MZ, Khalid F, Moosavi Tayebi R. Hybrid pixel-based method for cardiac ultrasound fusion based on integration of PCA and DWT. Comput Math Methods Med 2015;2015:1-16.
14. Das S, Kundu MK. A neuro-fuzzy approach for medical image fusion. IEEE Trans Biomed Eng 2013;60:3347-53.
15. Rajkumar S, Bardhan P, Akkireddy SK, Munshi C. CT and MRI image fusion based on wavelet transform and neuro-fuzzy concepts with quantitative analysis. Proc ICECS 2014;2014:1-6.
16. Kavitha CT, Chellamuthu C. Fusion of PET and MRI images using adaptive neuro-fuzzy inference system. J Sci Ind Res (India) 2012;71:651-6.
17. Maqsood S, Javed U. Multi-modal medical image fusion based on two-scale image decomposition and sparse representation. Biomed Signal Process Control 2020;57:101810.
18. Hu J, Li S. The multiscale directional bilateral filter and its application to multisensor image fusion. Inf Fusion 2012;13:196-206.
19. Teng J, Wang S, Zhang J, Wang X. Fusion algorithm of medical images based on fuzzy logic. In: Proc 2010 Seventh International Conference on Fuzzy Systems and Knowledge Discovery. IEEE 2010;4:546-50.
20. Shangli C. Medical image of PET/CT weighted fusion based on wavelet transform. Proc ICBBE 2008;2008:2523-5.
21. Shah P, Merchant SN. An efficient adaptive fusion scheme for multifocus images in wavelet domain using statistical properties of neighborhood. In: Proc 14th International Conference on Information Fusion. ISIF 2011;Part 2:1935-41.
22. Haribabu M, Bindu CH, Prasad KS. Multimodal medical image fusion of MRI-PET using wavelet transform. In: Proc International Conference on Advances in Mobile Network, Communication and Its Applications. IEEE 2012;127-30.
23. Nobariyan BK, Daneshvar S, Foroughi A. A new MRI and PET image fusion algorithm based on pulse coupled neural network. In: Proc 22nd Iranian Conference on Electrical Engineering. IEEE 2014;1950-5.
24. Bhateja V, Patel H, Krishn A, Sahu A, Lay-Ekuakille A. Multimodal medical image sensor fusion framework using cascade of wavelet and contourlet transform domains. IEEE Sens J 2015;15:6783-90.
25. Kavitha CT, Chellamuthu C. Multimodal medical image fusion based on integer wavelet transform and neuro-fuzzy. In: Proc 2010 International Conference on Signal and Image Processing. IEEE 2010;296-300.
26. Tang L, Tian C, Li L, Hu B, Yu W, Xu K. Perceptual quality assessment for multimodal medical image fusion. Signal Processing: Image Communication 2020;85:1-8.
27. Li S, Hong R, Wu X. A novel similarity based quality metric for image fusion. In: Proc International Conference on Audio, Language and Image Processing. IEEE 2008;167-72.
28. Zheng Y, Essock EA, Hansen BC, Haun AM. A new metric based on extended spatial frequency and its application to DWT based fusion algorithms. Inf Fusion 2007;8:177-92.
29. Prakash O, Kumar A, Khare A. Pixel-level image fusion scheme based on steerable pyramid wavelet transform using absolute maximum selection fusion rule. In: Proc International Conference on Issues and Challenges in Intelligent Computing Techniques. IEEE 2014;765-70.
30. The Whole Brain Atlas. Harvard Medical School. Available from: http://www.med.harvard.edu/AANLIB/.
