Journal of Medical Physics


 
ORIGINAL ARTICLE
Year : 2021  |  Volume : 46  |  Issue : 4  |  Page : 263-277
 

Multi-modality medical image fusion using cross-bilateral filter and neuro-fuzzy approach


1 Department of Computer Science and Applications (DCSA), Panjab University, Chandigarh, India
2 Department of Computer Science and Applications (DCSA), Panjab University Regional Centre, Hoshiarpur, Punjab, India
3 Department of Radiotherapy, Behgal Institute of IT and Radiation Technology, Mohali, Punjab, India

Date of Submission: 18-Jan-2021
Date of Decision: 12-Jul-2021
Date of Acceptance: 01-Oct-2021
Date of Web Publication: 02-Dec-2021

Correspondence Address:
Ms. Harmeet Kaur
Department of Computer Science and Applications (DCSA), Panjab University, Chandigarh
India

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/jmp.JMP_14_21


 

Abstract

Context: The proposed technique uses the edge-preserving capability of the cross-bilateral filter (CBF) and an artificial intelligence technique, the adaptive neuro-fuzzy inference system (ANFIS), to fuse multi-modality medical images. Aims: The aim is to present complementary information in a single image, as each medical imaging modality captures a different domain of information. Settings and Design: First, the multi-modality medical images are decomposed using the CBF by tuning its radiometric and geometric sigma parameters, producing a CBF component and a detail component. The detail component is fed to the ANFIS for fusion. In parallel, the sub-bands obtained from the discrete wavelet transform (DWT) are fused using the average rule, and a reconstruction step gives the final fused image. Subjects and Methods: ANFIS is used to train Sugeno systems using neuro-adaptive learning. The fuzzy inference system within the ANFIS defines the fuzzy rules for fusion. In parallel, bior2.2 is used to decompose the source images. Statistical Analysis Used: The performance is verified on the Harvard database with five cases, and the results are evaluated using conventional metrics, objective metrics, and visual inspection. The metric values are visualized as column charts. Results: In Case 1, better results are obtained for all conventional metrics except average gradient (AG) and spatial frequency (SF); the preferred objective metric values are also achieved. In Case 2, all metrics except AG, mutual information, fusion symmetry, and SF attain the best values among all methods. In Cases 3, 4, and 5, all metrics achieve the desired values. Conclusions: The experiments show that the conventional, objective, and visual evaluations give the best results for Cases 1, 3, 4, and 5.


Keywords: Adaptive neuro-fuzzy inference system, biorthogonal wavelet, cross-bilateral filter, fusion, medical images


How to cite this article:
Kaur H, Kumar S, Behgal KS, Sharma Y. Multi-modality medical image fusion using cross-bilateral filter and neuro-fuzzy approach. J Med Phys 2021;46:263-77

How to cite this URL:
Kaur H, Kumar S, Behgal KS, Sharma Y. Multi-modality medical image fusion using cross-bilateral filter and neuro-fuzzy approach. J Med Phys [serial online] 2021 [cited 2022 Aug 18];46:263-77. Available from: https://www.jmp.org.in/text.asp?2021/46/4/263/331674



Introduction


Researchers are digging deeper into fusing multi-modality medical images, as a large proportion of the world's population is diagnosed with one type of cancer or another, and remarkable work is needed to save these precious lives. The main motive of image fusion is to integrate multi-modality images into a single fused image that represents the best information from each modality. Positron emission tomography (PET), computerized tomography (CT), T1-weighted magnetic resonance imaging (MRI-T1), and T2-weighted MRI (MR-T2) are the widely used medical imaging modalities. The aim is to contribute the relevant information from each modality to the resultant image, which will help the radiologist delineate the tumor area for radiotherapy planning as well as treatment. [Figure 1] shows the delineation of the suspected volume, in which the different volumes are marked with different colors. These regions are defined in the International Atomic Energy Agency[1] material based on the ICRU 50/62/83 reports, which state that, as technology shifts from the plane (XY axis) to the volume (three-dimensional [3D]), the individual volumes need to be defined accurately. The treated volume is the tissue volume planned to receive the dose selected by the radiation oncologist, and the delineation further consists of the planning target volume, irradiated volume, clinical target volume, and gross tumor volume (GTV).
Figure 1: Target volume delineation



The diverse modalities provide different types of information: a CT scan provides tumor information along with additional information such as blood vessels, inflammation, shell, and edema; MRI gives soft-tissue and contrast information; and functional information is provided by PET/single-photon emission computerized tomography (SPECT) scans.

The fusion process is a stepwise process in which the first step is to acquire the medical images to be fused. Before fusion, the images can either be fed directly to the fusion system or decomposed first so that the performance of fusion is enhanced. Decomposition extracts information from the source images into sub-bands, for which various techniques have been implemented. The strategies to embed this useful information in the final output are called fusion algorithms. A lot of work is present in the literature giving an overview of the available fusion methods.[2],[3] The cross-bilateral filter (CBF) has been used to fetch the detail from each input, after which weights are assigned to formulate the final fused image;[4] the authors used conventional as well as objective measures to validate the proposed method. Du et al. implemented reflectance- and illumination-based decomposition.[5] Two color-based methods, i.e. Retinex-based and gray-world, are discussed:[5] the former decomposes the image into reflectance and illumination images, whereas the latter addresses the color constancy issue. The human retina-inspired model[6] allows spectral features to be preserved with minimum spatial distortion; the method performed well when compared with hue-intensity-saturation, discrete wavelet transform (DWT), wavelet-based sharpening, and the wavelet-à trous transform. Image fusion based on convolutional neural network (IFCNN), a CNN-based fusion method, is proposed in Zhang et al.'s study[7] with two convolution layers to extract image features, followed by a relevant fusion rule. The method is fully convolutional, allowing it to be trained in an end-to-end way, and the output confirms better performance, as the method is able to maintain data and color information. The nonsubsampled contourlet transform (NSCT) is used with fuzzy entropy[8] to improve visual inspection of the tumor area: after decomposing the input, the low-frequency components are fused by calculating the fuzzy entropy value, and the high-frequency components are fused using regional energy; the average gradient (AG), standard deviation (SD), and edge metrics are better than those of other methods. Image local features are extracted from the input images and combined with fuzzy logic,[9] with per-pixel weights calculated to combine the source images according to the weight factor, and hence better results are achieved. The basic principle of the adaptive neuro-fuzzy inference system (ANFIS) is defined and its workflow explained by Walia et al.,[10] with parameter settings for nonlinear functions. An attempt is made[11] to provide a decision support system to diagnose Alzheimer's disease using a neural network; the fuzzified data set is used in a hybrid neuro-fuzzy system, which proved to be more precise than the old manual system. A new method based on the Berkeley wavelet transform (BWT) and support vector machine (SVM)[12] is proposed to segment brain MR images and precisely distinguish tumor cells from healthy cells; the comparison with ANFIS, backpropagation, and K-NN classifiers indicated that the proposed method achieved good results in terms of sensitivity, specificity, and accuracy. A fusion algorithm to increase the segmentability of echocardiography features using pixel-level principal component analysis and DWT techniques is discussed in Mazaheri et al.'s study,[13] and the proposed method is able to reduce noise and artifacts.
A fuzzy-adaptive reduced pulse-coupled neural network (RPCNN) is employed[14] with multi-scale geometric analysis, in which the fuzzy membership values act as the linking strength of the RPCNN's neurons. Drawbacks of other fusion techniques, such as reduced contrast and missing details, are properly managed by the proposed method. CT and MRI images are fused using an iterative neuro-fuzzy approach (INFA)[15] and a lifting wavelet transform with the NFA; the results of the INFA are better in terms of both metrics and visual information. An adaptive neuro-fuzzy inference system is employed to fuse PET and MRI images by Kavitha et al.[16] The source images are decomposed using a shift-invariant wavelet and then fused using ANFIS; the entropy, AG, average, SD, mean square error, and peak signal-to-noise ratio (PSNR) metrics are calculated, and the results confirm the performance both visually and quantitatively. Two-scale image decomposition with sparse representation is performed,[17] including contrast enhancement, spatial gradient-based edge detection, and splitting the image into base and detail layers. Hu et al.[18] used a pixel-level multi-scale directional bilateral filter with a focus on multi-sensor images; its ability to preserve edges and directional information resulted in good performance in terms of both visual quality and performance metrics. Experimental outcomes are compared with conventional methods on infrared and medical images: the multi-scale directional bilateral filter outperformed DWT, the shearlet wavelet transform, the dual-tree complex wavelet transform (DTCWT), NSCT, and the multi-scale bilateral filter (MBF) with a visual information fidelity (VIF) of 72.01% on multi-sensor images and also gives better values for medical images. A VIF of 77.05% is achieved when applied to CT and MRI images, and for MRI-T1 and MRI-T2 images, the highest QE (edge information) of 50.02% is obtained. A pixel-level fuzzy-based fusion scheme with minimum-sum-mean of maximum (MIN-SUM-MOM) is presented,[19] which is shown to be better than the minimum-maximum-centroid (MIN-MAX-Centroid) algorithm. A hybrid approach integrating the advantages of NSCT, RPCNN, and fuzzy logic is depicted,[14] and the new fusion scheme has shown its worth with higher spatial resolution and a smaller difference from the original image; higher entropy and SD values are achieved with this algorithm.

The wavelet transform has been widely used in the past and is explored as a state-of-the-art tool by many authors. PET and CT images are transformed[20] using the two-dimensional DWT followed by a weighted average of the approximate coefficients. Shah and Merchant[21] used a weighted average of pixels derived from eigenvalues in the wavelet domain, obtaining sharper resultant images; apart from evaluating the performance on conventional metrics, the Petrovic and Xydeas image fusion metric is used. Haribabu et al.[22] decomposed PET images into intensity, hue, and saturation components, which were then fed to the DWT for further decomposition into low- and high-frequency components. The low-frequency components were fused with the average rule, and for the high-frequency components, spatial frequency (SF) is considered with an 8 × 8 window. The performance is better when compared with the PCA fusion scheme, with outcomes of 62.2149, 3.0617, and 3.4886 for PSNR, entropy, and SD, respectively. Spatial features are preserved[23] using the difference between the red and blue components by applying the YCbCr color space; DWT is applied for image fusion using pulse-coupled neural networks (PCNNs). The proposed method shows fewer spatial distortions when compared with contourlet, curvelet, and DWT, with discrepancy and AG values of 3.8931 and 5.1807, respectively. A cascade combination of the stationary wavelet transform and the nonsubsampled contourlet transform is depicted[24] with a focus on preserving the spectral and spatial features of the source images; the entropy, SD, fusion factor (FF), Q (edge strength), SSIM (structural similarity), EME (measure of enhancement), and PSNR of the proposed method are 6.9414, 74.1232, 4.2770, 0.8588, 0.8867, 24.7221, and 39.5643, respectively. The integer wavelet transform is used to decompose the images,[25] and then neuro-fuzzy fusion is applied to the wavelet coefficients; metrics were calculated for entropy, fusion symmetry (FS), and FF, achieving higher values for entropy and FF when compared to the DWT-based neuro-fuzzy approach. Rajkumar et al.[15] introduced the INFA fusion method, and an NFA with the lifting wavelet transform was also implemented on CT and MRI images. For evaluation, a comparison with existing methods, i.e. DWT and the average method, was made on six datasets of brain images. The new methods outperformed the existing ones, with normalized correlation coefficient (CC), entropy, and structural similarity index values of 0.9526, 6.5104, and 0.6574, respectively, using the INFA.

The literature survey shows that authors have used many decomposition and fusion methods, such as DWT, IFCNN, IHS, NSCT, BWT with SVM, DTCWT, fuzzy-adaptive RPCNN, MBF, PCNN, and CBF. The DWT has the drawback of shift variance, and artifacts are also introduced, causing redundancy; it is also unable to perform well at the edges, and the contrast is reduced. Although the redundant DWT (RDWT) is translation invariant, it provides redundant information. The DTCWT is shift invariant and has directional sensitivity; its reconstruction property is also perfect, but it is computationally expensive and demands a large amount of memory. The CBF has been used in the past for fusion. However, to further investigate its features, the CBF is used in this work for decomposing the images, which is a pre-fusion requirement. On applying the CBF, an image is decomposed into two components, namely the CBF component and the detail component: subtracting the CBF component from the original image gives the detail component, which is used for further processing. The detail component of each modality is given as input to the ANFIS for fusion. Many researchers have proposed methods to improve fusion performance; in Tang et al.'s study,[26] the authors built a multimodal medical image fusion database, applied fusion algorithms to it, and then assessed the quality of the outputs.

In this paper, the edge-preserving capabilities of the CBF are explored in order to enhance multi-modality medical image fusion results. The proposed work verifies the devised technique and compares it with the techniques available in the MATLAB toolbox. The purpose is to enhance the fusion results, which helps the oncologist outline the tumor area more precisely than with the existing techniques.


Subjects and Methods


The proposed method takes as input images A and B of different modalities of the same organ. It differs from other methods in that the CBF is able to target the edges: one image is used to shape the kernel weights, which are then applied to the second image. In parallel, the biorthogonal wavelet (bior2.2) transform is applied to the source images A and B, giving approximate and detail components. A fuzzy inference system and the average rule are applied to the decomposed parts for fusion. The block diagram of the proposed scheme is shown in [Figure 2] for two input images A and B.
Figure 2: Proposed image fusion framework




Cross-bilateral filter


Decomposition is performed taking into consideration both scale and orientation. High-pass and low-pass filters give a complete representation of the image in the decomposed parts, i.e., the source images are decomposed into sub-bands containing low- and high-frequency components. Low-frequency components, denoting smooth regions, are passed by the low-pass filter, whereas high-frequency components, denoting edges, are passed by the high-pass filter; smoothing here is the convolution of the image with a uniform/Gaussian kernel. The CBF accomplishes edge-preserved smoothing by modifying the kernel based on the local image content, which is impossible to achieve with a fixed Gaussian kernel. Using cross-bilateral filter-based decomposition, detail coefficients are obtained.

The CBF component is calculated, as depicted in [Figure 2], for each input image (A_CBF and B_CBF) while adjusting the radiometric sigma and the geometric sigma; a Euclidean distance calculation is performed so that the neighboring pixels are also considered. When these CBF components are subtracted from their respective original images, the detail components are obtained:

A_DETAIL = A − A_CBF

B_DETAIL = B − B_CBF

These detail components are then further decomposed using wavelets. Among the various wavelet families in the literature, the Daubechies wavelets are extremal-phase wavelets with up to 15 vanishing moments to choose from, whereas Haar (db1) is the only orthogonal wavelet with linear phase. Biorthogonal wavelets of different orders were tested, and bior2.2 is finally used in this paper. The high-frequency components calculated by bior2.2 act as the input to the ANFIS for fusion, as they contain the maximum amount of information. A minimal sketch of this decomposition stage is given below.
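The following Python sketch illustrates the decomposition stage under stated assumptions: the two slices are already co-registered grayscale images, the file names are placeholders, cv2.ximgproc.jointBilateralFilter (from the opencv-contrib-python package) stands in for the cross-bilateral filter, and the parameter values follow the settings reported later in the Results section.

import cv2
import numpy as np
import pywt

def cbf_detail(src, guide, d=5, sigma_color=25.0, sigma_space=1.8):
    # Cross-bilateral filtering: the range kernel is shaped by 'guide' and
    # applied to 'src'; the detail component is the source minus the
    # CBF-smoothed image (A_DETAIL = A - A_CBF).
    smoothed = cv2.ximgproc.jointBilateralFilter(
        guide.astype(np.float32), src.astype(np.float32),
        d, sigma_color, sigma_space)
    return src - smoothed

# Placeholder paths; replace with the actual co-registered slices.
A = cv2.imread('ct_slice.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)
B = cv2.imread('mr_slice.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)

A_detail = cbf_detail(A, B)   # kernel weights shaped by B
B_detail = cbf_detail(B, A)   # kernel weights shaped by A

# Single-level bior2.2 decomposition of each detail component; the
# high-frequency sub-bands (cH, cV, cD) feed the ANFIS fusion stage.
cA_a, (cH_a, cV_a, cD_a) = pywt.dwt2(A_detail, 'bior2.2')
cA_b, (cH_b, cV_b, cD_b) = pywt.dwt2(B_detail, 'bior2.2')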

Adaptive neuro-fuzzy inference system-based fusion

ANFIS is a class of adaptive networks that are functionally equivalent to fuzzy inference systems (FISs), in which the parameters of a Takagi–Sugeno system are tuned using a hybrid learning method: least-squares and backpropagation gradient descent are used in combination to model the training data set. The ANFIS used in this study has two inputs with five membership functions per input, a set of 25 rules, and a single output, i.e. the fused image. Neural networks are used with fuzzy logic, with the neurons adjusting the membership functions. Apart from improving the performance of the inference system, the neural network reduces the development time, as the rule base is generated automatically. A sketch of the corresponding Sugeno inference pass is given below.
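As an illustration only, the following Python sketch evaluates a two-input, 25-rule, first-order Sugeno system of the kind described above for a single pair of wavelet coefficients. The Gaussian membership-function centres, their widths, and the linear rule consequents are placeholders; in the actual system these parameters are tuned by the ANFIS hybrid learning, which is not reproduced here.

import numpy as np

def gauss(x, c, s):
    # Gaussian membership value of x for centre c and width s.
    return np.exp(-0.5 * ((x - c) / s) ** 2)

# Five membership functions per input; centres spread over an assumed
# coefficient range (placeholder values, not the trained ones).
centres = np.linspace(-1.0, 1.0, 5)
sigma = 0.25

# One linear consequent (p, q, r) per rule: rule output = p*x1 + q*x2 + r.
rng = np.random.default_rng(0)
consequents = rng.normal(size=(25, 3))   # placeholder for trained parameters

def sugeno_fuse(x1, x2):
    # Degrees of membership of each input in its five fuzzy sets.
    mu1 = gauss(x1, centres, sigma)
    mu2 = gauss(x2, centres, sigma)
    # Rule firing strengths: product T-norm over all 5 x 5 MF combinations.
    w = np.outer(mu1, mu2).ravel()
    # First-order Sugeno rule outputs, then weighted-average defuzzification.
    z = consequents[:, 0] * x1 + consequents[:, 1] * x2 + consequents[:, 2]
    return float(np.sum(w * z) / (np.sum(w) + 1e-12))

fused_coeff = sugeno_fuse(0.3, -0.1)     # e.g. one pair of high-frequency coefficients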

The advantage of using ANFIS is that, given training and testing data, it has the capacity to learn. Hence, after the membership-function type and the number of membership functions are selected, the rules are defined by the ANFIS, and [Figure 3] shows the rule surface viewer, where the corresponding output is defined for each combination of inputs. The performance of the designed system can be checked against the training as well as the testing data [Figure 4]a and [Figure 4]b, and the graphical depiction allows the performance to be traced in a user-friendly manner. In the proposed work, the neuro-fuzzy inference system for fusion shown in [Figure 5] has 2 input nodes, 5 membership functions for each input node, 25 nodes signifying 25 rules, defuzzification nodes, and a single output node.
Figure 3: Surface viewer of the rules

Figure 4: (a) The performance of adaptive neuro-fuzzy inference system with testing data and (b) The performance of adaptive neuro-fuzzy inference system with training data, respectively

Figure 5: Structure of adaptive neuro-fuzzy inference system model depicting 2 input nodes, 5 membership functions for each input node, 25 nodes signifying 25 rules, defuzzification nodes, and single output node



Evaluation parameters

To verify the output obtained after fusion, evaluation is done by calculating metrics and comparing their values. In addition, as the image is finally presented to the radiation oncologist for planning and treatment, he/she should also be satisfied with the output image visually. Hence, the evaluation is categorized[4] into evaluation using conventional metrics, evaluation using objective metrics, and subjective evaluation.

Evaluation using conventional metrics

Various conventional metrics are available in the literature for the evaluation of fusion results.[4],[26],[27],[28],[29] The metrics frequently used to verify results are briefly discussed below (a short computational sketch of several of them follows the list):

  • Average pixel intensity (API) calculates the mean of the pixel values to give an index of contrast:

API = (1/(M × N)) Σi Σj f(i, j)

where f(i, j) is the pixel value of the image at location (i, j) and M × N is the image size.

  • AG is calculated to find the clarity and sharpness of the fused image:

AG = (1/((M − 1)(N − 1))) Σi Σj sqrt([(f(i, j) − f(i + 1, j))² + (f(i, j) − f(i, j + 1))²]/2)

  • Entropy: It gives the probability-based amount of information present in the image and is calculated by the formula:

H = −Σk pk log2(pk)

where pk is the probability of intensity value k.

  • Mutual information (MIF), also called cross entropy, gives the overall mutual information between the source images and the fused image. MIAF and MIBF, the mutual information of the fused image F with sources A and B, respectively, are computed from the joint and marginal intensity histograms of the respective image pairs, and their sum gives the final value:

MI = MIAF + MIBF

  • FS: The symmetry of the final fused image with respect to the two input images is measured as:

FS = 2 − |MIAF/(MIAF + MIBF) − 0.5|

  • CC: To measure the correlation between the output and the input images, rAF and rBF signify the relevance of A and B with respect to F. The CC is then calculated as:

CC = (rAF + rBF)/2

  • SF is calculated to find out the region-wise information levels:

SF = sqrt(RF² + CF²)

The row and column frequencies (RF and CF) are calculated as:

RF = sqrt((1/(M × N)) Σi Σj (f(i, j) − f(i, j − 1))²)

CF = sqrt((1/(M × N)) Σi Σj (f(i, j) − f(i − 1, j))²)

  • Root-mean-square error (RMSE) is calculated to find the error between the reconstructed image and the original image and is given as:

RMSE = sqrt((1/(M × N)) Σi Σj (Isource(i, j) − Ifused(i, j))²)

where M × N is the dimension of the images, Isource is the original image, and Ifused is the output image.

  • PSNR is based on the RMSE and is calculated as:

PSNR = 10 × log10(M × N/RMSE²)
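A hedged NumPy sketch of several of the conventional metrics above (API, entropy, spatial frequency, RMSE, and PSNR), assuming 8-bit grayscale inputs; the PSNR expression follows the form stated above, and other sources may normalize these metrics slightly differently.

import numpy as np

def api(f):
    # Average pixel intensity: mean of all pixel values.
    return float(f.mean())

def entropy(f):
    # Shannon entropy of the 256-bin intensity histogram.
    hist, _ = np.histogram(f, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def spatial_frequency(f):
    f = f.astype(np.float64)
    rf = np.sqrt(np.mean((f[:, 1:] - f[:, :-1]) ** 2))   # row frequency
    cf = np.sqrt(np.mean((f[1:, :] - f[:-1, :]) ** 2))   # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))

def rmse(source, fused):
    diff = source.astype(np.float64) - fused.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(source, fused):
    # PSNR as stated in the text: 10 * log10(M*N / RMSE^2).
    m, n = fused.shape
    return float(10 * np.log10(m * n / rmse(source, fused) ** 2))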

Evaluation using objective metrics

Gradient information-based performance measurement is done in the objective evaluation with the following factors:

  1. Total fusion performance, QAB/F, measures the information transferred from the source images to the fused image
  2. Fusion loss, LAB/F, measures the loss of information due to fusion
  3. Fusion artifact, NAB/F, measures the undesired artifacts added to the image due to fusion.


The sum of the above three factors should be one, i.e.,

QAB/F + LAB/F + NAB/F = 1

Subjective evaluation

The subjective evaluation is done on the treatment planning system (TPS) by the oncologist, in which the fused image is verified against the input images to confirm that information from both modalities is available in the output image. Tumor volumes are compared by contouring the tumor area in the input modalities and then in the fused image. A good fusion output should present the precise tumor volume while preserving the ability to spare the organs at risk.


Results


The open-source images from the Harvard database[30] were taken to perform fusion, as the proposed method is a new technique whose viability is demonstrated in this paper, where it performs better than the existing methods. The proposed method can be extended to real-time images as well; however, to work on real images, a registration phase needs to be performed first. Diagnostically, 1-cm cuts are required, but for image fusion for treatment planning in radiation oncology, 1-mm cuts are required; keeping these details in view, the following five cases were recommended by the radiation oncologist and the medical physicist:

Case 1: Sarcoma with fusion of CT modality with MRI modality

Case 2: Metastatic adenocarcinoma with fusion of MR with gadolinium contrast medium (MR-GAD) modality with MR-T2 modality

Case 3: Meningioma with fusion of CT modality with MR-T2 modality

Case 4: Meningioma with fusion of CT modality with MR-GAD modality

Case 5: Astrocytoma with fusion of MR-GAD with PET-fluorodeoxyglucose (FDG) modality.

In Case 1, the patient was a 22-year-old man who was admitted for resection of Ewing's sarcoma. On examination, he was inattentive, confused, and had a right homonymous hemianopia, a left inferior quadrantanopia, right lower extremity hyperreflexia, and right extensor plantar response. CT and MRI modalities are available in this case.

Case 2 is metastatic carcinoma of the colon in a 62-year-old man suffering a first seizure, a tonic-clonic convulsion with focal onset. There was a history of carcinoma of the colon, with recent metastasis to the liver and lung. MR images show a lesion involving the right second frontal convolution and another in the cerebellum, near the fourth ventricle, also visible on the sagittal image. The low signal of the frontal lesion on T2-weighted images is remarkable, since metastases are often associated with high signal. MR-GAD and MR-T2 images are available for fusing.

Case 3 is a meningioma in a 75-year-old man with CT and MR-T2 modalities available. The CT is fused with the MR-T2.

Case 4 is a meningioma in a 75-year-old man with CT and MR-GAD modalities available; the CT is fused with the MR-GAD to add precision to the planning and treatment.

Case 5 is a 53-year-old patient presenting with a grand mal seizure, in whom brain biopsy revealed a Grade IV astrocytoma. MR-GAD and PET images with the FDG radiotracer are available and are fused to figure out the tumor volume.

The fused image from the proposed method is compared with those of different methods available in MATLAB. The comparison of fusion algorithms based on conventional metrics for Cases 1–5 is given in [Table 1], and the comparison based on objective metrics for Cases 1–5 is given in [Table 2]. The initial decomposition wavelet used in each method is bior2.2 at level 1. Linear image fusion is used with a parameter value of 0.4. The proposed method used the CBF with the following parameters: spatial sigma = 1.8, radiometric sigma = 25, kernel size = 5, and covariance window size = 11.
Table 1: Comparison of fusion algorithms based on conventional metrics for Case 1–Case 5

Table 2: Comparison of fusion algorithms based on objective metrics for Case 1–Case 5

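For reference, the experimental settings listed above can be gathered into a small configuration structure; this is purely illustrative, and the key names are ours rather than from the original implementation.

# Settings reported in the text, collected in one place (illustrative only).
FUSION_CONFIG = {
    "wavelet": "bior2.2",            # initial decomposition wavelet
    "wavelet_level": 1,
    "linear_fusion_weight": 0.4,     # parameter of the linear image fusion baseline
    "cbf": {
        "spatial_sigma": 1.8,
        "radiometric_sigma": 25,
        "kernel_size": 5,
        "covariance_window": 11,
    },
}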


For a better fusion outcome, higher values are desired for all metrics except RMSE, LAB/F, and NAB/F. In the tables, the best (highest) values are bolded, and for RMSE, LAB/F, and NAB/F, the lowest values obtained are bolded.

The quantitative comparison shows that for Case 1, the proposed method has achieved better API, entropy, MIF, FS, and CC, whereas for AG and SF, the CBF method has performed well. This implies that the fused output has better pixel intensity, and the entropy of the fused output is also better. The MIF has also increased, and the symmetry of the information is higher than for the other methods. The proposed method is also capable of improving the correlation between the input and output images. Moreover, in terms of the objective measures, more information is transferred from the input images to the output image, the loss of information is lower, and the addition of artifacts is also smaller when compared with the performance of the other methods, as shown in [Figure 6]a, [Figure 6]b, [Figure 6]c, [Figure 6]d, [Figure 6]e, [Figure 6]f, [Figure 6]g, [Figure 6]h. It is clear from [Figure 6]h that the image obtained from the proposed method is more informative in terms of MIF, correlation, entropy, and symmetry with respect to the sources. This case achieved the highest PSNR with a value of 38.4048.
Figure 6: Case 1 source images (a and b), Fused output (c-h)



For Case 2, the proposed method is able to improve only the pixel intensity, the entropy of the fused output, and the correlation between the inputs and the output. The linear fusion method has performed better in terms of MIF and FS, and the CBF performs well in terms of AG and SF. Looking at the objective metrics, the proposed method performs well in terms of minimum addition of artifacts, whereas the CBF performs well in terms of transfer of information, which has increased with minimum loss of information. [Figure 7]a, [Figure 7]b, [Figure 7]c, [Figure 7]d, [Figure 7]e, [Figure 7]f, [Figure 7]g, [Figure 7]h presents the inputs as well as the outputs of the different methods, and [Figure 7]h shows that intensity, entropy, and correlation are better, but MIF and symmetry are not, for which the linear fusion method has performed better. The image also verifies that a negligible amount of artifact is transferred into the fused image. The highest entropy with a value of 1.6241, CC with a value of 0.9176, and the lowest NAB/F with a value of 0.0001 are attained.
Figure 7: Case 2 source images (a and b), Fused output (c-h)



For Case 3, the proposed method achieves better API, AG, entropy, MIF, symmetry of the fused image, cross correlation, SF, and PSNR, and the minimum RMSE, as shown in [Figure 8]a, [Figure 8]b, [Figure 8]c, [Figure 8]d, [Figure 8]e, [Figure 8]f, [Figure 8]g, [Figure 8]h. The fused image is informative, as it has more transfer of information with less loss of information and addition of artifacts. For this case, the lowest RMSE with a value of 31.0082, the lowest LAB/F with a value of 0.0969, and the highest QAB/F with a value of 0.8826 are obtained.
Figure 8: Case 3 source images (a and b), Fused output (c-h)



For Case 4, the values of the metrics API, AG, entropy, MIF, FS, CC, SF, and PSNR are the highest for the proposed method among all methods. The new method was also capable of transferring the maximum information from the source images to the fused image (QAB/F = 0.869), as shown in [Figure 9]a, [Figure 9]b, [Figure 9]c, [Figure 9]d, [Figure 9]e, [Figure 9]f, [Figure 9]g, [Figure 9]h. The information loss, LAB/F = 0.1410, and the addition of artifacts, NAB/F = 0.0021, are also negligible. In this case, the highest FS with a value of 1.9999 and SF with a value of 35.4682 are achieved.
Figure 9: Case 4 source images (a and b), Fused output (c-h)



Similarly, for Case 5, [Figure 10]a, [Figure 10]b, [Figure 10]c, [Figure 10]d, [Figure 10]e, [Figure 10]f, [Figure 10]g, [Figure 10]h shows the input images as well as the fused output. PET-FDG gives the functional activity of the tissues under study: using the radioactive tracer, the radioactivity inside the body is captured to study the abnormalities. The conventional metrics show acceptable values, and the objective metrics also show values expected of a good fusion algorithm. In this case, the highest API with a value of 7.1181, AG with a value of 18.2626, MIF with a value of 0.9893, and FS with a value of 1.9999 are achieved. For the subjective evaluation, the output images were visually inspected by the oncologist. All the cases were presented to the expert, as shown in [Figure 11], [Figure 12], [Figure 13], [Figure 14], [Figure 15], and the following remarks were obtained:
Figure 10: Case 5 source images (a and b), Fused output (c-h)

Figure 11: Case 1 showing computerized tomography image, magnetic resonance imaging image, and the fused output, respectively

Figure 12: Case 2 showing magnetic resonance imaging-gadolinium contrast medium image, magnetic resonance-T2 image, and the fused output, respectively

Figure 13: Case 3 showing computerized tomography image, magnetic resonance-T2 image, and the fused output, respectively

Figure 14: Case 4 showing computerized tomography image, magnetic resonance imaging-gadolinium contrast medium image, and the fused output, respectively

Figure 15: Case 5 showing magnetic resonance imaging-gadolinium contrast medium image, positron emission tomography-fluorodeoxyglucose image with missed tumor areas, and the fused output, respectively


For Case 1, the CT image shows a particular length of the tumor (red arrow), whereas the MRI image shows a lot of artifacts, is less sensitive than CT, and shows a longer tumor (yellow arrow). Once the modalities are fused, one is actually able to see the actual site of the tumor and what the planning target volume should be, and this precision in turn spares a lot of normal tissue that might otherwise be over-irradiated. Thus, the fused output is able to reduce the size of the planning target volume, and consequently the dose to the normal organs is also reduced.

For Case 2, as displayed in [Figure 7] and [Figure 12], contouring on MR-GAD displays a larger treatment volume (in red) than MR-T2 (in yellow). After fusion, we can observe that there is a significant difference in treatment volume. Radiotherapy treatment should be executed as per the MR-T2 image, which reflects the actual size of the tumor; fusion has thus increased the precision of the radiotherapy treatment.

For Case 3, as shown in [Figure 8] and [Figure 13], MR-T2 shows the midline shift well and demarcates the tumor (in yellow) but without differentiation from the edema, whereas CT does not show the midline shift but is able to present the basic tumor outline (in yellow) and tumor area (in red). On fusing the two modalities, the resultant image is able to precisely define the tumor volume (in red) together with the midline shift. This also signifies that CT-based treatment alone may not perform well, and hence fusing it with other modalities for more information is needed.

Similarly, for Case 4, as visible in [Figure 9] and [Figure 14], CT does not show the midline shift but is able to present the outline of the tumor area (in yellow). MR-GAD is able to display the midline shift and the tumor volume, but the differentiation from the edema is unclear; the tumor extends anteriorly along the perisylvian fissure in the MR-GAD image. On fusing the two modalities, the resultant image carries the desirable properties of both modalities, i.e. the midline shift as well as the tumor volume.

For Case 5, as displayed in [Figure 10] and [Figure 15], the MR-GAD modality is able to capture the tumor present in the particular organ. In PET, the metabolic information shows that the cancerous area is visible but is not completely delineated, and hence a part of the tumor volume is missed. However, in the fused image, the tumor volume is precisely delineated and the healthy tissues are spared from radiation.

On the other hand, if the radiologist sees some additional information in the volume lying between the two contours, then the radiotherapy can be executed on the bigger volume. Uncertainty in the patient's position in the treatment room also advocates the use of the outer contouring volume for the radiotherapy treatment. The radiation oncologist may suggest treating the bigger volume if involvement of lymph nodes is seen, which could lie in the volume between the two contours. A statistical analysis of the metrics is also presented as graphs to visualize the performance of the fusion methods in each case [Figure 16],[Figure 17],[Figure 18],[Figure 19],[Figure 20],[Figure 21],[Figure 22],[Figure 23],[Figure 24],[Figure 25],[Figure 26],[Figure 27].
Figure 16: Graph 1 – Average pixel intensity calculation

Figure 17: Graph 2 – Average gradient calculation

Figure 18: Graph 3 – Entropy calculation

Figure 19: Graph 4 – Mutual information of fused image calculation

Figure 20: Graph 5 – Fusion symmetry calculation

Figure 21: Graph 6 – Cross correlation calculation

Figure 22: Graph 7 – Spatial frequency calculation

Figure 23: Graph 8 – Root-mean-square error calculation

Figure 24: Graph 9 – Peak signal-to-noise ratio calculation

Figure 25: Graph 10 – Total fusion performance, QAB/F calculation

Figure 26: Graph 11 – Fusion loss, LAB/F calculation

Figure 27: Graph 12 – Fusion artifacts, NAB/F calculation



Discussion


A new technique to perform multi-modality medical image fusion is proposed using the CBF and ANFIS. The proposed fusion method gives a precise tumor area with preservation of edges and negligible loss of information. The output is more informative in terms of edge information, information transfer, and PSNR, as quantified using the different metrics discussed in the Results. The doctors also confirmed that the abnormality is viewed more clearly and that the addition of artifacts is minimal in the proposed algorithm's output when compared with the mentioned state-of-the-art methods.

The fusion output can play a vital role in delivering an optimum dose to the tumor so that underdose or overdose conditions are avoided. In radiotherapy practice, a dose above 107% of the prescribed dose is called a hot spot (overdose), and if the dose remains below 95% of the prescribed amount, it is called a cold spot (underdose). The dose quantity directly affects the tumor and the surrounding organs, because a cold spot may lead to relapse/recurrence of the tumor, whereas a hot spot may disturb the functioning of the targeted organ.

The proposed method is able to perform fusion in a better and more controlled way, as only two parameters need to be adjusted, which govern the size and the contrast of the features to be preserved. The noniterative nature of the CBF avoids the cumulative effect over several iterations, which may otherwise mislead the fusion process. To conclude, the proposed method is more efficient than the mentioned fusion methods. The devised fusion method can be integrated with the TPS, and integrating the fusion technique with the radiation output parameters will improve treatment planning. The present study has applied fusion to individual slices; in the future, the work can be extended from a single slice to surfaces in the 3D domain, i.e., the proposed method can be applied to voxels to create 3D volumes. The database used is open source and retrieved from Harvard University.[30]

Acknowledgment

We are very thankful to the radiation oncologists and the medical physicist/radiological safety officer at the Behgal Institute of IT and Radiation Technology for evaluating the results visually and comparing the output image with the input images.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.

 
References

1. International Atomic Energy Agency (IAEA): Definition of Target Volumes and Organs at Risk; 2005. Available from: https://humanhealth.iaea.org/HHW/RadiationOncology/Treatingpatients/Treatment_planning_and_techniques/Training_Course/12_Defining_target_volumes_and_organs_at_risk.pdf. [Last accessed on 2021 Jan 01].
2. Huang B, Yang F, Yin M, Mo X, Zhong C. A review of multimodal medical image fusion techniques. Comput Math Methods Med 2020;2020:1-16.
3. El-Gamal FE, Elmogy M, Atwan A. Current trends in medical image registration and fusion. Egypt Inform J 2016;17:99-124.
4. Kumar BK. Image fusion based on pixel significance using cross bilateral filter. SIViP 2015;9:1193-204.
5. Du J, Li W, Tan H. Intrinsic image decomposition-based grey and pseudo-color medical image fusion. IEEE Access 2019;7:56443-56.
6. Sabalan D, Hassan G. MRI and PET images fusion based on human retina model. J Zhejiang Univ Sci A 2007;8:1624-32.
7. Zhang Y, Liu Y, Sun P, Yan H, Zhao X, Zhang L. IFCNN: A general image fusion framework based on convolutional neural network. Inf Fusion 2020;54:99-118.
8. Wang K, Cai K. Improving medical image fusion method using fuzzy entropy and nonsubsampling contourlet transform. Int J Imaging Syst Technol 2021;31:204-14.
9. Javed U, Riaz MM, Ghafoor A, Ali SS, Cheema TA. MRI and PET image fusion using fuzzy logic and image local features. ScientificWorldJournal 2014;2014:1-8.
10. Walia N, Singh H, Sharma A. ANFIS: Adaptive neuro-fuzzy inference system – A survey. Int J Comput Appl 2015;123:32-8.
11. Obi JC, Imainivan AA. Decision support system for the intelligent identification of Alzheimer using neuro fuzzy logic. Int J Soft Comput 2011;2:25-38.
12. Bahadure NB, Ray AK, Thethi HP. Image analysis for MRI based brain tumor detection and feature extraction using biologically inspired BWT and SVM. Int J Biomed Imaging 2017;2017:1-12.
13. Mazaheri S, Sulaiman PS, Wirza R, Dimon MZ, Khalid F, Moosavi Tayebi R. Hybrid pixel-based method for cardiac ultrasound fusion based on integration of PCA and DWT. Comput Math Methods Med 2015;2015:1-16.
14. Das S, Kundu MK. A neuro-fuzzy approach for medical image fusion. IEEE Trans Biomed Eng 2013;60:3347-53.
15. Rajkumar S, Bardhan P, Akkireddl SK, Munshi C. CT and MRI image fusion based on wavelet transform and neuro-fuzzy concepts with quantitative analysis. Proc ICECS 2014;2014:1-6.
16. Kavitha CT, Chellamuthu C. Fusion of PET and MRI images using adaptive neuro-fuzzy inference system. J Sci Ind Res (India) 2012;71:651-6.
17. Maqsood S, Javed U. Multi-modal medical image fusion based on two-scale image decomposition and sparse representation. Biomed Signal Process Control 2020;57:101810.
18. Hu J, Li S. The multiscale directional bilateral filter and its application to multisensor image fusion. Inf Fusion 2012;13:196-206.
19. Teng J, Wang S, Zhang J, Wang X. Fusion algorithm of medical images based on fuzzy logic. In: Proc 2010 Seventh International Conference on Fuzzy Systems and Knowledge Discovery. IEEE; 2010;4:546-50.
20. Shangli C. Medical image of PET/CT weighted fusion based on wavelet transform. Proc ICBBE 2008;2008:2523-5.
21. Shah P, Merchant SN. An efficient adaptive fusion scheme for multifocus images in wavelet domain using statistical properties of neighborhood. In: Proc 14th International Conference on Information Fusion. ISIF; 2011. Part 2:1935-41.
22. Haribabu M, Bindu CH, Prasad KS. Multimodal medical image fusion of MRI–PET using wavelet transform. In: Proc International Conference on Advances in Mobile Network, Communication and Its Applications. IEEE; 2012. p. 127-30.
23. Nobariyan BK, Daneshvar S, Foroughi A. A new MRI and PET image fusion algorithm based on pulse coupled neural network. In: Proc 22nd Iranian Conference on Electrical Engineering. IEEE; 2014. p. 1950-55.
24. Bhateja V, Patel H, Krishn A, Sahu A, Lay-Ekuakille A. Multimodal medical image sensor fusion framework using cascade of wavelet and contourlet transform domains. IEEE Sens J 2015;15:6783-90.
25. Kavitha CT, Chellamuthu C. Multimodal medical image fusion based on integer wavelet transform and neuro-fuzzy. In: Proc 2010 International Conference on Signal and Image Processing. IEEE; 2010. p. 296-300.
26. Tang L, Tian C, Li L, Hu B, Yu W, Xu K. Perceptual quality assessment for multimodal medical image fusion. Signal Process Image Commun 2020;85:1-8.
27. Li S, Hong R, Wu X. A novel similarity based quality metric for image fusion. In: Proc International Conference on Audio, Language and Image Processing. IEEE; 2008. p. 167-72.
28. Zheng Y, Essock EA, Hansen BC, Haun AM. A new metric based on extended spatial frequency and its application to DWT based fusion algorithms. Inf Fusion 2007;8:177-92.
29. Prakash O, Kumar A, Khare A. Pixel-level image fusion scheme based on steerable pyramid wavelet transform using absolute maximum selection fusion rule. In: Proc International Conference on Issues and Challenges in Intelligent Computing Techniques. IEEE; 2014. p. 765-70.
30. The Whole Brain Atlas. Available from: http://www.med.harvard.edu/aanlib/. [Last accessed on 2021 Jan 01].

