Year: 2022 | Volume: 47 | Issue: 1 | Page: 57-64
Classifying COVID-19 and viral pneumonia lung infections through deep convolutional neural network model using chest X-Ray images
Dhirendra Kumar Verma, Gaurav Saxena, Amit Paraye, Alpana Rajan, Anil Rawat, Rajesh Kumar Verma
Raja Ramanna Centre for Advanced Technology, Indore, Madhya Pradesh, India
Date of Submission: 19-Jul-2021
Date of Decision: 01-Dec-2021
Date of Acceptance: 01-Dec-2021
Date of Web Publication: 31-Mar-2022
Correspondence: Mr. Dhirendra Kumar Verma, Computer Division, Raja Ramanna Centre for Advanced Technology, Indore - 452 013, Madhya Pradesh, India
Source of Support: None, Conflict of Interest: None
Abstract
Context: Automated detection of COVID-19 in real time can greatly help clinicians handle an increasing number of cases for preliminary screening. Deep CNN models trained with sufficiently large datasets are strong candidates for this purpose. Aims: This study aims at automated detection and classification of COVID-19 and viral pneumonia by applying a deep CNN model to chest X-ray images. The proposed model performs multiclass classification to meet this purpose. Settings and Design: The proposed model is built on top of the VGG16 architecture with pretrained ImageNet weights and was fine-tuned using additional custom layers to deliver better performance on the target task. Subjects and Methods: A total of 15,153 samples were used in this work, comprising chest X-ray images of COVID-19, viral pneumonia, and normal cases. The entire dataset was split into train and test sets in an 80:20 ratio before training the model. To enhance important image features, image preprocessing and augmentation were applied before feeding the image batches to the model. Statistical Analysis Used: Performance of the model was evaluated through the accuracy, precision, recall, and F1 score metrics. The results produced by the model were also compared with other recent leading studies. Results: The proposed model achieved a classification accuracy of 98% with 98% precision, 96% recall, and 97% F1 score on the test dataset for multiclass classification. The area under the receiver operating characteristic curve was 0.99 for all three classes. Conclusions: The proposed classification model may be highly useful for the preliminary diagnosis of COVID-19 and viral pneumonia, especially under heavy workloads and large case volumes.
Keywords: Chest X-ray, convolutional neural network, COVID-19, deep learning, transfer learning, viral pneumonia
How to cite this article:
Verma DK, Saxena G, Paraye A, Rajan A, Rawat A, Verma RK. Classifying COVID-19 and viral pneumonia lung infections through deep convolutional neural network model using chest X-Ray images. J Med Phys 2022;47:57-64
How to cite this URL:
Verma DK, Saxena G, Paraye A, Rajan A, Rawat A, Verma RK. Classifying COVID-19 and viral pneumonia lung infections through deep convolutional neural network model using chest X-Ray images. J Med Phys [serial online] 2022 [cited 2022 May 18];47:57-64. Available from: https://www.jmp.org.in/text.asp?2022/47/1/57/341425
Introduction
Respiratory infections caused by viral pneumonia or COVID-19 may become severe, especially for elderly people, people with a chronic medical illness, or people with a weakened immune system. Pneumonia is an infectious disease that causes inflammation in the lungs and may be caused by viruses, bacteria, fungi, or other germs. The small air sacs of the lungs fill with pus and fluid, which makes breathing painful and limits oxygen intake. Pneumonia can also be a complication of COVID-19, the illness caused by the novel coronavirus SARS-CoV-2. The symptoms of COVID-19 may be similar to those of other kinds of viral pneumonia; therefore, it is hard to discover the cause of infection without a test.
In recent years, a large number of studies and research works have been carried out on applying machine learning models for automated detection of various diseases such as diabetic retinopathy, breast cancer, pulmonary nodule diagnosis, and lung cancer.
Deep learning methods reveal very subtle features in images that are not visible otherwise. Specifically, the ability of convolutional neural networks (CNNs) in deep feature extraction and learning has made them the leading choice among researchers for classification tasks in medical imaging. Both pretrained deep CNNs and CNNs trained from scratch can produce very strong results if trained and fine-tuned robustly.
This study aims to apply deep transfer learning capabilities from pretrained deep learning models to classify COVID-19 infections and viral pneumonia cases. In this study, we engineered and trained a deep CNN model based on the VGG16 model with custom layers. The model is trained and validated on an open public dataset containing images of COVID-19, viral pneumonia, and normal cases, as discussed in the “Subjects and Methods” section.
Subjects and Methods
CNN models comprise deep architectures, which make them suitable for extracting robust image features and yield high performance in image classification tasks. Since the appearance of pneumonia in X-ray images is often vague, it can be difficult for radiologists to precisely identify pneumonia; further, the features can overlap with and mimic other benign abnormalities. These discrepancies cause variability among radiologists in the diagnosis of pneumonia. In this study, we design deep CNN models that take a frontal-view chest X-ray image as input and provide binary and multiclass classification indicating the absence or presence of pneumonia or COVID-19.
With the emergence of the COVID-19 pandemic, various researchers and doctors started publishing open public datasets containing labeled X-ray, computed tomography, and ultrasound images of different lung diseases. In this work, we used X-ray images of COVID-19, viral pneumonia, and normal cases obtained from an award-winning dataset developed by a team of researchers and doctors. The images in the dataset are derived from various publicly available datasets, online sources, and published papers. We used COVID-19, viral pneumonia, and normal chest X-ray images from the second update of this dataset, which has a total of 15,153 images across the three categories. The dataset also contains lung opacity images, which we did not use. There are 3616 images of COVID-19-positive cases, 1345 images of viral pneumonia, and 10,192 images of normal cases. All images in the dataset are in PNG file format with a resolution of 299 × 299 pixels.
The images are made available through Kaggle. [Figure 1] shows random samples of images from the dataset. The entire dataset was first split into train and test sets in an 80:20 ratio before training the model. The test images were separated at the outset and kept aside for an unbiased evaluation; they were never seen by the model during training. The train set was then further divided into train and validation sets in an 80:20 ratio. Finally, the train set contains 9697 images, the validation set 2425 images, and the test set 3031 images. The distribution of images across the train, validation, and test sets is depicted in [Figure 2].
Figure 1: Sample X-ray images from dataset: (a) COVID-19, (b) viral pneumonia, (c) normal
Figure 2: Distribution of images under (a) train set, (b) validation set, (c) test set
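The two-stage 80:20 split described above can be sketched with scikit-learn (the placeholder file names and the stratification choice are illustrative assumptions; the paper does not show its splitting code):

```python
from sklearn.model_selection import train_test_split

# Hypothetical label list mirroring the dataset's class counts:
# 3616 COVID-19 + 1345 viral pneumonia + 10,192 normal = 15,153 images.
labels = ["covid"] * 3616 + ["viral"] * 1345 + ["normal"] * 10192
paths = [f"img_{i}.png" for i in range(len(labels))]  # placeholder file names

# First split: 80% train+validation, 20% held-out test.
# Stratifying keeps class proportions similar across splits.
trainval_x, test_x, trainval_y, test_y = train_test_split(
    paths, labels, test_size=0.20, stratify=labels, random_state=42)

# Second split: the remaining 80% is divided 80:20 into train/validation.
train_x, val_x, train_y, val_y = train_test_split(
    trainval_x, trainval_y, test_size=0.20, stratify=trainval_y, random_state=42)

print(len(train_x), len(val_x), len(test_x))  # 9697 2425 3031
```

The resulting counts (9697/2425/3031) match the figures reported in the paper.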
In medical datasets, positive samples are generally fewer than negative samples. It is clear from [Figure 2] that the class distribution is highly imbalanced, as the dataset is skewed toward the “Normal” class. A traditional cost-insensitive classifier is likely to make biased classification decisions on such data. To avoid this, we applied a cost-sensitive method that uses class weights inversely proportional to the respective class frequencies. During training, class weights impose a higher penalty on the minority classes by adjusting the cost function, so that the training algorithm focuses on reducing their errors. We used a sklearn library function to calculate the class weights.
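A minimal sketch of this class-weight computation with the sklearn utility the paper refers to (the integer label encoding is an assumption):

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Hypothetical training-label array matching the paper's train split:
# 2311 COVID-19 (class 0), 6528 normal (class 1), 858 viral pneumonia (class 2).
y_train = np.array([0] * 2311 + [1] * 6528 + [2] * 858)

# "balanced" sets each weight to n_samples / (n_classes * class_count),
# i.e., inversely proportional to class frequency, so the minority
# viral-pneumonia class incurs the highest per-error penalty in the loss.
weights = compute_class_weight(class_weight="balanced",
                               classes=np.unique(y_train), y=y_train)
class_weight = dict(enumerate(weights))
print(class_weight)
```

The resulting dictionary can be passed directly as the `class_weight` argument of Keras's `model.fit`.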
Image preprocessing and augmentation
To enhance important image features and suppress unwanted distortions, a few preprocessing algorithms (offline) and image augmentation techniques (during training) were applied to the training set. For image preprocessing, all images were resized to 224 × 224 pixels, the default input size for the VGG16 model. Further, we applied the VGG16-specific preprocess_input function to the training and validation sets before passing them to the model. This function converts images from RGB to BGR color channels and zero-centers each color channel with respect to the ImageNet dataset, without scaling. Although chest X-ray images are single channeled, we converted them to three color channels to fine-tune the VGG16 model, which is pretrained on ImageNet images.
Moreover, to expand the volume and variety of images, we applied image augmentation to the training set by rotating each image by an angle chosen randomly between 0° and 5°. [Figure 3] shows a few variants of an image generated after augmentation.
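The preprocessing and augmentation steps above can be sketched in NumPy/SciPy; the zero-centering constants mirror what Keras's VGG16 preprocess_input does in its default "caffe" mode, and the rotation helper is an illustrative stand-in for the augmentation pipeline actually used:

```python
import numpy as np
from scipy.ndimage import rotate

# Equivalent of Keras's vgg16.preprocess_input ("caffe" mode): flip RGB -> BGR
# and subtract the per-channel ImageNet means, with no scaling.
IMAGENET_BGR_MEANS = np.array([103.939, 116.779, 123.68])

def vgg16_preprocess(img_rgb):
    """img_rgb: float array of shape (H, W, 3) in RGB order, values 0-255."""
    img_bgr = img_rgb[..., ::-1].astype("float64")  # RGB -> BGR
    return img_bgr - IMAGENET_BGR_MEANS             # zero-center per channel

def augment(img, rng):
    """Rotate by an angle drawn uniformly from [0, 5] degrees, keeping shape."""
    angle = rng.uniform(0.0, 5.0)
    return rotate(img, angle, reshape=False, mode="nearest")

rng = np.random.default_rng(0)
x = np.full((224, 224, 3), 128.0)  # dummy gray "X-ray" replicated to 3 channels
x_aug = augment(vgg16_preprocess(x), rng)
print(x_aug.shape)  # (224, 224, 3)
```

In practice the same effect is obtained by passing preprocess_input and a small rotation_range to a Keras image generator.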
Model architecture and hyperparameters
The capability of deep CNN models to extract complex low-level features from the original training dataset makes them the leading choice for advancing research work in the healthcare community.
In this work, we propose a deep CNN model for the classification of COVID-19, viral pneumonia, and normal cases by analyzing chest X-rays. The proposed model is built on top of the VGG16 architecture with pretrained ImageNet weights and was fine-tuned using additional custom layers. VGG16 is a simple and widely used CNN proposed by Simonyan and Zisserman; it scores 92.7% accuracy on the ImageNet dataset, which includes more than 14 million images belonging to 1000 categories. The proposed model can be used as a tool for initial screening and discrimination among COVID-19, viral pneumonia, and normal cases. [Figure 4] depicts the architecture of the proposed model.
Figure 4: Architecture of proposed deep convolutional neural network model
The proposed deep CNN model uses the pretrained VGG16 model as a base. Initially, we opened the whole model for training to retrain all parameters of the pretrained network; however, this did not deliver promising results. We then gradually opened the layers and examined the results. The model was found to perform best when the last four layers were kept open (trainable) and the remaining layers frozen (pretrained). We further added a few custom layers on top of this model; these additional layers enabled the model to deliver much better results on our target task.
A total of five custom layers were added to the base model, as depicted in [Figure 4]: an AveragePooling2D layer downsampled the feature maps with a pool size of (4, 4); a flatten layer collapsed them into a 1D array; a dense layer with 64 output neurons was applied; a dropout layer randomly ignored neurons at a rate of 0.5; and a final dense layer with 3 output neurons was applied. The final classification was produced using the softmax activation function. The weights of the custom layers were randomly initialized and continuously updated according to the gradient of the loss calculated with respect to the ground truth, with the aim of minimizing the loss function. The validation set affects the learning of the model only indirectly, since the model never learns from these data; we used it to fine-tune the model hyperparameters and to avoid overfitting. The best weights were saved after training and used to evaluate the model on the test set, which contains images never seen by the model during training or validation, as it was kept aside from the original dataset and used only for evaluation. The model hyperparameters we applied are shown in [Table 1].
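Under this description, the architecture can be sketched in Keras as follows (the Dense-64 activation is an assumption, as the paper does not state it; weights=None is used in the demonstration call only to avoid downloading the ImageNet weights the paper actually uses):

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

def build_model(weights="imagenet"):
    # VGG16 convolutional base; the paper uses pretrained ImageNet weights.
    base = VGG16(weights=weights, include_top=False, input_shape=(224, 224, 3))

    # Freeze all but the last four layers of the base, as described above.
    for layer in base.layers[:-4]:
        layer.trainable = False

    # Five custom layers on top of the base.
    x = layers.AveragePooling2D(pool_size=(4, 4))(base.output)  # 7x7x512 -> 1x1x512
    x = layers.Flatten()(x)                                     # -> 512 features
    x = layers.Dense(64, activation="relu")(x)                  # activation assumed
    x = layers.Dropout(0.5)(x)
    out = layers.Dense(3, activation="softmax")(x)              # COVID / viral / normal
    return models.Model(inputs=base.input, outputs=out)

model = build_model(weights=None)  # weights=None only to skip the download here
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
print(model.output_shape)  # (None, 3)
```

The model is then trained with the class weights and augmented batches described earlier.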
Accuracy, sensitivity, specificity, precision, recall, F1 score, and confusion matrix are the commonly used metrics for measuring the performance of classification models. These metrics are derived using four values which include true positive, true negative, false positive, and false negative. We used precision, recall, and F1 score to measure the performance of our model. We also plotted the receiver operating characteristic (ROC) curve which is considered to be the de facto standard for classification tasks in medical images.
The following equations give the formulas for accuracy, precision, recall, and F1 score used to measure the performance of the model for multiclass classification. [Table 2] lists the abbreviations used in these equations.
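In terms of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN), these metrics take their standard forms, computed per class and then averaged for the multiclass case:

```latex
\begin{aligned}
\text{Accuracy} &= \frac{TP + TN}{TP + TN + FP + FN}, &
\text{Precision} &= \frac{TP}{TP + FP}, \\
\text{Recall} &= \frac{TP}{TP + FN}, &
F1 &= 2 \times \frac{\text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}.
\end{aligned}
```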
For any deep neural network, the training phase is the most resource-intensive part, in which matrix multiplication operations are performed over input and weight arrays. Their high memory bandwidth for accommodating huge datasets and large number of cores for processing them make graphics processing units (GPUs) a leading choice for training deep learning models. Training on central processing units (CPUs) is possible but would take far longer, especially for models with billions of parameters. In this work, all experiments for training, validation, and testing were carried out on GPU nodes of the high-performance computing cluster Kshitij-5. [Table 3] shows the hardware and software platform of a single GPU node used in this work.
Results
The proposed model was tested on the test set of 3031 images, which were not used during training and had never been seen by the model. The test set contains 2028 normal, 243 viral pneumonia, and 760 COVID-19 images. The model was trained and validated on the remaining 12,122 images; the train split comprises 6528 normal, 858 viral pneumonia, and 2311 COVID-19 images. [Figure 5] presents the classification performance through ROC curves plotted for each class versus the rest. It is evident from the plot that the area-under-the-curve values are very high, indicating that the model distinguishes well between infection and no infection. [Figure 6]a depicts how accurately the model's predictions match the true values, and [Figure 6]b shows the behavior of the model after each step of optimization. It is clear from [Figure 6] that the model learned within the first few epochs and remained steady through the later epochs.
Figure 5: Receiver operating characteristic curves on test set for (a) normal versus rest, (b) viral pneumonia versus rest, (c) COVID-19 versus rest. (d) Multiclass receiver operating characteristic curves for all three cases
The final model was trained for 250 epochs. The model achieved an accuracy of 98% on the test dataset with precision, recall, and F1 score of 98%, 96%, and 97%, respectively. Detailed results are depicted in [Figure 5] and [Figure 6] and summarized in [Table 4] and [Table 5]. To achieve better results, we experimented with multiple image sizes (224, 299, and 512 pixels). The best result was achieved with 224, the default image size for the VGG16 model. The original images were 299 × 299 pixels, which we scaled up or down by reconstructing and resampling image pixels using OpenCV library functions. Further, batch sizes of 8, 16, and 32 and the Adam and RMSProp optimizers were tried out. The optimum hyperparameter values that delivered the best results are listed in [Table 1]. While training, we also experimented with the number of trainable layers (blocks) of the VGG16 model. Initially, all layers of VGG16 were opened for training, but the model delivered poor performance on the validation set. We then gradually opened model layers, keeping the other layers frozen: along with the custom layers, the last 4, 8, and 12 layers of VGG16 were opened for training and the results examined. The model delivered its best results when the last four layers of VGG16 were opened and the rest kept frozen during training. We consider the following to be the reason. The initial layers of a model are responsible for low-level feature extraction, while the last few layers deal with features specific to the target images. The low-level features of our dataset and the ImageNet dataset are very similar, which is why VGG16 was able to optimize quickly without training the lower layers in our case. Further, when we trained all the layers of VGG16, the feature extraction ability of the lower layers was disturbed, leading to a reduction in performance.
[Table 4] and [Table 5] summarize the best results achieved.
Table 4: Result for multiclass classification (COVID-19 versus viral pneumonia versus normal)
Table 5: Area under the receiver operating characteristic curve values for multiclass classification on test set
Once the model is trained and deployed, the classification of new samples takes only seconds. The response time of the model on a commodity hardware platform is in the range of 2-3 s, which is acceptable for working clinical environments; it is even better (less than 1 s) when the model is deployed on high-end servers.
Discussion
In this work, we engineered and trained a deep CNN model for automated detection and classification of COVID-19 and viral pneumonia cases through chest X-ray images. The proposed classification model may be highly useful for the preliminary diagnosis of COVID-19 and viral pneumonia cases. The results produced by the model are encouraging, since the model is capable of classifying all three cases with very high precision and recall values.
We experimented with different pretrained models including Inception V3, InceptionResNetV2, and VGG19; the model based on VGG16 delivered the most promising results. The likely reasons are VGG16's small, fixed kernel size and limited number of layers, which appear most appropriate for extracting features from chest X-ray images. Further, the image size, preprocessing methods, and model hyperparameters used are important factors that led the model to learn image features distinctly. The novelty of this work with respect to other related studies is that the model is trained and tested on a larger dataset. Since the dataset is public and not adjudicated by a team of clinicians or domain experts (as happens with private datasets), the occurrence of noise in images as well as in labels is likely to be higher than in small datasets. A model that gives good classification results over a large public dataset demonstrates two things. First, the model is able to learn important image features while ignoring unrelated image artifacts during training. Second, the model is more robust, since it is capable of generalizing over a large test set containing new image samples, which is specifically good for working clinical environments. Hence, it can be concluded that a model trained and tested with larger datasets will learn better and be more robust in handling the large distribution of new samples in working clinical conditions.
Gupta et al. proposed InstaCovNet-19, an integrated stacked deep convolution network. InstaCovNet-19 uses the ResNet101, Xception, Inception V3, MobileNet, and NASNet pretrained models for detecting COVID-19 and pneumonia using chest X-ray images. The model achieved an accuracy of 99.08% on three-class (COVID-19, viral pneumonia, and normal) classification and 99.53% on two-class (COVID, non-COVID) classification. InstaCovNet-19 was trained and tested on combined images from two public datasets comprising 361, 1341, and 1345 images of the COVID, normal, and viral pneumonia classes, respectively.
The DarkNet model was presented by Ozturk et al. for binary (COVID vs. no findings) and multiclass (COVID vs. no findings vs. viral pneumonia) classification using raw chest X-ray images from two public datasets. The DarkNet model was implemented using 17 convolutional layers with different filtering on each layer, and performs the binary and multiclass tasks with accuracies of 98.08% and 87.02%, respectively. Jain et al. compared the performance of multiple pretrained models, including Inception V3, Xception, and ResNeXt, on a dataset collected from the Kaggle repository. The authors concluded that the Xception model gives a classification accuracy of 97.97%, the highest among the compared models. A total of 6432 images were used, of which 5467 formed the training set and 965 the validation set. Ouchicha et al. proposed CVDNet, a deep CNN model based on the residual neural network, to classify chest X-ray images into three categories: normal, COVID-19, and viral pneumonia. This model was trained on a public dataset containing a combination of 219 COVID-19, 1341 normal, and 1345 viral pneumonia chest X-ray images. The proposed CVDNet achieved an average precision, accuracy, recall, and F1 score of 96.72%, 96.69%, 96.84%, and 96.68%, respectively, for three-class classification.
Chowdhury et al. reported that deep networks such as DenseNet perform better than shallow networks in classifying normal and viral pneumonia images. The authors reported accuracy, precision, and recall of 97.9%, 97.95%, and 97.9%, respectively, on multiclass classification. Ibrahim et al. proposed a pretrained AlexNet-based deep neural network for binary and multiclass classification of COVID-19, non-COVID-19 pneumonia, and normal chest X-ray images; the model achieved 94.00% testing accuracy, 91.30% sensitivity, and 84.78% specificity for three-way classification.
Our results are on par with those of Gupta et al. We consider the following to be the reason. Since larger datasets cover a wider distribution of true image features, trained models that perform better on such datasets are considered robust and are expected to perform better on unseen data samples. We tested the performance of our model on both the test set of Gupta et al. (220 images) and our own test set (3031 images). For the smaller test set, the accuracy, precision, recall, and F1 score of our model all come out to 0.99, on par with Gupta et al. Moreover, since our model performs equally well on the large test set [Table 6], we consider it to be more robust.
Comparison of this work with other recent leading studies is shown in [Table 6].
The future direction of this work includes enhancing the training dataset by incorporating images from our local hospitals to improve the generalization capability of the model. We also aim to identify the most infected region in an image by generating heat maps.
Conclusion
In this work, we engineered and trained a VGG16-based deep CNN model for automated detection and classification of COVID-19 and viral pneumonia cases through chest X-ray images. The proposed model may help clinicians with the preliminary diagnosis of COVID-19 and viral pneumonia cases in real time. The results produced by the model are encouraging, since it is capable of classifying all three cases with very high precision and recall values. The future direction of this work includes enhancing the training dataset by incorporating images from local hospitals to improve the robustness of the model. We also aim to identify the most infected regions in the images by generating class activation maps.
Financial support and sponsorship
Conflicts of interest
There are no conflicts of interest.
References
Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, Narayanaswamy A, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 2016;316:2402-10.
Gargeya R, Leng T. Automated identification of diabetic retinopathy using deep learning. Ophthalmology 2017;124:962-9.
Amrane M, Oukid S, Gagaoua I, Ensari T. Breast cancer classification using machine learning. 2018 Electric Electronics, Computer Science, Biomedical Engineerings' Meeting (EBBT), Istanbul; 2018. p. 1-4.
Hussain L, Aziz W, Saeed S, Rathore S, Rafique M. Automated breast cancer detection using machine learning techniques by extracting different feature extracting strategies. 2018 17th IEEE International Conference on Trust, Security and Privacy in Computing and Communications/12th IEEE International Conference on Big Data Science and Engineering (TrustCom/BigDataSE), New York; 2018. p. 327-31.
Xie Y, Xia Y, Zhang J, Song Y, Feng D, Fulham M, et al. Knowledge-based collaborative deep learning for benign-malignant lung nodule classification on chest CT. IEEE Trans Med Imaging 2019;38:991-1004.
Wu Q, Zhao W. Small-cell lung cancer detection using a supervised machine learning algorithm. 2017 International Symposium on Computer Science and Intelligent Controls (ISCSIC), Budapest; 2017. p. 88-91.
Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems; 2012. p. 1097-105.
Neuman MI, Lee EY, Bixby S, Diperna S, Hellinger J, Markowitz R, et al. Variability in the interpretation of chest radiographs for the diagnosis of pneumonia in children. J Hosp Med 2012;7:294-8.
Chowdhury ME, Rahman T, Khandakar A, Mazhar R, Kadir MA, Mahbub ZB, et al. Can AI help in screening viral and COVID-19 pneumonia? IEEE Access 2020;8:132665-76.
Rahman T, Khandakar A, Qiblawey Y, Tahir A, Kiranyaz S, Kashem SB, et al. Exploring the effect of image enhancement techniques on COVID-19 detection using chest X-ray images. Comput Biol Med 2021;132:104319.
Arman H, Mahdiyar MM, Seokbum K. COVID-19 Chest X-Ray Image Repository. Figshare Dataset; 2020. Available from: http://www.eurorad.org/. [Last accessed on 2021 Mar 06].
Gupta A, Anjum, Gupta S, Katarya R. InstaCovNet-19: A deep learning classification model for the detection of COVID-19 patients using chest X-ray. Appl Soft Comput 2021;99:106859.
Ozturk T, Talo M, Yildirim EA, Baloglu UB, Yildirim O, Rajendra Acharya U. Automated detection of COVID-19 cases using deep neural networks with X-ray images. Comput Biol Med 2020;121:103792.
Jain R, Gupta M, Taneja S, Jude HD. Deep learning based detection and analysis of COVID-19 on chest X-ray images. Appl Intell 2021;51:1690-700.
Ouchicha C, Ammor O, Meknassi M. CVDNet: A novel deep learning architecture for detection of coronavirus (COVID-19) from chest X-ray images. Chaos Solitons Fractals 2020;140:110245.
Chowdhury M, Rahman T, Khandakar A, Mazhar R, Kadir M, Mahbub Z, et al. Can AI help in screening viral and COVID-19 pneumonia? IEEE Access 2020;8:132665-76.
Ibrahim AU, Ozsoz M, Serte S, Al-Turjman F, Yakoi PS. Pneumonia classification using deep learning from chest X-ray images during COVID-19. Cognit Comput 2021;2021:1-13.