
Deep convolutional neural network for ulcer recognition in wireless capsule endoscopy: experimental feasibility and optimization. (English) Zbl 1423.92186

Summary: Wireless capsule endoscopy (WCE) has developed rapidly over the last several years and now enables physicians to examine the gastrointestinal tract without surgery. However, a large number of images must be analyzed to obtain a diagnosis. Deep convolutional neural networks (CNNs) have demonstrated impressive performance in various computer vision tasks. In this work, we therefore explore the feasibility of deep learning for ulcer recognition and optimize a CNN-based ulcer recognition architecture for WCE images. By analyzing the ulcer recognition task and the characteristics of classic deep learning networks, we propose the HAnet architecture, which uses ResNet-34 as the base network and fuses hyper features from shallow layers with deep features from deeper layers to produce the final diagnostic decision. A total of 1,416 independent WCE videos are collected for this study. The overall test accuracy of HAnet is 92.05%, and its sensitivity and specificity are 91.64% and 92.42%, respectively. According to comparisons of F1, F2, and ROC-AUC, the proposed method outperforms several off-the-shelf CNN models, including VGG, DenseNet, and Inception-ResNet-v2, as well as classical machine learning methods with handcrafted features for WCE image classification. Overall, this study demonstrates that recognizing ulcers in WCE images with a deep CNN is feasible and could help reduce physicians' tedious image-reading workload. Moreover, the HAnet architecture tailored to this problem offers a sound choice for network design.
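
The fusion of shallow hyper features with deep ResNet-34 features described above can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration in PyTorch/torchvision, not the authors' implementation: the particular stages tapped (layer2, layer3, layer4), the global average pooling of each, and the two-class output head are hypothetical choices made only to show the fusion pattern.

# Minimal PyTorch sketch of the hyper/deep feature fusion described above.
# All specifics (which stages are tapped, the pooling, the two output classes)
# are illustrative assumptions, not the authors' exact HAnet configuration.
import torch
import torch.nn as nn
from torchvision.models import resnet34

class HyperFusionNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        backbone = resnet34(weights=None)  # ImageNet weights could be used instead
        # Stem and the four residual stages of ResNet-34.
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1,
                                  backbone.relu, backbone.maxpool)
        self.layer1, self.layer2 = backbone.layer1, backbone.layer2
        self.layer3, self.layer4 = backbone.layer3, backbone.layer4
        self.gap = nn.AdaptiveAvgPool2d(1)  # global average pooling
        # Concatenated pooled features: 128 (layer2) + 256 (layer3) + 512 (layer4).
        self.fc = nn.Linear(128 + 256 + 512, num_classes)

    def forward(self, x):
        x = self.stem(x)
        f1 = self.layer1(x)
        f2 = self.layer2(f1)   # shallower "hyper" features
        f3 = self.layer3(f2)
        f4 = self.layer4(f3)   # deepest features
        pooled = [self.gap(f).flatten(1) for f in (f2, f3, f4)]
        return self.fc(torch.cat(pooled, dim=1))

# Usage: classification logits for a batch of 224x224 WCE frames.
model = HyperFusionNet()
logits = model(torch.randn(4, 3, 224, 224))  # -> shape (4, 2)

Concatenating globally pooled features from several depths lets the classifier combine fine texture cues from shallow stages with the more semantic evidence in deep stages, which is the intuition behind the hyper-feature fusion the summary describes.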

MSC:

92C55 Biomedical imaging and signal processing
92-08 Computational methods for problems pertaining to biology
