Research article

A weakly supervised learning-based segmentation network for dental diseases


  • Received: 28 September 2022 Revised: 31 October 2022 Accepted: 03 November 2022 Published: 11 November 2022
  • With the development of deep learning, medical image segmentation has become a promising technique for computer-aided medical diagnosis. However, supervised training relies on large amounts of pixel-level labeled data, and the private-dataset bias common in previous research seriously degrades algorithm performance. To alleviate this problem and improve the robustness and generalization of the model, this paper proposes an end-to-end weakly supervised semantic segmentation network that learns and infers the mapping from images to segmentation masks. First, an attention compensation mechanism (ACM) that aggregates class activation maps (CAMs) is designed for complementary learning. Then, a conditional random field (CRF) is introduced to prune the foreground and background regions. Finally, the resulting high-confidence regions are used as pseudo labels to train and optimize the segmentation branch with a joint loss function. Our model achieves a Mean Intersection over Union (MIoU) of 62.84% on the segmentation task, an improvement of 11.18% over the previous network for segmenting dental diseases. Moreover, we further verify that the improved localization mechanism (CAM) makes our model more robust to dataset bias. The results show that the proposed approach improves the accuracy and robustness of dental disease identification.
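    The abstract outlines a pipeline of CAM generation, region pruning, and pseudo-label training of a segmentation branch. The PyTorch sketch below illustrates only the first and last of those ideas, turning class activation maps into high-confidence pseudo labels. It is not the authors' implementation: the backbone choice, the function and class names (CAMClassifier, cams_to_pseudo_labels), and the threshold values (fg_thresh, bg_thresh) are illustrative assumptions. The attention compensation mechanism and CRF refinement described in the paper would sit between CAM generation and thresholding.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models


class CAMClassifier(torch.nn.Module):
    """Image-level classifier whose last convolutional features yield CAMs."""

    def __init__(self, num_classes: int):
        super().__init__()
        backbone = models.resnet50(weights=None)
        # Keep everything up to (but not including) global pooling and the fc head.
        self.features = torch.nn.Sequential(*list(backbone.children())[:-2])
        # A 1x1 convolution acts as the classifier and produces per-class maps.
        self.classifier = torch.nn.Conv2d(2048, num_classes, kernel_size=1, bias=False)

    def forward(self, x):
        feat = self.features(x)                            # B x 2048 x h x w
        cam = self.classifier(feat)                        # B x C x h x w (raw CAMs)
        logits = F.adaptive_avg_pool2d(cam, 1).flatten(1)  # B x C image-level scores
        return logits, cam


def cams_to_pseudo_labels(cam, image_labels, fg_thresh=0.4, bg_thresh=0.1,
                          ignore_index=255):
    """Convert CAMs into pseudo segmentation labels.

    Pixels whose normalised activation exceeds fg_thresh become foreground of
    the responding class, pixels below bg_thresh become background (label 0),
    and the uncertain band in between is set to ignore_index so a segmentation
    loss can skip it. image_labels is a B x C multi-hot tensor of the classes
    known (from image-level supervision) to be present.
    """
    cam = F.relu(cam) * image_labels[:, :, None, None]       # mask absent classes
    cam = cam / (cam.amax(dim=(2, 3), keepdim=True) + 1e-5)  # normalise to [0, 1]
    score, cls = cam.max(dim=1)                              # strongest class per pixel
    pseudo = torch.full_like(cls, ignore_index)
    fg, bg = score >= fg_thresh, score <= bg_thresh
    pseudo[fg] = cls[fg] + 1                                 # disease classes start at 1
    pseudo[bg] = 0                                           # background
    return pseudo


if __name__ == "__main__":
    model = CAMClassifier(num_classes=3)
    images = torch.randn(2, 3, 224, 224)
    labels = torch.tensor([[1., 0., 1.], [0., 1., 0.]])
    logits, cam = model(images)
    pseudo = cams_to_pseudo_labels(cam, labels)
    print(logits.shape, pseudo.shape)  # torch.Size([2, 3]) torch.Size([2, 7, 7])
```

    A full reproduction would additionally need the ACM branch that aggregates complementary CAMs, dense-CRF pruning of the thresholded regions, and the joint classification-plus-segmentation loss described in the abstract.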

    Citation: Yue Li, Hongmei Jin, Zhanli Li. A weakly supervised learning-based segmentation network for dental diseases[J]. Mathematical Biosciences and Engineering, 2023, 20(2): 2039-2060. doi: 10.3934/mbe.2023094

  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)