
Learning-Based Confidence Estimation for Multi-modal Classifier Fusion

  • Conference paper
Neural Information Processing (ICONIP 2019)

Abstract

We propose a novel confidence estimation method for the predictions of a multi-class classifier. Unlike existing methods, we learn a confidence estimator on a held-out subset of the training data. The confidence values predicted by the proposed system are used to improve the accuracy of multi-modal emotion and sentiment classification: the class scores from the individual modalities are superposed on the basis of their confidence values. Experimental results demonstrate that the accuracy of the proposed confidence-based fusion method is significantly higher than that of a classifier trained on any single modality, and that it outperforms other fusion methods.
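The fusion step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function name, the normalisation of confidence values into weights, and the example numbers are all assumptions made for the sketch; the paper's learned confidence estimator is replaced here by given confidence scores.

```python
import numpy as np

def confidence_weighted_fusion(scores, confidences):
    """Superpose per-modality class-score vectors, weighted by the
    confidence value predicted for each modality (hypothetical sketch)."""
    scores = np.asarray(scores, dtype=float)          # shape: (n_modalities, n_classes)
    conf = np.asarray(confidences, dtype=float)       # shape: (n_modalities,)
    weights = conf / conf.sum()                       # normalise confidences to sum to 1
    fused = (weights[:, None] * scores).sum(axis=0)   # confidence-weighted superposition
    return int(np.argmax(fused)), fused               # predicted class and fused scores

# Example: the text modality is confident, the audio modality is not,
# so the fused prediction follows the text classifier.
text_scores = [0.2, 0.7, 0.1]
audio_scores = [0.6, 0.3, 0.1]
label, fused = confidence_weighted_fusion([text_scores, audio_scores], [0.9, 0.3])
```

In this toy example the text modality receives weight 0.75 and audio 0.25, so the fused scores are [0.3, 0.6, 0.1] and class 1 is predicted, even though audio alone would have predicted class 0.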


Notes

  1. https://github.com/soujanyaporia/multimodal-sentiment-analysis.

  2. https://github.com/SenticNet/MELD.

  3. https://github.com/soujanyaporia/multimodal-sentiment-analysis.

  4. https://github.com/soujanyaporia/multimodal-sentiment-analysis.


Acknowledgments

We gratefully acknowledge NVIDIA for providing a Tesla K40c GPU for the experiments in this research. This work was supported by a SIRF scholarship from the University of Western Australia (UWA) and by the Australian Research Council under Grant DP150100294.

Author information

Corresponding author

Correspondence to Uzair Nadeem.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Nadeem, U., Bennamoun, M., Sohel, F., Togneri, R. (2019). Learning-Based Confidence Estimation for Multi-modal Classifier Fusion. In: Gedeon, T., Wong, K., Lee, M. (eds.) Neural Information Processing. ICONIP 2019. Lecture Notes in Computer Science, vol. 11954. Springer, Cham. https://doi.org/10.1007/978-3-030-36711-4_26


  • DOI: https://doi.org/10.1007/978-3-030-36711-4_26


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-36710-7

  • Online ISBN: 978-3-030-36711-4

  • eBook Packages: Computer Science; Computer Science (R0)
