Abstract
We propose a novel confidence estimation method for the predictions of a multi-class classifier. Unlike existing methods, we learn a confidence estimator on a held-out subset of the training data. The confidence values predicted by the proposed system are used to improve the accuracy of multi-modal emotion and sentiment classification: the class scores from the individual modalities are superposed, weighted by their confidence values. Experimental results demonstrate that the accuracy of the proposed confidence-based fusion method is significantly superior to that of a classifier trained on any single modality, and that it outperforms other fusion methods.
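The core fusion step can be illustrated with a minimal sketch: each modality produces a vector of class scores, and the vectors are superposed after being weighted by that modality's estimated confidence. The function name, the linear weighting form, and the example numbers below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def confidence_weighted_fusion(scores, confidences):
    """Fuse per-modality class-score vectors into a single prediction.

    scores      -- list of 1-D arrays, one score vector per modality
    confidences -- list of scalars, the estimated confidence of each
                   modality's prediction (hypothetical weighting scheme)
    Returns the index of the winning class and the fused score vector.
    """
    fused = np.zeros_like(np.asarray(scores[0], dtype=float))
    for s, c in zip(scores, confidences):
        # Superpose each modality's scores, scaled by its confidence.
        fused += c * np.asarray(s, dtype=float)
    return int(np.argmax(fused)), fused

# Example: text and audio scores over three emotion classes.
text_scores = np.array([0.2, 0.5, 0.3])
audio_scores = np.array([0.6, 0.3, 0.1])
pred, fused = confidence_weighted_fusion(
    [text_scores, audio_scores], confidences=[0.9, 0.4])
# fused = [0.42, 0.57, 0.31]; the text modality's higher confidence
# lets its preferred class (index 1) win despite audio disagreeing.
```

With equal confidences the two modalities would tie-break toward audio's preferred class; the learned confidence estimator is what resolves such disagreements in favour of the more reliable modality.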
Acknowledgments
We gratefully acknowledge NVIDIA for providing a Tesla K40c GPU for the experiments involved in this research. This work was supported by the SIRF scholarship from the University of Western Australia (UWA) and by the Australian Research Council under Grant DP150100294.
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Nadeem, U., Bennamoun, M., Sohel, F., Togneri, R. (2019). Learning-Based Confidence Estimation for Multi-modal Classifier Fusion. In: Gedeon, T., Wong, K., Lee, M. (eds) Neural Information Processing. ICONIP 2019. Lecture Notes in Computer Science(), vol 11954. Springer, Cham. https://doi.org/10.1007/978-3-030-36711-4_26
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-36710-7
Online ISBN: 978-3-030-36711-4