User emotion recognition from a larger pool of social network data using active learning

Published in: Multimedia Tools and Applications

Abstract

The use of social networks has grown exponentially in recent years, and the large amount of data available in these networks can be exploited in many machine learning applications. This paper proposes a framework for an emotion recognition system that fetches large numbers of face images from social networks into a cloud. In the cloud, the problem of unlabeled facial images is handled by an active learning approach. An interlaced derivative pattern is used for feature extraction, and an extreme learning machine serves as the base classifier. Once an emotion is recognized in the cloud, it can be shared with end users according to their interests. Several experiments were performed using publicly available databases and heterogeneous images from social networks. The results show that the proposed framework can be used effectively for emotion recognition.
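The pipeline described above combines pool-based active learning with an extreme learning machine (ELM). As a rough illustration only, and not the authors' implementation, the sketch below pairs a minimal ELM (random, untrained hidden layer; output weights solved by pseudo-inverse) with least-confidence query selection over a toy two-class pool; the synthetic feature vectors merely stand in for the interlaced derivative pattern (IDP) histograms used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

class ELM:
    """Minimal extreme learning machine: random hidden layer, closed-form output weights."""
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        # Input weights and biases are drawn randomly and never trained.
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)       # hidden-layer activations
        T = np.eye(n_classes)[y]               # one-hot targets
        self.beta = np.linalg.pinv(H) @ T      # output weights via Moore-Penrose pseudo-inverse
        return self

    def decision(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

    def predict(self, X):
        return self.decision(X).argmax(axis=1)

# Toy unlabeled pool: two Gaussian blobs standing in for two emotion classes.
X_pool = np.vstack([rng.normal(0, 1, (200, 8)), rng.normal(3, 1, (200, 8))])
y_pool = np.array([0] * 200 + [1] * 200)       # oracle labels, hidden from the learner

# Seed the labeled set with a few examples of each class.
labeled = list(rng.choice(200, 5, replace=False)) + \
          list(200 + rng.choice(200, 5, replace=False))
unlabeled = [i for i in range(len(X_pool)) if i not in labeled]

for _ in range(5):                             # active-learning rounds
    clf = ELM(n_hidden=50).fit(X_pool[labeled], y_pool[labeled])
    scores = clf.decision(X_pool[unlabeled])
    # Least-confidence query: label the sample whose top score is lowest.
    query = unlabeled[int(np.argmin(scores.max(axis=1)))]
    labeled.append(query)                      # oracle supplies y_pool[query]
    unlabeled.remove(query)

accuracy = (clf.predict(X_pool) == y_pool).mean()
```

Each round retrains the ELM on the growing labeled set and asks an oracle (in the paper's setting, a human or the crowd) to label only the most uncertain image, which is the point of active learning on otherwise unlabeled social-network data.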



Acknowledgments

The authors extend their appreciation to the Deanship of Scientific Research at King Saud University, Riyadh, Saudi Arabia, for funding this work through research group project no. RGP-1436-023.

Author information

Corresponding author: Ghulam Muhammad.


Cite this article

Muhammad, G., Alhamid, M.F. User emotion recognition from a larger pool of social network data using active learning. Multimed Tools Appl 76, 10881–10892 (2017). https://doi.org/10.1007/s11042-016-3912-2

