DOI: 10.1145/3297156.3297189
research-article

Sentiment Analysis of Network Comments Based on GCNN

Published: 8 December 2018

ABSTRACT

Recently, recurrent neural networks (RNNs) have achieved satisfactory performance on sentiment analysis tasks. As a variant of the RNN, the gated recurrent unit (GRU) model has a clear advantage in handling long text sequences, but its performance is limited on short texts. To extract the contextual information of network comments effectively, while also exploiting the ability of convolutional neural networks (CNNs) to capture local features, we propose a gated convolutional neural network (GCNN) model that combines GRU and CNN. In this paper, we compare the proposed GCNN model with CNN and RNN models on standard datasets. Experimental results show that the GCNN model achieves state-of-the-art performance on sentiment analysis tasks, with F1-scores of 94.9% on the hotel-comment (ChnSentiCorp) corpus and 90.0% on the IMDB corpus.
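The abstract does not give implementation details, but the idea of pairing a CNN branch (local n-gram features) with a GRU branch (sequential context) can be sketched concretely. The following is a minimal NumPy forward pass of one plausible such hybrid; the layer sizes, random weights, fusion by concatenation, and single sigmoid output are illustrative assumptions, not the paper's actual architecture or hyperparameters.

```python
import numpy as np

# Illustrative hyperparameters (assumptions, not the paper's settings):
# vocab size, embedding dim, hidden size, convolution kernel width
V, D, H, K = 1000, 50, 64, 3

rng = np.random.default_rng(0)
emb = rng.normal(0, 0.1, (V, D))                # embedding table
Wc = rng.normal(0, 0.1, (K * D, H))             # CNN filter bank
Wz, Wr, Wh = (rng.normal(0, 0.1, (D + H, H)) for _ in range(3))  # GRU gates
Wo = rng.normal(0, 0.1, (2 * H,))               # classifier over fused features

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gcnn_forward(token_ids):
    """Score one comment in [0, 1]: CNN branch captures local features,
    GRU branch captures sequential context; the two are concatenated."""
    x = emb[token_ids]                          # (T, D) word vectors

    # CNN branch: width-K convolution + ReLU + max-over-time pooling
    windows = np.stack([x[t:t + K].ravel() for t in range(len(x) - K + 1)])
    conv = np.maximum(windows @ Wc, 0.0)        # (T-K+1, H)
    cnn_feat = conv.max(axis=0)                 # (H,)

    # GRU branch: standard update/reset gates, keep the last hidden state
    h = np.zeros(H)
    for t in range(len(x)):
        xh = np.concatenate([x[t], h])
        z = sigmoid(xh @ Wz)                    # update gate
        r = sigmoid(xh @ Wr)                    # reset gate
        h_tilde = np.tanh(np.concatenate([x[t], r * h]) @ Wh)
        h = (1 - z) * h + z * h_tilde

    # Fuse both branches and classify (binary sentiment)
    return sigmoid(np.concatenate([cnn_feat, h]) @ Wo)
```

In a real system the weights would be trained jointly end-to-end (e.g. with cross-entropy loss); this sketch only shows how the two branches' outputs could be fused for classification.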


Published in

CSAI '18: Proceedings of the 2018 2nd International Conference on Computer Science and Artificial Intelligence
December 2018, 641 pages
ISBN: 9781450366069
DOI: 10.1145/3297156

      Copyright © 2018 ACM


Publisher

Association for Computing Machinery, New York, NY, United States


      Qualifiers

      • research-article
      • Research
      • Refereed limited
