Research on Sentiment Classification for Online Reviews Based on BERT and BiGRU
-
Abstract: To address the problem of inaccurate sentiment classification of Internet users' online review texts, this paper proposes a sentiment classification model for online reviews based on BERT and BiGRU. The model first represents the text content as word vectors using the Word2Vec framework, then extracts deep dynamic representations of those word vectors with the BERT pre-trained language model, and finally feeds the representations into a BiGRU network for sentiment classification. Experimental results show that, compared with a bidirectional LSTM with an attention mechanism (W2V-BiLSTM-Attention), a conventional convolutional neural network (W2V-CNN), and a conventional recurrent neural network (W2V-RNN), the proposed model achieves the highest micro-F1 score (0.91) and the best classification performance.
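The pipeline described above (contextual token embeddings fed into a bidirectional GRU, whose final forward and backward states drive a softmax classifier over sentiment classes) can be sketched in plain NumPy. This is an illustrative toy, not the paper's implementation: the weights are random, the "BERT embeddings" are stand-in vectors, and all dimensions are made up for brevity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal single-direction GRU (update gate z, reset gate r)."""
    def __init__(self, input_dim, hidden_dim, rng):
        def w(*shape):
            return rng.normal(0.0, 0.1, shape)
        self.Wz, self.Uz, self.bz = w(hidden_dim, input_dim), w(hidden_dim, hidden_dim), np.zeros(hidden_dim)
        self.Wr, self.Ur, self.br = w(hidden_dim, input_dim), w(hidden_dim, hidden_dim), np.zeros(hidden_dim)
        self.Wh, self.Uh, self.bh = w(hidden_dim, input_dim), w(hidden_dim, hidden_dim), np.zeros(hidden_dim)
        self.hidden_dim = hidden_dim

    def run(self, xs):
        """Consume a sequence of input vectors; return the final hidden state."""
        h = np.zeros(self.hidden_dim)
        for x in xs:
            z = sigmoid(self.Wz @ x + self.Uz @ h + self.bz)          # update gate
            r = sigmoid(self.Wr @ x + self.Ur @ h + self.br)          # reset gate
            h_tilde = np.tanh(self.Wh @ x + self.Uh @ (r * h) + self.bh)
            h = (1.0 - z) * h + z * h_tilde
        return h

def bigru_classify(embeddings, fwd, bwd, W_out, b_out):
    """Run the sequence forward and backward, concatenate the two final
    hidden states, and apply a softmax classifier over sentiment classes."""
    h = np.concatenate([fwd.run(embeddings), bwd.run(embeddings[::-1])])
    logits = W_out @ h + b_out
    e = np.exp(logits - logits.max())                                 # stable softmax
    return e / e.sum()

# Usage with stand-in "BERT" embeddings: 5 tokens, 8-dim, 2 sentiment classes.
rng = np.random.default_rng(0)
fwd, bwd = GRUCell(8, 4, rng), GRUCell(8, 4, rng)
W_out, b_out = rng.normal(0.0, 0.1, (2, 8)), np.zeros(2)
probs = bigru_classify(rng.normal(0.0, 1.0, (5, 8)), fwd, bwd, W_out, b_out)
```

In a real system, `embeddings` would be the token-level output of the BERT encoder rather than random vectors, and the weights would be learned by backpropagation; the forward pass above only shows the data flow.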
-
Key words:
- deep learning
- sentiment classification
- BERT
- Word2Vec
- BiGRU
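The models in the abstract are compared by micro-averaged F1, which pools true positives, false positives, and false negatives over all classes before computing precision and recall; for single-label classification it coincides with accuracy. A minimal sketch (the label values and example data are illustrative, not from the paper's dataset):

```python
def micro_f1(y_true, y_pred, labels):
    """Micro-averaged F1: pool TP/FP/FN across all classes, then compute F1."""
    tp = fp = fn = 0
    for c in labels:
        tp += sum(1 for t, p in zip(y_true, y_pred) if p == c and t == c)
        fp += sum(1 for t, p in zip(y_true, y_pred) if p == c and t != c)
        fn += sum(1 for t, p in zip(y_true, y_pred) if p != c and t == c)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Example: 3 of 4 predictions correct.
score = micro_f1([0, 0, 1, 1], [0, 1, 1, 1], labels=[0, 1])  # -> 0.75
```

Because every prediction contributes exactly one pooled positive, single-label micro-F1 equals plain accuracy, which is why it is a natural headline number for this comparison.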