[1] LIU B, WEI H, NIU D, et al. Asking questions the human way: scalable question-answer generation from text corpus[C]//Proceedings of The Web Conference 2020. DOI: 10.1145/3366423.3380270.
[2] YANG Z, HU J, SALAKHUTDINOV R, et al. Semi-supervised QA with generative domain-adaptive nets[J]. arXiv preprint arXiv:1702.02206, 2017.
[3] COLBY K M, WEBER S, HILF F D. Artificial paranoia[J]. Artificial Intelligence, 1971. DOI: 10.1016/0004-3702(71)90002-6.
[4] LI Yan, HU Wenling. Research on an agricultural knowledge question answering system based on knowledge graph[J]. Journal of Smart Agriculture, 2021, 1(11): 20-22. (in Chinese)
[5] WANG Haoriqin, WANG Xiaomin, MIAO Yisheng, et al. Question similarity matching in agricultural Q&A communities based on BERT-Attention-DenseBiGRU[J]. Transactions of the Chinese Society for Agricultural Machinery, 2022, 53(1): 244-252. (in Chinese)
[6] WU Saisai, ZHOU Ailian, XIE Nengfu, et al. Construction of a visual knowledge graph of crop diseases and pests based on deep learning[J]. Transactions of the Chinese Society of Agricultural Engineering, 2020, 36(24): 177-185. (in Chinese)
[7] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[J]. arXiv preprint arXiv:1810.04805, 2018.
[8] WEI J, REN X, LI X, et al. NEZHA: neural contextualized representation for Chinese language understanding[J]. arXiv preprint arXiv:1909.00204, 2019.
[9] CUI Y, CHE W, LIU T, et al. Pre-training with whole word masking for Chinese BERT[J]. arXiv preprint arXiv:1906.08101, 2019.
[10] MICIKEVICIUS P, NARANG S, ALBEN J, et al. Mixed precision training[J]. arXiv preprint arXiv:1710.03740, 2017.
[11] YOU Y, LI J, HSEU J, et al. Reducing BERT pre-training time from 3 days to 76 minutes[J]. arXiv preprint arXiv:1904.00962, 2019.
[12] DONG L, YANG N, WANG W, et al. Unified language model pre-training for natural language understanding and generation[C]//Proceedings of the 33rd International Conference on Neural Information Processing Systems. 2019: 13063-13075.
[13] RADFORD A, NARASIMHAN K, SALIMANS T, et al. Improving language understanding by generative pre-training[R/OL]. OpenAI, 2018. https://www.cs.ubc.ca/~amuham01/LING530/papers/radford2018improving.pdf.
[14] SUN Baoshan, TAN Hao. Research on automatic text summarization technology based on the ALBERT-UniLM model[J/OL]. http://kns.cnki.net/kcms/detail/11.2127.TP.20210802.0922.002.html. (in Chinese)
[15] MADRY A, MAKELOV A, SCHMIDT L, et al. Towards deep learning models resistant to adversarial attacks[J]. arXiv preprint arXiv:1706.06083, 2017.
[16] GOODFELLOW I J, SHLENS J, SZEGEDY C. Explaining and harnessing adversarial examples[J]. arXiv preprint arXiv:1412.6572, 2014.
[17] ZHOU Qingyu, ZHOU Ming. A survey of text question generation techniques based on deep neural networks[J]. Intelligent Computer and Applications, 2020, 10(8): 10-13, 18. (in Chinese)
[18] WU Yunfang, ZHANG Yangsen. A survey of question generation research[J]. Journal of Chinese Information Processing, 2021, 35(7): 1-9. (in Chinese)
[19] LIN C Y. ROUGE: a package for automatic evaluation of summaries[C]//Proceedings of the Workshop on Text Summarization Branches Out. 2004: 74-81.
[20] PAPINENI K, ROUKOS S, WARD T, et al. BLEU: a method for automatic evaluation of machine translation[C]//Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. 2002. DOI: 10.3115/1073083.1073135.