
Academic Journal of Computing & Information Science, 2023, 6(9); doi: 10.25236/AJCIS.2023.060901.

ASMAM: An Answer Summarization Mechanism Based on Multi-layer Attention Model


Jian Song1, Meixuan Jiang2

Corresponding Author:
Jian Song

1Jiangsu Expressway Network Operation & Management Co., LTD, Nanjing, 210049, China

2The University of Warwick, Coventry, CV47AL, United Kingdom


Abstract

At present, deep learning technologies are widely used in natural language processing tasks such as text summarization. In Community Question Answering (CQA), answer summarization helps users obtain a complete answer quickly. However, current answer summarization mechanisms still suffer from problems such as semantic inconsistency and word repetition. To address these problems, we propose a novel mechanism called ASMAM, which stands for Answer Summarization based on Multi-layer Attention Mechanism. Building on the traditional sequence-to-sequence (seq2seq) framework, we introduce a self-attention mechanism during sentence encoding and a multi-head attention mechanism during text encoding, which improves the text representation ability of the model. To overcome the "long-distance dependence" problem of the Recurrent Neural Network (RNN) and the large parameter count of the Long Short-Term Memory (LSTM) network, we use the gated recurrent unit (GRU) as the neuron on both the encoder and decoder sides. Experiments on the Yahoo! Answers dataset demonstrate that the generated summaries are superior to those of the benchmark model in coherence and fluency under the ROUGE evaluation system.
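The ROUGE evaluation mentioned above compares the n-gram overlap between a generated summary and a reference summary. A minimal sketch of ROUGE-N recall (after Lin [22]) is shown below; `rouge_n` is an illustrative helper written for this page, not the authors' evaluation code, and real evaluations typically also apply stemming and report precision and F-measure.

```python
from collections import Counter

def rouge_n(candidate: str, reference: str, n: int = 1) -> float:
    """ROUGE-N recall: clipped n-gram overlap divided by the
    number of n-grams in the reference summary."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    cand = ngrams(candidate.split(), n)
    ref = ngrams(reference.split(), n)
    # Clip each n-gram's count by its count in the candidate,
    # so repeated words are not rewarded beyond the reference.
    overlap = sum(min(cand[g], ref[g]) for g in ref)
    return overlap / max(sum(ref.values()), 1)

# Example: 5 of the reference's 6 unigrams appear in the candidate.
score = rouge_n("the cat sat on the mat", "the cat is on the mat", n=1)
print(round(score, 3))  # → 0.833
```

ROUGE-1 and ROUGE-2 computed this way are the standard recall-oriented scores; the clipping step is what prevents a summary that repeats a frequent word from inflating its score.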


Keywords: answer summarization, attention mechanism, encoder-decoder framework, recurrent neural network

Cite This Paper

Jian Song, Meixuan Jiang. ASMAM: An Answer Summarization Mechanism Based on Multi-layer Attention Model. Academic Journal of Computing & Information Science (2023), Vol. 6, Issue 9: 1-10. https://doi.org/10.25236/AJCIS.2023.060901.


[1] M. Gambhir and V. Gupta (2017). Recent automatic text summarization techniques: a survey. Artificial Intelligence Review, vol.47, no.1, pp.1-66.

[2] U. Hahn and I. Mani (2000). The challenges of automatic summarization. Computer, vol.33, no.11, pp.29-36.

[3] Y. D. Liu, J. Bian and E. Agichtein (2008). Predicting information seeker satisfaction in community question answering. Proceedings of The 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp.483-490.

[4] R. Barzilay and M. Elhadad (1997). Using lexical chains for text summarization. Proceedings of The ACL/EACL 1997 Workshop on Intelligent Scalable Text Summarization, pp.10-17.

[5] J. M. Conroy and D. P. O'Leary (2001). Text summarization via hidden Markov models. Proceedings of The 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp.406-407.

[6] X. J. Wan and J. W. Yang (2008). Multi-document summarization using cluster-based link analysis. Proceedings of The 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp.299-306.

[7] G. Erkan and D. R. Radev (2004). LexRank: graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research, vol.22, pp.457-479.

[8] R. Nallapati, F. F. Zhai and B. W. Zhou (2017). SummaRuNNer: A recurrent neural network based sequence model for extractive summarization of documents. Proceedings of The 31st AAAI Conference on Artificial Intelligence, pp.3075-3081.

[9] Q. Y. Zhou, N. Yang, F. R. Wei, S. H. Huang, M. Zhou and T. J. Zhao (2018). Neural document summarization by jointly learning to score and select sentences. Proceedings of The 56th Annual Meeting of the Association for Computational Linguistics, pp.654-663.

[10] H. Y. Fang, W. M. Lu, F. Wu, Y. Zhang, X. D. Shang, J. Shao and Y. T. Zhuang (2015). Topic aspect-oriented summarization via group selection. Neurocomputing, vol.149, pp.1613-1619.

[11] M. Yasunaga, R. Zhang, K. Meelu, A. Pareek, K. Srinivasan and D. Radev (2017). Graph-based Neural Multi-Document Summarization. Proceedings of the SIGNLL Conference on Computational Natural Language Learning, pp.452-462.

[12] J. U. Heu, I. Qasim and D. H. Lee (2015). FoDoSu: Multi-document summarization exploiting semantic analysis based on social Folksonomy. Information Processing & Management, vol.51, no.1, pp.212-225.

[13] Z. Q. Cao, F. R. Wei, L. Dong, S. J. Li and M. Zhou (2015). Ranking with recursive neural networks and its application to multi-document summarization. Proceedings of The 29th AAAI Conference on Artificial Intelligence, pp.2153–2159.

[14] Y. Ko and J. Seo (2008). An effective sentence-extraction technique using contextual information and statistical approaches for text summarization. Pattern Recognition Letters, vol.29, no.9, pp.1366-1371.

[15] Y. Ko and J. Seo (2004). Learning with unlabeled data for text categorization using a bootstrapping and a feature projection technique. Proceedings of The 42nd Annual Meeting of the Association for Computational Linguistics, pp.255-262.

[16] S. Banerjee, P. Mitra and K. Sugiyama (2015). Multi-document abstractive summarization using ILP based multi-sentence compression. Proceedings of The 24th International Conference on Artificial Intelligence, pp.1208-1214.

[17] I. F. Moawad and M. Aref (2012). Semantic graph reduction approach for abstractive text summarization. Proceedings of The 7th International Conference on Computer Engineering & Systems (ICCES), pp.132-138.

[18] D. Bahdanau, K. Cho and Y. Bengio (2014). Neural Machine Translation by Jointly Learning to Align and Translate. arXiv preprint arXiv:1409.0473.

[19] Z. Yang, D. Yang, C. Dyer, X. D. He, A. Smola and E. Hovy (2016). Hierarchical Attention Networks for Document Classification. Proceedings of The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp.1480-1489.

[20] Z. H. Lin, M. W. Feng, C. N. Santos, M. Yu, B. Xiang, B. W. Zhou and Y. Bengio (2017). A Structured Self-attentive Sentence Embedding. Proceedings of The 5th International Conference on Learning Representations (ICLR), pp.1-15.

[21] M. Tomasoni and M. Huang (2010). Metadata-aware measures for answer summarization in community Question Answering. Proceedings of The 48th Annual Meeting of the Association for Computational Linguistics, pp.760-769.

[22] C. Y. Lin (2004). ROUGE: A Package for Automatic Evaluation of Summaries. Proceedings of the Workshop on Text Summarization Branches Out (WAS 2004).

[23] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser and I. Polosukhin (2017). Attention is all you need. Proceedings of Advances in Neural Information Processing Systems, pp.5998–6008.