Hercules

Hercules: Attributable and Scalable Opinion Summarization

By representing sentences from reviews as paths through a discrete hierarchy, we can generate abstractive summaries that are informative and attributable, and that scale to hundreds of input reviews.

We propose a method for unsupervised opinion summarization that encodes sentences from customer reviews into a hierarchical discrete latent space, then identifies common opinions based on the frequency of their encodings. We can generate both abstractive summaries, by decoding these frequent encodings, and extractive summaries, by selecting the sentences assigned to the same frequent encodings. Our method is attributable, because the model identifies the sentences used to generate the summary as part of the summarization process. It scales easily to many hundreds of input reviews, because aggregation is performed in the latent space rather than over long sequences of tokens. We also demonstrate that our model enables a degree of control, generating aspect-specific summaries by restricting the model to parts of the encoding space that correspond to desired aspects (e.g., location or food). Automatic and human evaluation on two datasets from different domains demonstrates that our method generates summaries that are more informative than prior approaches and better grounded in the input reviews.
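The aggregation step above can be illustrated with a toy sketch. This is not the paper's implementation: in the real model the path encodings are learned by the hierarchical quantizer, whereas here the paths and sentences are made up by hand. The sketch only shows the idea of counting discrete path encodings, keeping the most frequent ones as "common opinions", and selecting their assigned sentences for an attributable extractive summary.

```python
from collections import Counter

# Hypothetical inputs: each review sentence is assumed to have already been
# encoded as a path through a discrete hierarchy, represented as a tuple of
# codebook indices (one per level). Paths and sentences are invented here.
sentence_paths = {
    "The rooms were spotless.":       (3, 1, 7),
    "Very clean rooms throughout.":   (3, 1, 7),
    "Housekeeping kept it spotless.": (3, 1, 7),
    "Breakfast was disappointing.":   (5, 2, 0),
    "The buffet was underwhelming.":  (5, 2, 0),
    "Parking was easy to find.":      (8, 4, 2),
}

def frequent_encodings(paths, top_k=2):
    """Count how many sentences map to each path and keep the most
    common paths -- these stand in for the 'common opinions'."""
    counts = Counter(paths.values())
    return [path for path, _ in counts.most_common(top_k)]

def extractive_summary(paths, top_k=2):
    """Select the sentences assigned to the frequent paths; because the
    selected sentences are identified directly, the summary is
    attributable to its input by construction."""
    keep = set(frequent_encodings(paths, top_k))
    return [sent for sent, path in paths.items() if path in keep]

print(frequent_encodings(sentence_paths))  # the two most common opinion paths
print(extractive_summary(sentence_paths))  # their supporting sentences
```

In the full model, an abstractive summary would instead be produced by decoding each frequent path back into text, and aggregation over paths (rather than tokens) is what lets the approach scale to hundreds of reviews.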

@inproceedings{hosking-etal-2023-attributable,
  title = "Attributable and Scalable Opinion Summarization",
  author = "Hosking, Tom  and
      Tang, Hao  and
      Lapata, Mirella",
  booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
  month = jul,
  year = "2023",
  address = "Toronto, Canada",
  publisher = "Association for Computational Linguistics",
  url = "https://aclanthology.org/2023.acl-long.473",
  pages = "8488--8505",
}