🔦 Paper Spotlight: Beyond Goldfish Memory

How can we tackle the problem? 🤔

Facebook AI Research recently addressed methods for long-term open-domain conversation in their work “Beyond Goldfish Memory: Long-Term Open-Domain Conversation”¹. Additionally, they collected an English dataset entitled Multi-Session Chat (MSC), consisting of human-human crowdworker chats spanning up to five sessions, each with up to 14 utterances. Each session also contains annotations summarizing the essential points discussed in previous exchanges, which are used to fuel the following conversations.

Figure 1: Example of a conversation from the MSC dataset.
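As a rough illustration of how such data can be organized, a single multi-session conversation with its summary annotations might be represented along these lines (the field names below are hypothetical, not the MSC dataset's actual schema):

```python
# Illustrative structure of one multi-session conversation; the field names
# are hypothetical and not the MSC dataset's actual schema.
conversation = {
    "sessions": [
        {
            "utterances": [  # up to 14 utterances per session
                {"speaker": "A", "text": "I just adopted a puppy last week!"},
                {"speaker": "B", "text": "Congrats! What breed is it?"},
            ],
            # Annotated summary of the essential points, used to ground
            # the sessions that follow.
            "summary": [
                "Speaker A recently adopted a puppy.",
            ],
        },
        # ... further sessions, up to five in total
    ]
}
print(conversation["sessions"][0]["summary"])
```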
To cope with the growing dialogue history, the authors investigate two types of models (a brief code sketch follows this list):
  1. A Retrieval-Augmentation method, which uses a retrieval system to find and select which parts of the previous context to include in the encoding.
  2. A Summarization Memory-Augmentation method, which summarizes the knowledge from previous dialogues and stores only that condensed information, thus being more efficient than encoding the full history as in the former.
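To make the contrast concrete, here is a minimal sketch of the retrieval-augmentation idea. The scoring heuristic (simple word overlap) is a toy stand-in for the retriever used in the paper, and all function names below are illustrative assumptions:

```python
def overlap_score(query: str, utterance: str) -> float:
    """Toy relevance score: fraction of query words found in the utterance.
    A stand-in for the actual retriever used in the paper."""
    q_words = set(query.lower().split())
    u_words = set(utterance.lower().split())
    return len(q_words & u_words) / max(len(q_words), 1)


def retrieve_context(history: list[str], last_message: str, k: int = 3) -> list[str]:
    """Select the k past utterances most relevant to the latest message,
    instead of encoding the entire (potentially very long) history."""
    ranked = sorted(history, key=lambda u: overlap_score(last_message, u), reverse=True)
    return ranked[:k]


history = [
    "I just adopted a puppy last week!",
    "Work has been really stressful lately.",
    "My puppy already chewed through one pair of headphones.",
    "I'm planning a trip to Lisbon next month.",
]
context = retrieve_context(history, "How is the puppy doing?", k=2)
# The selected utterances are prepended to the model input, keeping the encoded
# context short while still surfacing relevant past information.
print(context)
```

The summarization memory-augmentation variant would instead keep only a short list of summary sentences (like the crowdworker annotations shown earlier) and feed those to the model, avoiding the need to re-encode the raw history at all.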

Results 📊

Throughout their experiments, the authors observed an improvement in perplexity (defined as the exponentiated average negative log-likelihood of a sequence) when adding the dialogue history, compared to a no-context scenario. Performance improved further when the models used the session summaries annotated by crowdworkers, which are potentially more informative than the raw dialogue history. The gain is even more noticeable when evaluating the opening responses of a session.
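As a quick refresher on the metric, perplexity can be computed directly from per-token probabilities. The sketch below uses made-up probabilities purely for illustration:

```python
import math

def perplexity(token_probs: list[float]) -> float:
    """Exponentiated average negative log-likelihood of a sequence."""
    avg_nll = sum(-math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_nll)

# Hypothetical per-token probabilities assigned by a model to a response;
# lower perplexity means the model was less "surprised" by the sequence.
print(perplexity([0.25, 0.10, 0.60, 0.05]))  # ≈ 6.04
```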
