The model’s performance on summarization depends most of all on how popular the book is. In this setup you’re relying on the model having memorized the book during training, which only works well if a lot has been written about it. It also means the book can’t be too new to have made it into the training data.
TL;DR: Just use a long-context model like Gemini and put the whole book into the prompt when asking for a summary, instead of trusting the LLM to remember it.
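A minimal sketch of that long-context approach, assuming the `google-generativeai` Python SDK and a `GEMINI_API_KEY` environment variable; the model name, file path, and prompt wording are all illustrative, not prescribed by the comment:

```python
import os


def build_summary_prompt(book_text: str) -> str:
    # Put the full text in the prompt so the model summarizes what it
    # actually reads, rather than relying on memorized training data.
    return (
        "Summarize the following book. Base the summary only on the "
        "text provided below, not on anything you remember about it.\n\n"
        + book_text
    )


def summarize_book(path: str) -> str:
    # Assumes the `google-generativeai` package is installed and an
    # API key is configured; "gemini-1.5-pro" is an illustrative choice
    # of long-context model.
    import google.generativeai as genai

    genai.configure(api_key=os.environ["GEMINI_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-pro")
    with open(path, encoding="utf-8") as f:
        book_text = f.read()
    response = model.generate_content(build_summary_prompt(book_text))
    return response.text


if __name__ == "__main__" and os.environ.get("GEMINI_API_KEY"):
    print(summarize_book("book.txt"))
```

The point of the explicit instruction in the prompt is to steer the model toward the supplied text and away from whatever it memorized about the book, which is exactly the failure mode the comment above is describing.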
Replies (1)
I did it with very popular books.
I found that Gemini wasn’t making the summarization errors that ChatGPT was.