Prompt injection is still a big deal, sadly. There are actually dedicated models for summarization out there.
You could fine-tune an existing summarization model to take additional context, like reply chains, passed in via special tokens.
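A minimal sketch of what that input serialization could look like: the post and its reply chain get flattened into one string with separator tokens before fine-tuning. The token names `<post>` and `<reply>` here are hypothetical; in practice you would add whatever markers you choose to the tokenizer's vocabulary as special tokens.

```python
# Sketch: flatten a post plus its reply chain into a single model input
# string using separator tokens. The <post>/<reply> markers are
# hypothetical examples, not tokens from any real model's vocabulary.

def build_input(post: str, replies: list[str]) -> str:
    """Serialize a post and its replies into one summarization input."""
    parts = [f"<post> {post}"]
    for reply in replies:
        parts.append(f"<reply> {reply}")
    return " ".join(parts)

example = build_input(
    "Prompt injection is still a big problem for summarizers.",
    ["Dedicated summarization models exist.",
     "Fine-tuning on reply chains could help too."],
)
print(example)
```

During fine-tuning, pairs of these serialized threads and human-written summaries would become the training examples, so the model learns that `<reply>` segments are context rather than instructions.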
Replies (1)
Curator LLMs for doomer and whitepill feeds