Came across a library called context-compressor: it uses models like BERT, T5, and BART to compress text while keeping the meaning intact. Pretty handy for RAG pipelines or for cutting token costs on OpenAI/Claude API calls.
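The post doesn't show the library's API, so here's a rough illustration of the underlying idea only: shrinking a prompt before an API call by keeping the most informative sentences. This is a toy frequency-based extractive heuristic, not context-compressor's actual method (which, per the post, relies on models like BERT/T5/BART), and the function name and signature are made up for the sketch.

```python
# Sketch only: a tiny extractive compressor that keeps the highest-scoring
# sentences until a target ratio is met. Stands in for the general idea of
# prompt compression; the real library uses model-based compression instead.
import re
from collections import Counter


def compress(text: str, ratio: float = 0.5) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"\w+", text.lower()))

    # Score each sentence by the average corpus frequency of its words,
    # so sentences built from the document's dominant vocabulary rank higher.
    def score(sentence: str) -> float:
        tokens = re.findall(r"\w+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    budget = max(1, int(len(sentences) * ratio))
    ranked = sorted(range(len(sentences)),
                    key=lambda i: score(sentences[i]),
                    reverse=True)
    keep = sorted(ranked[:budget])  # restore original sentence order
    return " ".join(sentences[i] for i in keep)
```

Dropping half the sentences roughly halves the tokens billed per call, which is the cost-cutting angle the post mentions.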
submitted by /u/Leading_Opposite_280 to r/Python