Python library that shrinks text for LLMs by up to 80%

Came across this library called context-compressor. It uses models like BERT, T5, and BART to compress text while keeping the meaning intact. Pretty handy for RAG pipelines or for cutting token costs on OpenAI/Claude API calls.

PyPI: https://pypi.org/project/context-compressor/
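
The post doesn't show the library's actual API, so here's a rough sketch of the general idea using the Hugging Face transformers summarization pipeline with BART. This is not context-compressor's interface, just an illustration of compressing a retrieved passage before sending it to an LLM and measuring the token savings.

```python
# Sketch of transformer-based context compression (NOT context-compressor's API):
# summarize a long passage with BART and compare token counts before/after.
from transformers import AutoTokenizer, pipeline

MODEL = "facebook/bart-large-cnn"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
summarizer = pipeline("summarization", model=MODEL)

# Example retrieved document chunk (stand-in for real RAG context)
long_context = (
    "Large language models are billed per token, so long retrieved contexts "
    "directly increase API costs and latency. One mitigation is to compress "
    "the retrieved passages with a smaller sequence-to-sequence model such as "
    "BART or T5 before inserting them into the prompt. The compressed text "
    "should preserve the facts the downstream model needs to answer the query "
    "while dropping redundant phrasing, boilerplate, and off-topic sentences."
)

# Abstractive compression: shorter output that keeps the key content
compressed = summarizer(
    long_context, max_length=60, min_length=20, do_sample=False
)[0]["summary_text"]

before = len(tokenizer.encode(long_context))
after = len(tokenizer.encode(compressed))
print(f"tokens: {before} -> {after} ({1 - after / before:.0%} reduction)")
print(compressed)
```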

submitted by /u/Leading_Opposite_280 to r/Python

