How to process the entire context of a long codebase that exceeds an LLM's token limit for code analysis?

I'm developing an LLM-based application for code analysis, but because of token limits we can't feed the entire codebase into the input, and naively dividing it into chunks can lose relevant code context. What can be done in this scenario?
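One common workaround for the chunking problem described above is structure-aware splitting: instead of cutting the source at fixed token windows, split it at semantic boundaries (functions, classes) so each chunk is a self-contained unit. A minimal Python sketch using the standard-library `ast` module (the function name `chunk_by_toplevel_defs` is illustrative, not from any particular library):

```python
import ast
import textwrap

def chunk_by_toplevel_defs(source: str) -> list[str]:
    """Split Python source into chunks aligned with top-level
    function/class definitions, so each chunk is a coherent unit
    rather than an arbitrary token window."""
    tree = ast.parse(source)
    lines = source.splitlines()
    chunks = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            start = node.lineno - 1       # lineno is 1-based
            end = node.end_lineno         # end_lineno needs Python 3.8+
            chunks.append("\n".join(lines[start:end]))
    return chunks

example = textwrap.dedent("""\
    def add(a, b):
        return a + b

    class Greeter:
        def hello(self):
            return "hi"
""")

chunks = chunk_by_toplevel_defs(example)
# one chunk per top-level definition: the function and the class
```

Each chunk can then be summarized or embedded separately, and only the chunks relevant to a query are retrieved and placed in the prompt, which keeps context local to a meaningful unit of code.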

submitted by /u/gandakda to r/learnmachinelearning

