Larger context
Serge
With a recent update we can actually see how many lines of code Cascade is analyzing, and it's only 50-200 lines per file! And as if that weren't bad enough, it costs 1 flow credit per 200 lines checked... That's a huge waste of time and credits, and if it stops early and misses important parts of the file, it can't even produce correct code.
Marvin
The limited context comes from the model used to power Codeium / Windsurf.
You can either carefully manage the precious context you do have; see the blog post below for details on how to do that.
Or you can upvote the feature request for adding the Gemini 2.0 model, which gives you 1 million tokens of context.
Abdul Fatah
Marvin I also use this approach of starting with documents and task-specific chats, and it works really well. It also helps to ask Windsurf to generate documents for better explainability and to maintain a log of how the code evolved.
Christian Brunetti
Marvin This approach only partially works, in my opinion. I noticed that working in small chats improves the LLM's behavior, but when it comes to multi-file changes (such as code or documentation) it gets confused very quickly.
Serge
I agree, it shouldn't have to analyze a file more than once. Analyze the FULL file any time it analyzes a file.
Роман Б
I'm ready to pay more for this feature!
Sachin Bhat
Would be nice if larger-context models were supported out of the box. One approach would be to index the codebase and use that index to gather the relevant context based on the prompt, as sketched below.
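A minimal sketch of what that could look like, using TF-IDF from scikit-learn as a stand-in for a real embedding model (the project path, chunk size, and query here are just illustrative assumptions, not how Windsurf actually works):

```python
from pathlib import Path
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def chunk_file(path, lines_per_chunk=50):
    # Split a source file into fixed-size line chunks so retrieval can
    # target the relevant part of a file instead of just its first N lines.
    lines = path.read_text(errors="ignore").splitlines()
    for start in range(0, len(lines), lines_per_chunk):
        yield (path, start + 1, "\n".join(lines[start:start + lines_per_chunk]))

def build_index(root, pattern="**/*.py"):
    # Index every chunk of every matching file with TF-IDF
    # (a real tool would likely use learned embeddings instead).
    chunks = [c for p in Path(root).glob(pattern) for c in chunk_file(p)]
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform([text for _, _, text in chunks])
    return chunks, vectorizer, matrix

def retrieve(prompt, chunks, vectorizer, matrix, k=5):
    # Rank all chunks by cosine similarity to the prompt; return the top k.
    scores = cosine_similarity(vectorizer.transform([prompt]), matrix)[0]
    ranked = sorted(zip(scores, range(len(chunks))), reverse=True)
    return [chunks[i] for _, i in ranked[:k]]

# Hypothetical usage: pull only the most relevant chunks into the prompt
# context, rather than scanning whole files top to bottom.
chunks, vec, mat = build_index("my_project")
for path, first_line, text in retrieve("where is the auth token refreshed?", chunks, vec, mat):
    print(f"{path}:{first_line}")
```

This way the context budget is spent on the chunks most relevant to the prompt, wherever they sit in the file, instead of on the first few hundred lines of each file.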
Dean Mikan
When it "Analyses" a file, looking at just the first 0-50 lines is simply not enough to gather context; it misses almost all of the business logic most of the time.
When looking at a React component, it never even reaches the JSX.