Jack C Crawford
MCP?
Cicero Jacobi
This is a must-have!
Zach Drake
This would also let me code on a plane, or a train, etc.
Martin
This would be a game changer for enterprise
Eric Strohmaier
YESS!! I don't want to have to switch editors when I'm on a flight or otherwise offline.
Wayne Lundall
Eric Strohmaier, look in the Cascade folder for a database; it contains all the embeddings. You could train your own LLM to use it and keep going from there.
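If anyone wants to poke at it, a quick script like this could show what's actually stored there. The path, and the assumption that it's SQLite at all, are guesses on my part, so adjust for your OS and Windsurf version:

    import sqlite3
    from pathlib import Path

    # Hypothetical location -- the real Cascade data directory may differ
    # by OS and Windsurf version.
    cascade_dir = Path.home() / ".codeium" / "windsurf" / "cascade"

    for db_path in cascade_dir.glob("*.db"):
        print(f"-- {db_path.name}")
        conn = sqlite3.connect(db_path)
        try:
            # List tables to see whether embeddings are actually stored here.
            tables = conn.execute(
                "SELECT name FROM sqlite_master WHERE type='table'"
            ).fetchall()
            for (name,) in tables:
                print("  table:", name)
        finally:
            conn.close()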
Billy McCord
I would be fine with paying even up to the $10/month if given the choice to use local Ollama models as well as the pro credits. I bet there are plenty of problems you could solve entirely with a local LLM, while still having the option to spend your credits to submit work for review by Claude or OpenAI. It would be a force multiplier. I have been working on an app for a few days and my pro plan is already saying "low on credits." I think it would be a win/win: it would ease the pressure on Codeium's bulk AI token buys and let users stretch their budget much farther!
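Something like the fallback below is the routing I have in mind. It's only a sketch against Ollama's local REST API (the default localhost:11434 endpoint); the credit-backed Claude/OpenAI call is left as a placeholder:

    import requests

    OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint

    def ask_local(prompt: str, model: str = "llama3") -> str | None:
        """Try the local Ollama model first; return None if it isn't reachable."""
        try:
            resp = requests.post(
                OLLAMA_URL,
                json={
                    "model": model,
                    "messages": [{"role": "user", "content": prompt}],
                    "stream": False,
                },
                timeout=120,
            )
            resp.raise_for_status()
            return resp.json()["message"]["content"]
        except requests.RequestException:
            return None

    def ask_cloud(prompt: str) -> str:
        # Placeholder: wire this to the credit-backed Claude/OpenAI review step.
        raise NotImplementedError

    def ask(prompt: str) -> str:
        # The local model handles the bulk of the work for free...
        answer = ask_local(prompt)
        if answer is not None:
            return answer
        # ...and paid credits are only spent when the local model can't answer.
        return ask_cloud(prompt)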
Luke Jen O'Connor
I'd be willing to pay for this feature.
Euan Jonker
... or llama.cpp
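llama.cpp's llama-server (and Ollama too, for that matter) already speaks the OpenAI chat-completions protocol, so in principle the integration is just a configurable base URL. A minimal sketch, assuming a llama-server running on its default port:

    from openai import OpenAI

    # Point the standard OpenAI client at the local llama-server;
    # the api_key only needs to be a non-empty string.
    client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-no-key-required")

    reply = client.chat.completions.create(
        model="local",  # llama-server serves whichever model it was started with
        messages=[{"role": "user", "content": "Explain this stack trace briefly."}],
    )
    print(reply.choices[0].message.content)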
singularity
Would be really great to have, say, an option for a one-time payment in order to use a local LLM (i.e., pay once to access the integration, but the model is fully run on your own machine). For individual users, I think this would (at least partially) offset the cost of creating the compatibility/integration, and it leaves the door open to charging enterprises (who might need to run local models for security or other reasons) for real profit.
Also, this would provide some baseline usability for Windsurf offline.
Billy McCord
singularity
You make excellent points. As a manager at a medium-to-large company, I would definitely be interested in a corporate plan that includes the ability to run on a large, company-based LLM server or cluster. I see significant value in paying a monthly cost per user to ensure continuous updates and new features.
Right now, the real limitations seem to be the overstressed Codeium infrastructure and integrations with platforms like Claude and OpenAI. Moving these capabilities closer to the "edge" and giving companies and users the ability to manage their own platforms/performance would be a game-changer.
While this approach may go against the prevailing trend of cloud-based solutions dominating corporate culture, there’s a strong case to be made for giving companies local control over their platforms—even if they choose to host them within a cloud service.
I have my own private, local LLM/GPU server that I use for personal projects and to help build automated solutions at work. Windsurf is a force multiplier that is so close to prime time but needs a few additional features to get over the top.
With 30+ years of IT operations experience, I genuinely hope Codeium would consider reaching out to discuss this further.
Olivier
Billy McCord, with as long an IT history and the same aims and scope as you, I agree this would relieve much of the stress on internal compliance teams, who worry that chats sent through public LLMs could be captured and/or used in further training. I am an enterprise architecture manager in finance, and the last thing we want is data exfiltration, knowing or unknowing, in any shape or form, so this would vastly reduce the concerns raised by companies with compliance governance. It has the added benefit of reducing the load on public LLM infrastructure and Codeium's systems. I run a 6-node AI cluster in our lab to put Windsurf and local LLMs through their paces, all in isolation, and have a homelab to test other facets of the same coin.