Model Blacklisting in Cascade
Paul
Implement a model blacklisting feature in Cascade that allows users to selectively disable specific AI models, including DeepSeek models, to address IP and privacy concerns. This would ensure that a user doesn't accidentally select a model with a questionable privacy policy and share code with it.
## Key Components:
### User-Configurable Blacklist:
- Add a settings panel where users can view all available models.
- Provide toggles or checkboxes to enable/disable each model (a minimal sketch follows this list).
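A minimal sketch of what the per-model toggle data behind such a panel could look like. The `ModelBlacklistSettings` shape and the model identifiers are illustrative assumptions, not Cascade's actual settings schema:

```typescript
// Hypothetical shape of a per-model toggle setting; names are illustrative,
// not Cascade's actual configuration schema.
interface ModelBlacklistSettings {
  // Map of model identifier -> enabled flag shown as a toggle in the panel.
  models: Record<string, boolean>;
}

// Example: everything enabled except the models the user has opted out of.
const userSettings: ModelBlacklistSettings = {
  models: {
    "gpt-4o": true,
    "claude-3.5-sonnet": true,
    "deepseek-v3": false, // disabled via the settings panel toggle
  },
};

// Filter the model picker down to only the models the user has left enabled.
function selectableModels(settings: ModelBlacklistSettings): string[] {
  return Object.entries(settings.models)
    .filter(([, enabled]) => enabled)
    .map(([id]) => id);
}

console.log(selectableModels(userSettings)); // ["gpt-4o", "claude-3.5-sonnet"]
```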
## Optional Components:
### Global and Project-Specific Settings:
- Allow blacklisting at both the global (user-wide) and project-specific levels (see the sketch below).
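One possible way to combine the two scopes, assuming a most-restrictive-wins rule (the effective blacklist is the union of both levels). The function and model names are illustrative only:

```typescript
// Illustrative merge of global and project-level blacklists; the actual
// precedence rules would be a product decision. Here a model is blocked
// if either scope has disabled it.
type Blacklist = Set<string>;

function effectiveBlacklist(globalList: Blacklist, projectList: Blacklist): Blacklist {
  return new Set([...globalList, ...projectList]); // union: most restrictive wins
}

const globalBlacklist: Blacklist = new Set(["deepseek-v3"]);
const projectBlacklist: Blacklist = new Set(["some-experimental-model"]);

const blocked = effectiveBlacklist(globalBlacklist, projectBlacklist);
console.log(blocked.has("deepseek-v3")); // true -> hidden in every project
```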
### Integration with Existing Security Features:
- Tie the blacklist feature into existing security measures, such as the option to disable data training and information sharing.
### Admin Controls:
- For enterprise users, provide admin-level controls to enforce model restrictions across the organization (sketched below).
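A sketch of how admin enforcement could interact with individual user toggles, assuming a hypothetical `OrgPolicy` object; whether users may re-enable an org-blocked model would be a product decision:

```typescript
// Sketch of admin enforcement: an org-level policy that members cannot relax.
// The policy object and its fields are assumptions, not an existing API.
interface OrgPolicy {
  enforcedBlacklist: string[]; // models blocked for every member
  allowUserOverrides: boolean; // whether users may re-enable blocked models
}

function isModelAllowed(modelId: string, policy: OrgPolicy, userEnabled: boolean): boolean {
  const blockedByOrg = policy.enforcedBlacklist.includes(modelId);
  if (blockedByOrg && !policy.allowUserOverrides) return false; // admin setting wins
  return userEnabled;
}

const policy: OrgPolicy = { enforcedBlacklist: ["deepseek-v3"], allowUserOverrides: false };
console.log(isModelAllowed("deepseek-v3", policy, true)); // false
```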
## Benefits:
- Enhanced IP Protection: Users can prevent potentially sensitive code or data from being processed by specific models they don't trust.
- Compliance: Helps organizations adhere to data privacy regulations and internal security policies.
- Customizable Security: Allows for fine-grained control over which AI models interact with user data.
- Trust and Transparency: Increases user confidence in the platform by providing clear control over AI model usage.
Paul
I think my last feature request was not clear enough. The current setup, where DeepSeek is enabled by default, is unacceptable from a security standpoint. I urgently need a blacklist option given the serious privacy and security risks involved.
Your claim of SOC 2 Type 2 compliance is difficult to reconcile with the multiple documented incidents involving DeepSeek in the past month related to confidentiality breaches, processing integrity issues, and privacy violations.
I strongly support implementing model blacklisting as a critical security feature. It should be prioritized as a mandatory feature rather than an optional one.