Project CodeGuard is an open-source, model-agnostic security framework that embeds secure-by-default practices into AI coding agent workflows. It provides comprehensive security rules that guide AI assistants to generate more secure code automatically.
AI coding agents are transforming software engineering, but their speed can introduce security vulnerabilities. Is your AI coding agent introducing vulnerabilities into your codebase?
Project CodeGuard addresses this by embedding security best practices directly into AI coding agent workflows. It provides agent skills and rules that work with most coding agent platforms.
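As a rough sketch of how such rules plug into an agent workflow: many coding agents pick up plain-text rule files from a project-local directory. The directory path (`.cursor/rules`), file name, and rule wording below are illustrative assumptions, not part of Project CodeGuard itself:

```shell
# Hypothetical example: install a security rule into an agent's
# project-local rules directory (path and content are illustrative).
mkdir -p .cursor/rules
cat > .cursor/rules/codeguard-secrets.md <<'EOF'
# Never hardcode secrets
Do not embed API keys, passwords, or tokens in source code.
Load credentials from environment variables or a secrets manager.
EOF
ls .cursor/rules
```

Once a file like this is present, the agent reads it alongside each prompt, steering generated code toward the stated practice without extra effort from the developer.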
Project CodeGuard has been donated to the Coalition for Secure AI (CoSAI).
🌟 New Repository: https://github.com/cosai-oasis/project-codeguard
Please visit the new repository for the latest updates and how to contribute to the project.