Code Review Best Practices v2.0
Applicable To: Engineering, ML, Data, and R&D Teams
Effective Date: April 2025
Version: 2.0
1. Purpose
To establish a standardized process for reviewing code that ensures:
• High code quality and maintainability
• Secure, efficient, and bug-free deployments
• Alignment with architectural principles for RAG and AI systems
• Improved team collaboration and knowledge sharing
2. Scope
This policy applies to all code repositories maintained by the
engineering teams (including backend, frontend, ML, and DevOps) that
contribute to RAG pipelines, AI model integrations, and internal
platforms.
3. Pre-Review Checklist for Authors
Before submitting a pull request (PR), the author must:
• ✅ Ensure code compiles/runs successfully and passes unit &
integration tests
• ✅ Follow naming conventions, linting rules, and code style
guides
• ✅ Include descriptive commit messages and a clear PR title
• ✅ Write or update relevant documentation (README, API docs,
usage examples)
• ✅ Add test coverage for new functionality and edge cases
• ✅ Verify that secret keys, credentials, and other sensitive data
are not hardcoded (see the sketch after this checklist)
• ✅ Tag the appropriate reviewers and stakeholders
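As a minimal sketch of the hardcoded-secrets check above, credentials can
be read from the environment at startup. The variable name
VECTOR_DB_API_KEY below is a hypothetical placeholder, not a mandated
convention:

    import os

    # Anti-pattern: a credential committed to the repository.
    # API_KEY = "sk-live-abc123"   # never do this

    # Preferred: read the secret from the environment at runtime and
    # fail loudly if it is missing.
    API_KEY = os.environ.get("VECTOR_DB_API_KEY")
    if API_KEY is None:
        raise RuntimeError("VECTOR_DB_API_KEY is not set; refusing to start")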
4. Code Reviewer Responsibilities
Each reviewer should:
• 🔍 Understand the context of the change by reading the PR
description and issue/ticket
• 🧠 Assess the logic, readability, and design decisions in the code
• 🧪 Verify the presence of automated tests and validate them if
needed
• 🛡️ Check for security flaws, error handling, and data protection
compliance, especially for GDPR/CCPA-relevant components (an
example follows this list)
• 📊 Ensure performance efficiency, especially in inference
pipelines or memory-heavy modules
• 📁 Confirm file structure, modularity, and code reuse are
optimized
• 🧩 Flag any unnecessary dependencies, tech debt, or scope
creep
• ✅ Approve only when the code is production-ready; otherwise,
request specific changes
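As an illustration of the security check above, the sketch below contrasts
an injection-prone database query with a parameterized one. The documents
table and its columns are illustrative only:

    import sqlite3

    def fetch_document_unsafe(conn: sqlite3.Connection, doc_id: str):
        # A reviewer should flag this: user input is interpolated directly
        # into the SQL string, which enables injection.
        return conn.execute(
            f"SELECT body FROM documents WHERE id = '{doc_id}'"
        ).fetchone()

    def fetch_document_safe(conn: sqlite3.Connection, doc_id: str):
        # Parameterized query: the driver binds the value safely.
        return conn.execute(
            "SELECT body FROM documents WHERE id = ?", (doc_id,)
        ).fetchone()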
5. Review Process Flow
1. PR Created
o Author assigns reviewers and labels (e.g., bugfix, feature,
refactor)
2. Reviewer Acknowledges
o Within 24 hours (on working days), reviewers acknowledge the
PR and start the review
3. Feedback & Discussion
o All review comments must be constructive and solution-
oriented
o Larger discussions (e.g., architectural debates) move to
Slack or a separate call
4. Changes Implemented
o Author addresses feedback, rebases if needed, and pushes
updates
5. Final Approval
o Two approvals (at least one from a senior dev or tech lead)
required to merge
6. Merge & Post-Merge Tasks
o Ensure CI/CD pipeline passes
o Link PR to Jira/Trello issue, update changelog if needed
6. Code Quality Guidelines
Aspect            Best Practice
Naming            Use clear, meaningful names (query_embedding vs. qe)
Functions         Single responsibility, max 50 lines
Comments          Explain why, not what – avoid redundant comments
DRY Principle     Avoid code duplication – refactor shared logic
Error Handling    Use try/except judiciously – fail gracefully with logs
AI Modules        Log model version, input/output shapes, and runtime latency
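The sketch below ties several rows together: a descriptively named,
single-purpose function that fails gracefully with logs and records the
model version, shapes, and latency called for in the AI Modules row. The
model object and its encode/version attributes are assumptions for
illustration:

    import logging
    import time

    logger = logging.getLogger(__name__)

    def embed_queries(model, queries: list[str]):
        """Embed a batch of queries; logs what the AI Modules row requires."""
        start = time.perf_counter()
        try:
            query_embedding = model.encode(queries)  # assumed encoder API
        except Exception:
            logger.exception("Embedding failed for a batch of %d queries",
                             len(queries))
            raise
        latency = time.perf_counter() - start
        logger.info(
            "model=%s inputs=%d output_shape=%s latency=%.3fs",
            getattr(model, "version", "unknown"),
            len(queries),
            getattr(query_embedding, "shape", None),
            latency,
        )
        return query_embedding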
7. RAG-Specific Code Considerations
• ✅ Ensure retriever-query interfaces are modular and testable (see
the sketch after this list)
• ✅ Validate embedding pipeline efficiency (batching, caching,
etc.)
• ✅ Include checks for knowledge drift or outdated indexed
sources
• ✅ Protect against prompt injection or insecure input handling
• ✅ Monitor latency and memory usage of combined retrieval +
generation workflows
• ✅ Adhere to data compliance standards when accessing client-
specific documents
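The following sketch shows one possible shape for a modular, testable
retriever interface together with a naive input guard. The Protocol, the
marker list, and the sanitize_query helper are illustrative assumptions,
not a prescribed design:

    from typing import Protocol

    class Retriever(Protocol):
        """Any retriever exposing this method can be swapped in and tested."""
        def retrieve(self, query: str, top_k: int = 5) -> list[str]: ...

    # Naive markers; real deployments need stronger, policy-approved filtering.
    SUSPICIOUS_MARKERS = ("ignore previous instructions", "system prompt")

    def sanitize_query(query: str, max_len: int = 2048) -> str:
        # Length-cap the input and reject obvious injection phrases.
        query = query.strip()[:max_len]
        lowered = query.lower()
        if any(marker in lowered for marker in SUSPICIOUS_MARKERS):
            raise ValueError("query rejected by prompt-injection guard")
        return query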
8. Tools & Automation
Tool                            Purpose
GitHub/GitLab                   Pull request management
Prettier/Black/ESLint           Code formatting and linting
PyTest/Jest                     Unit and integration testing
CodeCov                         Test coverage reporting
SonarQube                       Static code analysis and vulnerability scanning
Git Hooks (Husky/pre-commit)    Pre-checks before commits and pushes
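A minimal PyTest sketch against the hypothetical Retriever interface from
Section 7; FakeRetriever is a stand-in used only for testing:

    # test_retriever.py -- run with `pytest`
    class FakeRetriever:
        """Stand-in implementation for code that depends on a Retriever."""
        def retrieve(self, query: str, top_k: int = 5) -> list[str]:
            return [f"doc-{i}" for i in range(top_k)]

    def test_retrieve_respects_top_k():
        docs = FakeRetriever().retrieve("what is RAG?", top_k=3)
        assert len(docs) == 3
        assert all(isinstance(d, str) for d in docs)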
9. Review Metrics and KPIs
• Average PR turnaround time (goal: <48 hours; see the sketch after
this list)
• % of PRs merged without review (goal: <5%)
• PR rework rate (goal: <10%)
• Test coverage delta per PR
• Post-release bug rate traced to recent PRs
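As a sketch of how the turnaround KPI might be computed from exported PR
data; the created_at/merged_at field names are assumptions about the
export format, not a defined schema:

    from datetime import datetime
    from statistics import median

    def median_turnaround_hours(prs: list[dict]) -> float:
        """Median hours from PR creation to merge; the goal above is <48."""
        deltas = [
            (datetime.fromisoformat(pr["merged_at"])
             - datetime.fromisoformat(pr["created_at"])).total_seconds() / 3600
            for pr in prs
            if pr.get("merged_at")  # skip unmerged PRs
        ]
        return median(deltas)

    prs = [{"created_at": "2025-04-01T09:00:00",
            "merged_at": "2025-04-02T15:00:00"}]
    print(f"median turnaround: {median_turnaround_hours(prs):.1f}h")  # 30.0h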
10. Continuous Improvement
• Monthly review retrospectives to improve process flow
• Rotate peer reviewers to foster cross-team understanding
• Maintain a “Code Review Hall of Fame” for mentoring and team
morale
• Encourage blameless postmortems when review gaps lead to
incidents