# Claude Integration
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview

caro (formerly `cmdai`) is a single-binary Rust CLI tool that converts natural language descriptions into safe POSIX shell commands using local LLMs. The tool prioritizes safety, performance, and developer experience, with Apple Silicon optimization via the MLX framework.
> Note: The project was renamed from `cmdai` to `caro` in December 2025. See `docs/NAMING_HISTORY.md` for details.
Core Goals:
- Single binary under 50MB (without embedded model)
- Startup time < 100ms, first inference < 2s on M1 Mac
- Safety-first approach with comprehensive command validation
- Extensible backend system (MLX, vLLM, Ollama)
- Hugging Face model caching with offline capability
## Project Structure

```
caro/
├── src/
│   ├── main.rs          # CLI entry point with clap configuration
│   ├── backends/        # Inference backend implementations
│   │   ├── mod.rs       # Backend trait system
│   │   ├── mlx.rs       # Apple Silicon MLX backend (FFI)
│   │   ├── vllm.rs      # vLLM HTTP API backend
│   │   └── ollama.rs    # Ollama local backend
│   ├── cache/           # Hugging Face model caching
│   ├── safety/          # Command validation and safety checks
│   └── config/          # Configuration management
├── tests/
│   ├── integration/     # End-to-end workflow tests
│   └── unit/            # Component-specific tests
└── .devcontainer/       # Development environment setup
```

## Architecture Overview

### Backend Trait System

All model backends implement the `ModelBackend` trait:
- Async inference with `Result<String>` responses
- Availability checking with graceful fallbacks
- Unified configuration through `BackendConfig`
- JSON-only response parsing with multiple fallback strategies
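As a rough sketch of that contract (simplified to synchronous calls; the real trait is async under tokio, and every name except `ModelBackend` and `BackendConfig` is an assumption rather than the codebase's actual API):

```rust
/// Unified configuration shared by all backends (fields are illustrative).
pub struct BackendConfig {
    pub model: String,
    pub endpoint: Option<String>, // None for in-process backends like MLX
}

pub trait ModelBackend {
    /// Cheap availability probe, used to fall back gracefully to the next backend.
    fn is_available(&self) -> bool;
    /// Inference returning the raw model response (an async `Result<String>` in the real code).
    fn generate(&self, config: &BackendConfig, prompt: &str) -> Result<String, String>;
}

/// Mock backend for tests and early development phases.
pub struct MockBackend;

impl ModelBackend for MockBackend {
    fn is_available(&self) -> bool {
        true
    }
    fn generate(&self, _config: &BackendConfig, _prompt: &str) -> Result<String, String> {
        Ok(r#"{"cmd": "ls -la"}"#.to_string())
    }
}

/// Pick the first available backend from an ordered preference list.
pub fn select_backend(backends: Vec<Box<dyn ModelBackend>>) -> Option<Box<dyn ModelBackend>> {
    backends.into_iter().find(|b| b.is_available())
}
```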
### Safety-First Design

The safety module provides:

- Pattern matching for dangerous commands (`rm -rf /`, `mkfs`, fork bombs)
- POSIX compliance validation
- Path quoting and validation
- Risk level assessment (Safe, Moderate, High, Critical)
- User confirmation workflows
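A minimal sketch of how the risk levels might drive the confirmation workflow (the variant names come from the list above; the action mapping is an illustrative assumption, not the module's actual policy):

```rust
/// Risk levels from the safety module, ordered from least to most dangerous.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub enum RiskLevel {
    Safe,
    Moderate,
    High,
    Critical,
}

impl RiskLevel {
    /// Hypothetical policy: Critical commands are refused outright,
    /// and everything above Safe asks the user first.
    pub fn action(self) -> &'static str {
        match self {
            RiskLevel::Safe => "execute",
            RiskLevel::Moderate | RiskLevel::High => "confirm",
            RiskLevel::Critical => "block",
        }
    }
}
```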
### Platform Optimization

- MLX backend uses FFI via the `cxx` crate for Apple Silicon
- Conditional compilation with feature flags
- Cross-platform cache directory management
- Shell-specific optimizations and detection
### Sync Module (Planned)

Local-first sync with Jazz.tools for multi-device command history:

- `src/sync/`: Rust sync library (identity, encryption, IPC client)
- `sync-daemon/`: Node.js companion for Jazz SDK integration
- IPC: Unix socket at `~/.config/caro/sync.sock`
- Encryption: AES-256-GCM with Argon2id key derivation from a BIP39 phrase
- Privacy: E2E encrypted, zero-knowledge relay sync
- See `specs/005-jazz-sync-integration/` for the full specification
## Development Commands

> IMPORTANT: Before running `cargo` or any Rust development command in the shell, check that the command is installed with `which` and inspect `$PATH` for the relevant bin. If it is not found, run `. "$HOME/.cargo/env"` in your shell before executing the command.
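A pre-flight check along these lines satisfies both rules (this assumes a default rustup install; the `cargo_status` variable is just a local name for illustration):

```shell
# Ensure cargo is reachable before running Rust commands; if not,
# source the rustup env script (assumes a default rustup install).
if ! command -v cargo >/dev/null 2>&1 && [ -f "$HOME/.cargo/env" ]; then
  . "$HOME/.cargo/env"
fi

# Report the result so the caller knows whether to proceed.
if command -v cargo >/dev/null 2>&1; then
  cargo_status="available"
else
  cargo_status="missing"
fi
echo "cargo is $cargo_status"
```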
### Git Workflow

When reverting changes: use `git revert <commit-hash>` or `git reset`, NOT manual file edits. Manual edits break git history and introduce inconsistencies. Always use git commands to manage history.
### Building & Testing

```shell
# Build the project
cargo build --release

# Run all tests
cargo test

# Run a specific test
cargo test test_name

# Run with logging
RUST_LOG=debug cargo run -- "list all files"

# Check code formatting
cargo fmt --check

# Run linter
cargo clippy -- -D warnings

# Security audit
cargo audit
```

### Development Environment

```shell
# Start development container
devcontainer open .

# Watch for changes during development
cargo watch -x check -x test -x run
```

## Implementation Phases
Section titled “Implementation Phases”Phase 1: Core CLI Structure
Section titled “Phase 1: Core CLI Structure”- Command-line argument parsing with clap
- Mock inference backend for initial testing
- Basic safety validation implementation
- Configuration and cache directory setup
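To make the Phase 1 surface concrete, here is a std-only stand-in for the argument parsing (the real CLI uses clap; the `--dry-run` flag and `Cli` struct are hypothetical examples, not the tool's documented interface):

```rust
/// Parsed CLI input: an optional flag plus the natural-language prompt.
pub struct Cli {
    pub dry_run: bool,
    pub prompt: String,
}

/// Hand-rolled parser for "caro [--dry-run] <natural language...>".
pub fn parse_args<I: IntoIterator<Item = String>>(args: I) -> Result<Cli, String> {
    let mut dry_run = false;
    let mut words = Vec::new();
    for arg in args {
        match arg.as_str() {
            "--dry-run" => dry_run = true,
            other if other.starts_with("--") => return Err(format!("unknown flag: {other}")),
            other => words.push(other.to_string()),
        }
    }
    if words.is_empty() {
        return Err("expected a natural-language description".to_string());
    }
    Ok(Cli { dry_run, prompt: words.join(" ") })
}
```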
### Phase 2: Safety & Validation

- Comprehensive dangerous command patterns
- POSIX compliance checking
- User confirmation workflows
- Risk assessment and color-coded output
### Phase 3: Remote Backends

- vLLM HTTP API integration
- Ollama local API support
- Error handling and retry mechanisms
- Response format standardization
### Phase 4: MLX Integration

- FFI bindings using the `cxx` crate
- Metal Performance Shaders integration
- Unified memory architecture handling
- Apple Silicon performance optimization
## Key Dependencies

Core:

- `clap` - Command-line argument parsing
- `serde` + `serde_json` - JSON serialization
- `tokio` - Async runtime
- `anyhow` - Error handling
- `reqwest` - HTTP client for remote backends

Platform-Specific:

- `cxx` - Safe C++ FFI for MLX integration
- `directories` - Cross-platform directory management
- `colored` - Terminal color output

Development:

- `tokio-test` - Async testing utilities
- `tempfile` - Temporary file creation for tests
## Safety Validation Patterns

### Dangerous Commands to Block

- Filesystem destruction: `rm -rf /`, `rm -rf ~`
- Disk operations: `mkfs`, `dd if=/dev/zero`
- Fork bombs: `:(){ :|:& };:`
- System path modification: operations on `/bin`, `/usr`, `/etc`
- Privilege escalation: `sudo su`, `chmod 777 /`
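A substring-based sketch of that blocklist (the real safety module likely does more robust parsing than plain `contains`; the patterns simply mirror the list above):

```rust
/// (pattern, reason) pairs mirroring the documented blocklist.
const DANGEROUS_PATTERNS: &[(&str, &str)] = &[
    ("rm -rf /", "filesystem destruction"),
    ("rm -rf ~", "filesystem destruction"),
    ("mkfs", "disk operation"),
    ("dd if=/dev/zero", "disk operation"),
    (":(){ :|:& };:", "fork bomb"),
    ("sudo su", "privilege escalation"),
    ("chmod 777 /", "privilege escalation"),
];

/// Return the reason a command is blocked, or None if no pattern matches.
pub fn blocked_reason(cmd: &str) -> Option<&'static str> {
    DANGEROUS_PATTERNS
        .iter()
        .find(|(pat, _)| cmd.contains(*pat))
        .map(|(_, reason)| *reason)
}
```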
### POSIX Compliance Requirements

- Use standard utilities (`ls`, `find`, `grep`, `awk`, `sed`, `sort`)
- Proper path quoting for spaces and special characters
- Avoid bash-specific features for maximum portability
- Validate command syntax before execution
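The standard POSIX technique for path quoting is to wrap the path in single quotes and rewrite each embedded single quote as `'\''`; a minimal sketch (the function name is an assumption):

```rust
/// POSIX-safe single-quoting: handles spaces and special characters,
/// and escapes embedded single quotes as '\''.
pub fn shell_quote(path: &str) -> String {
    format!("'{}'", path.replace('\'', r"'\''"))
}
```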
## System Prompt Template

The tool uses a strict system prompt for JSON-only responses:
- Single command generation with safety constraints
- POSIX-compliant utilities only
- Proper file path quoting
- Destructive operation avoidance
- Clear JSON format: `{"cmd": "command_here"}`
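A std-only sketch of the multi-strategy fallback parsing mentioned above (the real implementation presumably uses serde_json; this version only illustrates the strategy chain: accept a bare JSON object, recover one embedded in surrounding prose, otherwise return None so the caller can retry or report an error):

```rust
/// Extract the value of the "cmd" key from a model reply, tolerating
/// extra prose around the JSON object. Returns None if nothing parses.
pub fn extract_cmd(response: &str) -> Option<String> {
    let text = response.trim();
    // Strategies 1 and 2: find the "cmd" key anywhere in the reply.
    let key = "\"cmd\"";
    let key_pos = text.find(key)?;
    let after = &text[key_pos + key.len()..];
    let colon = after.find(':')?;
    let rest = after[colon + 1..].trim_start();
    let mut chars = rest.chars();
    if chars.next() != Some('"') {
        return None;
    }
    // Read until the closing quote, honoring backslash-escaped quotes.
    let mut cmd = String::new();
    let mut escaped = false;
    for c in chars {
        match (escaped, c) {
            (true, _) => {
                cmd.push(c);
                escaped = false;
            }
            (false, '\\') => escaped = true,
            (false, '"') => return Some(cmd),
            (false, _) => cmd.push(c),
        }
    }
    None // Strategy 3: unparseable reply
}
```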
## Performance Requirements

### Startup Optimization

- Lazy loading of all dependencies
- Efficient JSON parsing with fallback strategies
- Minimal memory allocations during initialization
- Cached model loading when available
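Lazy initialization with std's `OnceLock` is one way to meet the startup goals: expensive setup (config parsing, cache probing) runs only on first use. A sketch, where the `Config` struct and its field are placeholders:

```rust
use std::sync::OnceLock;

struct Config {
    model: String,
}

/// Global config, initialized lazily on first access and reused thereafter.
fn config() -> &'static Config {
    static CONFIG: OnceLock<Config> = OnceLock::new();
    CONFIG.get_or_init(|| {
        // Imagine reading a config file here; this closure runs at most once.
        Config { model: "default-model".to_string() }
    })
}
```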
### Inference Performance

- MLX backend: < 2s on Apple Silicon
- Remote backends: < 5s with network latency
- Streaming support where beneficial
- Memory-conscious resource management
## Testing Strategy

### Unit Tests

- Safety pattern validation
- Command parsing and validation
- Configuration management
- Cache directory operations
### Integration Tests

- End-to-end command generation workflows
- Backend communication and error handling
- Cross-platform compatibility
- Performance benchmarks
### Property Tests

- Safety validation with random inputs
- POSIX compliance checking
- Error recovery mechanisms
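Without committing to a specific crate, a property-style check can be sketched with a tiny deterministic generator (a stand-in for something like proptest; `is_safe` here is a trivial placeholder for the real validator):

```rust
/// Placeholder for the real safety validator.
fn is_safe(cmd: &str) -> bool {
    !cmd.contains("rm -rf /")
}

/// Tiny deterministic LCG so the "random" inputs are reproducible.
fn lcg(seed: &mut u64) -> u64 {
    *seed = seed.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
    *seed
}

/// Property: the validator must return a verdict (and never panic)
/// for arbitrary printable-ASCII input.
pub fn run_property_check(cases: usize) -> usize {
    let mut seed: u64 = 0xC0FFEE;
    let mut checked = 0;
    for _ in 0..cases {
        let len = (lcg(&mut seed) % 32) as usize;
        let cmd: String = (0..len)
            .map(|_| char::from(b' ' + (lcg(&mut seed) % 94) as u8)) // printable ASCII
            .collect();
        let _verdict = is_safe(&cmd); // must never panic on arbitrary input
        checked += 1;
    }
    checked
}
```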
## Specialized Agent Usage

When working on specific components:

- Complex architecture changes: use the `rust-cli-architect` agent
- LLM integration & backends: use the `llm-integration-expert` agent
- MLX/Apple Silicon features: use the `macos-unix-systems-expert` agent
- Test-driven development: use the `tdd-rust-engineer` agent
- Documentation updates: use the `technical-writer` agent
## Quality Standards

- All public APIs must have documentation
- Comprehensive error handling with helpful messages
- No panics in production code - use `Result` types
- Memory safety without unnecessary allocations
- Security-first approach for system-level operations
- POSIX compliance for maximum portability
## Spec-Driven Development Workflows

This project uses dual spec-driven workflows optimized for different feature sizes:

### Spec-Kitty Workflow (Rapid Development)

Use for: Small/medium features (< 2 weeks), bug fixes, enhancements, parallel development

Location: `kitty-specs/` (git worktrees)

Commands: `/spec-kitty.*` slash commands in `.claude/commands/`

Workflow:

1. `bin/sk-new-feature "description"` - Creates isolated worktree
2. `/spec-kitty.specify` - Create spec.md
3. `/spec-kitty.plan` - Create plan.md
4. `/spec-kitty.tasks` - Generate work packages
5. `/spec-kitty.implement` - Execute tasks
6. `/spec-kitty.accept` - Run acceptance checks
7. `/spec-kitty.merge` - Merge and clean up worktree
Benefits:
- ✅ Parallel development (multiple features simultaneously)
- ✅ No branch switching overhead
- ✅ Real-time dashboard (http://127.0.0.1:9237)
- ✅ Automated task management
- ✅ Perfect for rapid iteration with tools like Charm.land Crush
Dashboard: `bin/sk-dashboard` to monitor all features
### Spec-Kit Workflow (Large Features)

Use for: Large features (> 2 weeks), major architecture changes, extensive research

Location: `specs/` (traditional directories)

Commands: Custom slash commands in `.codex/prompts/`
Workflow:
- Manual directory creation in `specs/NNN-feature-name/`
- Create spec.md, plan.md, tasks.md manually
- Use `.specify/templates/` for structure
- Follow `.specify/memory/constitution.md` principles
- Standard git workflow on feature branches
Benefits:
- ✅ Better for complex, long-running features
- ✅ Explicit constitution-based governance
- ✅ Flexible structure for research-heavy work
### Decision Matrix: Which Workflow?

| Criteria | Spec-Kitty | Spec-Kit |
|---|---|---|
| Feature size | < 2 weeks | > 2 weeks |
| Complexity | Low-Medium | High |
| Parallel dev | Multiple features at once | One at a time |
| Research phase | Light research | Extensive research |
| Architecture | Incremental changes | Major refactoring |
| Examples | Add caching, Fix bug, New API endpoint | MLX backend, Safety system, Multi-backend |
### When to Use Spec-Kitty

✅ DO use spec-kitty when:
- Adding a new feature that takes < 2 weeks
- Fixing a bug that requires changes across multiple files
- Building an enhancement to existing functionality
- Working on multiple features in parallel (e.g., with Charm.land Crush)
- You want visual tracking via the dashboard
- The feature has clear, well-defined scope
Example scenarios:
- “Add Redis caching with TTL support”
- “Fix memory leak in MLX initialization”
- “Add Prometheus metrics endpoint”
- “Implement command history feature”
- “Add JSON output format option”
### When to Use Spec-Kit

✅ DO use spec-kit when:
- Building a major new system (> 2 weeks)
- Extensive research or prototyping needed
- Architectural decisions require deep investigation
- Multiple competing approaches need evaluation
- Long-running feature with many unknowns
Example scenarios:
- “Implement complete MLX backend with C++ FFI”
- “Design and build multi-backend inference system”
- “Create comprehensive safety validation framework”
- “Research and implement model quantization pipeline”
- “Architect distributed caching system”
### Both Workflows Coexist

The project supports both workflows simultaneously:

- `kitty-specs/` for rapid, parallel development
- `specs/` for large, complex features

Example: You can work on a large MLX backend feature in `specs/004-implement-ollama-and/` while simultaneously using spec-kitty for quick bug fixes in `kitty-specs/001-fix-memory-leak/`.
### Integration Points

Shared resources:

- Both follow the same constitution principles (`.specify/memory/constitution.md`)
- Both use the same testing standards
- Both require security-first approach
- Both commit to the same git repository
Different tools:
- Spec-kitty: Automated task management, worktrees, dashboard
- Spec-kit: Manual planning, traditional branches, constitution-driven
### Quick Reference

```shell
# Spec-Kitty workflow
bin/sk-new-feature "Add caching"  # Create feature
cd kitty-specs/001-add-caching/   # Enter worktree
/spec-kitty.specify               # Generate spec
/spec-kitty.implement             # Execute tasks
bin/sk-dashboard                  # Monitor progress
```

```shell
# Spec-Kit workflow
mkdir -p specs/005-new-feature/   # Create directory
# Manually create spec.md, plan.md
# Use .specify/templates/ as reference
# Follow constitution-based development
```

See `docs/SPEC_KITTY_GUIDE.md` for comprehensive spec-kitty documentation.
## Project Management Workflow

This project uses a structured project management system with GitHub Projects, milestones, and roadmap tracking. The `/caro.roadmap` skill helps agents select work aligned with project priorities.
### Quick Start: Using `/caro.roadmap`

Before starting any work, use `/caro.roadmap` to select aligned tasks:

```shell
# 1. Check roadmap status and get recommendation
/caro.roadmap               # Shows milestones, progress, blockers, and suggested next work

# 2. Get next recommended work (auto-selected by priority algorithm)
/caro.roadmap next          # Returns highest-scored issue with breakdown

# 3. Start work on recommended issue (auto-routes to spec-kitty or spec-kit)
/caro.roadmap start #123    # Creates worktree or spec directory, updates issue status

# 4. When done, mark complete
/caro.roadmap complete #123 # Closes issue, updates roadmap, suggests next work
```

Typical workflow:
1. Run `/caro.roadmap` → see status
2. Run `/caro.roadmap next` → get top recommendation
3. Run `/caro.roadmap start #XXX` → begin implementation
4. Implement the feature
5. Run `/caro.roadmap complete #XXX` → mark done, get next task
Optional - Set your expertise for better work matching:
```shell
/caro.roadmap profile rust    # If working on Rust/CLI
/caro.roadmap profile docs    # If writing documentation
/caro.roadmap profile devops  # If doing CI/CD/releases
```

### Roadmap Structure

`ROADMAP.md` defines three release milestones:
- v1.1.0 (Feb 15, 2026): Core improvements - production-ready functionality
- v1.2.0 (Mar 31, 2026): Website & docs launch - public marketing
- v2.0.0 (Jun 30, 2026): Advanced features - innovation and capabilities
GitHub Projects:
- Caro Product Development - Technical work (36 items)
- Caro Marketing & DevRel - Content work (29 items)
Each project uses custom fields:
- Status: Todo, In Progress, Done
- Priority: Critical, High, Medium, Low, Backlog
- Type: Feature, Bug, Infrastructure, Research, Documentation, Marketing
- Area: Core CLI, Safety, Backends, DevOps, DX, Website
### The `/caro.roadmap` Skill

Use `/caro.roadmap` to intelligently select and manage work:

```shell
/caro.roadmap               # Show roadmap status overview
/caro.roadmap next          # Get next recommended work item
/caro.roadmap select        # Interactive work selection
/caro.roadmap start #123    # Begin work on issue (routes to spec-kitty or spec-kit)
/caro.roadmap complete #123 # Mark issue as done
/caro.roadmap blocked       # List all release blockers
/caro.roadmap profile       # Show/set agent expertise
```

### Work Selection Algorithm

The skill uses a weighted scoring system to recommend work:
- Blocker Check (+1000): Issues labeled `release-blocker` take absolute priority
- Milestone Priority (+100-300): Earlier milestones ranked higher (v1.1.0 > v1.2.0 > v2.0.0)
- Priority Level (+10-50): Critical > High > Medium > Low
- Area Matching (+25): Matches agent expertise to issue area
- Status Filter: Only suggests “Todo” items, skips “blocked” or assigned items
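The scoring above can be sketched directly (the weights are the documented ones, but the `Issue` struct, its field names, and the numeric encodings of milestone and priority are assumptions for illustration):

```rust
/// Hypothetical issue shape for scoring; field names are assumptions.
pub struct Issue {
    pub release_blocker: bool,
    pub milestone: u8,      // 1 = v1.1.0, 2 = v1.2.0, 3 = v2.0.0
    pub priority: u8,       // 5 = Critical .. 1 = Low/Backlog
    pub area_matches: bool, // issue area matches the agent profile
}

/// Weighted score mirroring the documented algorithm.
pub fn score(issue: &Issue) -> u32 {
    let mut s = 0;
    if issue.release_blocker {
        s += 1000; // blockers take absolute priority
    }
    s += match issue.milestone {
        1 => 300, // earlier milestones rank higher
        2 => 200,
        _ => 100,
    };
    s += issue.priority as u32 * 10; // Critical = 50 .. Low = 10
    if issue.area_matches {
        s += 25; // expertise match bonus
    }
    s
}
```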
### Agent Expertise Profiles

Configure your expertise in `.claude/agent-profiles.yaml` to get better work matches.

Available profiles:

- `rust` - Rust CLI Expert (areas: Core CLI, Backends, Safety)
- `docs` - Documentation Writer (areas: DX, Website)
- `devops` - DevOps Engineer (areas: DevOps, Infrastructure)
- `web` - Web Developer (areas: Website, DX)
- `marketing` - Marketing Specialist (areas: Website, DX)
- `security` - Security Engineer (areas: Safety, Core CLI)
- `ai` - AI/ML Engineer (areas: Backends, Core CLI)

Switch profiles with: `/caro.roadmap profile <name>`
### Workflow Routing: Spec-Kitty vs Spec-Kit

When starting work, the skill automatically routes to the appropriate workflow:
| Criteria | Spec-Kitty | Spec-Kit |
|---|---|---|
| Scope | < 2 weeks (small/medium) | > 2 weeks (large) |
| Complexity | Low-Medium | High |
| Labels | quick-fix, enhancement, bug | architecture, research, major-refactor |
| Workflow | Worktree-based rapid iteration | Constitution-based manual process |
Spec-Kitty routing (automatic):

```shell
/caro.roadmap start #123
# → Creates .worktrees/NNN-feature/
```

Spec-Kit routing (manual):

```shell
/caro.roadmap start #456
# → Creates specs/NNN-feature/ directory
# → Follow .specify/memory/constitution.md
```

### Integration with Development Workflow
Section titled “Integration with Development Workflow”-
Before starting work: Check roadmap and select aligned issue
Terminal window /caro.roadmap # View current status/caro.roadmap next # Get recommended work -
Start implementation: Route to appropriate workflow
Terminal window /caro.roadmap start #123 # Auto-routes to spec-kitty or spec-kit -
Complete work: Update status and get next task
Terminal window /caro.roadmap complete #123 -
Check blockers: Before releases, verify no blockers
Terminal window /caro.roadmap blocked
This ensures all work aligns with project milestones, priorities, and strategic themes documented in ROADMAP.md.
## Release Management Workflow

This project enforces a security-first release workflow using Claude Code slash commands. All releases MUST go through feature branches and pull requests - direct commits to main for release-related changes are prohibited.
### Release Skills

The release workflow is automated through 6 Claude skills in `.claude/commands/`:

1. `/caro.release.prepare` - Start a new release
   - Creates `release/vX.Y.Z` branch from main
   - Runs pre-flight checks (CI status, release blockers)
   - Lists pending changes since last tag
   - Prerequisite: Must be on `main` branch with a clean working directory

2. `/caro.release.security` - Security audit and fixes
   - Runs `cargo audit` and categorizes vulnerabilities
   - Guides through fixing critical/unsound issues
   - Updates dependencies to maintained versions
   - Runs tests and creates a detailed commit
   - Prerequisite: Must be on a `release/*` or `hotfix/*` branch

3. `/caro.release.version` - Version bump and changelog
   - Updates version in `Cargo.toml`
   - Updates `CHANGELOG.md` (moves [Unreleased] to [X.Y.Z])
   - Runs `cargo check` for verification
   - Creates version bump commit
   - Prerequisite: Must be on a `release/*` or `hotfix/*` branch

4. `/caro.release.publish` - Create PR, merge, and tag
   - Creates pull request with release checklist
   - Monitors CI checks and waits for approval
   - Merges PR to main
   - Creates and pushes annotated git tag
   - Monitors automated publish workflows
   - Prerequisite: Must be on a `release/*` or `hotfix/*` branch

5. `/caro.release.verify` - Post-release verification
   - Installs from crates.io and verifies version
   - Runs functionality tests
   - Checks GitHub release creation
   - Verifies documentation links
   - Prerequisite: None (can run from any branch)

6. `/caro.release.hotfix` - Emergency security patches
   - Creates hotfix branch from latest tag
   - Fast-tracks critical security fixes
   - Publishes security advisories
   - Use ONLY for: critical vulnerabilities, data loss, crashes
   - Prerequisite: None (emergency mode)
### Standard Release Flow

Execute commands in this order:

```shell
# 1. Start release (creates release/vX.Y.Z branch)
/caro.release.prepare

# 2. Run security audit and fix vulnerabilities
/caro.release.security

# 3. Bump version and update changelog
/caro.release.version

# 4. Create PR, merge, tag, and publish
/caro.release.publish

# 5. Verify published release
/caro.release.verify
```

### Emergency Hotfix Flow
For critical security vulnerabilities only:

```shell
# Creates hotfix branch, applies fix, and fast-tracks release
/caro.release.hotfix
```

### Branch Enforcement
Each command enforces branch requirements:

- prepare: Must start on `main`
- security, version, publish: Must be on `release/*` or `hotfix/*`
- verify: Can run from any branch
- hotfix: Can start from any branch (emergency mode)
Commands will REFUSE to proceed if branch requirements aren’t met, preventing accidental direct commits to main.
### Design Principles

- Security-first: Mandatory security audits before every release
- Consistency: Same process every time, no missed steps
- Transparency: All actions documented in commits
- Enforcement: Branch protection enforced by commands
- Automation: Reduces manual errors
See `docs/RELEASE_PROCESS.md` for complete release procedures and security requirements.
## Session Continuity (Continuous-Claude)

This project integrates Continuous-Claude for session state preservation across context clears. Instead of relying on lossy compaction (summarizing conversations repeatedly), we use a "clear, don't compact" philosophy.
### The Problem

When Claude Code runs low on context, it compacts (summarizes) conversations. Multiple compactions create "a summary of a summary of a summary," degrading quality and eventually producing hallucinations.
### The Solution

Save state to a ledger, clear context cleanly, and resume with full signal integrity.
### Directory Structure

```
thoughts/
├── ledgers/                    # In-session state files (survive /clear)
│   └── CONTINUITY_CLAUDE-*.md  # Active session ledgers
└── shared/
    ├── handoffs/               # Cross-session transfer documents
    ├── plans/                  # Implementation plans
    └── research/               # Research documents
```

### Core Skills

Continuity Ledger (`/continuity_ledger`)
- Creates/updates ledgers for state preservation within a session
- Use before running `/clear`
- Use when context usage approaches 70%+
- Ledgers survive `/clear` and reload automatically on resume
Create Handoff (`/create_handoff`)
- Creates end-of-session transfer documents
- Includes task status, learnings, artifacts, and next steps
- Perfect for handing off work to another session
Resume Handoff (`/resume_handoff`)
- Resumes work from a handoff document
- Validates current state against handoff
- Creates todo list from action items
Onboard (`/onboard`)
- Analyzes brownfield codebases
- Creates initial continuity ledger
- Use when first working in an existing project
### Natural Language Triggers

The system responds to conversational cues:
| Phrase | Action |
|---|---|
| "save state", "update ledger" | Updates continuity ledger |
| "done for today", "create handoff" | Creates handoff document |
| "resume work", "continue from handoff" | Loads and continues |
| "onboard", "analyze this project" | Runs codebase analysis |
### When to Use Continuity

Use ledgers when:
- Context usage approaching 70%+
- Multi-day implementations
- Complex refactors you pick up/put down
- Any session expected to hit 85%+ context
Use handoffs when:
- Ending a work session
- Transferring work to another session/person
- Need detailed context for future work
Don’t use when:
- Quick tasks (< 30 min)
- Simple bug fixes
- Single-file changes
### Quick Reference

```shell
# Save state before clearing context
/continuity_ledger

# Clear context (ledger reloads automatically)
/clear

# Create end-of-session handoff
/create_handoff

# Resume from a handoff
/resume_handoff thoughts/shared/handoffs/feature-name/2025-01-15_14-30-00_description.md

# Onboard to a new codebase
/onboard
```

### Comparison with Other Tools
| Tool | Scope | Fidelity |
|---|---|---|
| CLAUDE.md | Project | Always fresh, stable patterns |
| TodoWrite | Turn | Survives compaction, but understanding degrades |
| CONTINUITY_CLAUDE-*.md | Session | External file - never compressed, full fidelity |
| Handoffs | Cross-session | External file - detailed context for new session |
See `.claude/skills/` for detailed skill documentation.
## Multi-Agent Development Process

This project follows spec-driven development with coordinated multi-agent teams:
- Specification phase with clear requirements
- Architecture and design review
- Phased implementation with safety validation
- Quality assurance and documentation
Each phase includes specific agent coordination for optimal development flow and maintains alignment with project constitution and safety standards.
## PRD-First Feature Development

Rule: All new features with cultural, regional, or significant user-facing impact MUST follow PRD-first development.
### When to Create a PRD

Create a PRD before implementation when:
- Adding holiday themes or cultural features
- Building features that affect users from specific regions/cultures
- Implementing accessibility-sensitive features
- Adding features with localization requirements
- Creating features that require cultural research or sensitivity review
### PRD Workflow

1. Create PRD in the appropriate directory:
   - Holiday themes: `docs/prds/holidays/`
   - Localization: `docs/prds/i18n/`
   - Accessibility: `docs/prds/a11y/`
   - General features: `docs/prds/features/`

2. PRD Approval: Get stakeholder review before implementation

3. Route to Spec Workflow:
   - Small/medium features (< 2 weeks): Use Spec-Kitty (`/caro.feature`)
   - Large features (> 2 weeks): Use Spec-Kit (`specs/` directory)

4. Implementation: Follow the chosen spec workflow

5. Cultural Review (if applicable): Verify cultural accuracy before launch
### PRD Template Location

See `website/GLOBAL_HOLIDAY_THEMES_PLAN.md` for the holiday theme PRD template.
### Example: Adding a New Holiday Theme

```shell
# 1. Create PRD
mkdir -p docs/prds/holidays
# Write PRD based on template in GLOBAL_HOLIDAY_THEMES_PLAN.md

# 2. After PRD approval, start implementation
/caro.feature   # For spec-kitty workflow

# 3. Follow cultural sensitivity guidelines
# 4. Get cultural review before merge
```

This ensures cultural sensitivity, user experience quality, and proper documentation for all culturally significant features.