No Cloud Dependency
Everything runs locally. No API keys, no rate limits, no privacy concerns. Works offline after the initial model download.
Comprehensive validation blocks dangerous commands before execution, pattern-matching rm -rf, fork bombs, and other destructive patterns.
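As a rough illustration of this kind of pre-execution check, the sketch below matches a candidate command against a tiny blocklist. The function name, the patterns, and the logic are illustrative assumptions, not caro's actual rules:

```shell
#!/bin/sh
# Illustrative sketch only -- not caro's real validator or pattern list.
is_dangerous() {
  case "$1" in
    *"rm -rf /"*)      return 0 ;;  # recursive delete from root
    *":(){ :|:& };:"*) return 0 ;;  # classic fork bomb
    *)                 return 1 ;;
  esac
}

if is_dangerous "rm -rf / --no-preserve-root"; then
  echo "blocked"
else
  echo "allowed"
fi
# → blocked
```

A real validator would normalize whitespace and handle aliases and flag reordering, which a plain substring match like this misses.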
Optimized for Apple Silicon with MLX. Sub-2-second inference on M1/M2/M3/M4 chips with GPU acceleration.
All inference runs locally on your machine. Your commands and data never leave your computer.
Generated commands work across all Unix systems, relying on standard utilities like find, grep, awk, and sed.
```shell
# Install via cargo
cargo install caro

# Or with Homebrew (macOS)
brew install wildcard/tap/caro
```

```shell
# Convert natural language to shell commands
caro "find all rust files modified in the last week"
# → find . -name "*.rs" -mtime -7

caro "show disk usage sorted by size"
# → du -sh * | sort -hr

caro "count lines in all python files"
# → find . -name "*.py" -exec wc -l {} + | tail -1
```

```shell
# Set your preferred backend
export CARO_BACKEND=mlx    # Apple Silicon
export CARO_BACKEND=ollama # Any platform

# Optional: customize model
export CARO_MODEL=codellama
```
Single Binary
One Rust binary under 50MB. No Python environments, no containers, no complex dependencies to manage.
Multiple Backends
Supports MLX for Apple Silicon, Ollama for local inference, and vLLM for high-performance serving.
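Backend selection presumably follows the CARO_BACKEND variable used in the setup snippet; the vllm value below is an assumption inferred from this backend list, not documented behavior:

```shell
# mlx and ollama appear in the setup snippet; vllm is assumed here
export CARO_BACKEND=vllm
```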
Developer Friendly
Intuitive CLI with colored output, confirmation prompts, and helpful error messages.