AI-Powered Features
Skylos uses LiteLLM as its LLM gateway, giving you one interface across cloud and local providers (OpenAI-compatible APIs). You can use OpenAI, Anthropic, and local runtimes like Ollama / LM Studio / vLLM by changing the model name and (optionally) the base URL—without changing Skylos code.
Architecture Overview
Skylos uses a hybrid analysis approach that combines the strengths of both static analysis and LLMs:
┌─────────────────────┐     ┌─────────────────────┐
│   Static Analysis   │     │    LLM Analysis     │
│   (Deterministic)   │     │   (Context-Aware)   │
│                     │     │                     │
│ • Fast & Free       │     │ • Logic bugs        │
│ • Pattern matching  │     │ • Understands       │
│ • AST-based         │     │   intent            │
│ • No hallucinations │     │ • Explains issues   │
└──────────┬──────────┘     └──────────┬──────────┘
           │                           │
           └─────────────┬─────────────┘
                         ▼
              ┌─────────────────────┐
              │  Merge & Classify   │
              │                     │
              │ Both found → HIGH   │
              │ Static only → MED   │
              │ LLM only → REVIEW   │
              └─────────────────────┘
This approach catches more issues than either method alone while maintaining high confidence through cross-validation.
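As a rough illustration of the merge step, here is a minimal sketch of how findings from the two engines could be cross-validated into the confidence levels shown in the diagram. The data shapes and the merge_findings function are hypothetical, not Skylos internals.

# Hypothetical sketch of the merge-and-classify step, not Skylos's actual code.
# A finding is matched here by (file, line, rule) for simplicity.
def merge_findings(static_findings, llm_findings):
    static_keys = {(f["file"], f["line"], f["rule"]) for f in static_findings}
    llm_keys = {(f["file"], f["line"], f["rule"]) for f in llm_findings}

    merged = []
    for f in static_findings:
        key = (f["file"], f["line"], f["rule"])
        # Both engines agree -> HIGH confidence; static only -> MEDIUM
        merged.append({**f, "confidence": "HIGH" if key in llm_keys else "MEDIUM"})
    for f in llm_findings:
        if (f["file"], f["line"], f["rule"]) not in static_keys:
            # LLM-only findings are routed to human review
            merged.append({**f, "confidence": "REVIEW"})
    return merged

In practice the matching is likely fuzzier (LLM-reported line numbers can drift), but the cross-validation idea is the same.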
Why Hybrid Analysis (And Why LLMs Alone Are Not Enough)
LLMs are powerful at understanding intent — but they are not reliable static analyzers.
Static analysis is deterministic and rule-based. LLMs are probabilistic and can behave differently across runs. That means an LLM-only scanner can miss real issues or invent issues that don’t exist.
What LLMs are bad at (and why Skylos doesn’t rely on them alone)
LLMs struggle with:
- Determinism: the same file may produce different results across runs, temperatures, or providers
- Completeness: token limits force truncation → the model may miss code paths or entire files
- Precision: models sometimes report issues that sound plausible but are not actually present
- Exactness: line numbers, symbol names, call graphs, and imports can be wrong
- Large codebases: cross-file reasoning is limited by context size and missing definitions
- Ground truth guarantees: an LLM cannot prove reachability, data flow, or exploitability
Because of these limitations, Skylos uses static analysis as the truth layer and uses LLMs to provide contextual reasoning and remediation suggestions, not as the source of truth.
Best practice: Treat LLM findings as a review layer, not a final verdict.
Setup
API Key Configuration
Skylos resolves API keys per provider:
- skylos key (stored in system keyring)
- Environment variables (OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.)
- Interactive prompt (only if allowed and no key is found)
# Store a key in the system keyring
skylos key
# OpenAI
export OPENAI_API_KEY="sk-..."
# Anthropic
export ANTHROPIC_API_KEY="sk-ant-..."
Keys entered interactively are saved to your system keyring for future use.
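Conceptually, the resolution order above works like the sketch below. The keyring service name ("skylos") and the use of the standard keyring and getpass modules are illustrative assumptions, not a description of Skylos's internals.

import os
import getpass
import keyring  # illustrates system-keyring lookup; actual storage keys are assumptions

def resolve_api_key(provider: str, allow_prompt: bool = True) -> str | None:
    """Illustrative lookup order: system keyring, then environment, then prompt."""
    env_vars = {"openai": "OPENAI_API_KEY", "anthropic": "ANTHROPIC_API_KEY"}

    key = keyring.get_password("skylos", provider)      # 1. stored via skylos key
    if key:
        return key

    key = os.environ.get(env_vars.get(provider, ""))    # 2. environment variable
    if key:
        return key

    if allow_prompt:                                     # 3. interactive prompt
        key = getpass.getpass(f"Enter API key for {provider}: ")
        keyring.set_password("skylos", provider, key)    # saved for future runs
        return key
    return None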
Local LLM Configuration (Ollama / LM Studio / vLLM)
For local LLMs, an API key is usually not required (depends on how your local server is configured). Set the base URL:
# Ollama
export SKYLOS_LLM_BASE_URL="http://localhost:11434/v1"
# LM Studio
export SKYLOS_LLM_BASE_URL="http://localhost:1234/v1"
# vLLM
export SKYLOS_LLM_BASE_URL="http://localhost:8000/v1"
Or pass it directly via CLI:
skylos agent analyze . --base-url http://localhost:11434/v1 --model qwen2.5-coder:7b
Provider Selection
Skylos automatically detects the provider from the model name, or you can force it:
# Auto-detect (default)
skylos agent analyze . --model gpt-4.1
skylos agent analyze . --model claude-sonnet-4-20250514
# Force provider explicitly
skylos agent analyze . --provider openai --model my-custom-model
skylos agent analyze . --provider anthropic --model my-custom-model
If you force a provider, make sure the model name is valid for that provider/base URL.
Environment variable alternative:
export SKYLOS_LLM_PROVIDER=openai
export SKYLOS_LLM_BASE_URL=http://localhost:11434/v1
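Auto-detection is typically just a mapping from model-name prefixes to providers. The sketch below shows the general idea; Skylos's actual detection rules may be broader.

def detect_provider(model: str, forced: str | None = None) -> str:
    """Guess the provider from the model name unless one is forced (illustrative only)."""
    if forced:
        return forced                          # --provider / SKYLOS_LLM_PROVIDER wins
    if model.startswith(("gpt-", "o1", "o3")):
        return "openai"
    if model.startswith("claude-"):
        return "anthropic"
    # Local model names (e.g. qwen2.5-coder:7b) are served through an
    # OpenAI-compatible endpoint, so openai is a reasonable default.
    return "openai"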
Commands
skylos agent analyze
Run hybrid AI-powered analysis with full project context:
skylos agent analyze ./src
How it works:
- Runs static analysis first to build project context (defs_map, sketched below)
- Passes the context to LLM agents for deeper analysis
- Merges findings and assigns confidence scores
- Detects issues static analysis can't: logic bugs, hallucinated function calls, business logic flaws
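The exact structure of defs_map isn't documented here, but conceptually it is a per-file index of what the project defines, which is what lets the LLM notice calls to symbols that don't exist. A hypothetical shape (field names are assumptions):

# Hypothetical shape of the static-analysis context passed to the LLM agents.
defs_map = {
    "src/api.py": {
        "functions": ["create_user", "get_user", "delete_user"],
        "classes": ["UserView"],
        "imports": ["sqlalchemy", "utils.helpers"],
    },
    "src/utils/helpers.py": {
        "functions": ["hash_password", "validate_email"],
        "classes": [],
        "imports": ["hashlib", "re"],
    },
}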
Options:
| Flag | Description |
|---|---|
| --model | Model to use (default: gpt-4.1) |
| --provider | Force provider: openai or anthropic |
| --base-url | Custom endpoint for local LLMs |
| --format | Output format: table, tree, json, sarif |
| --output, -o | Write output to file |
| --min-confidence | Filter by confidence: high, medium, low |
| --fix | Generate fix proposals for findings |
| --apply | Apply approved fixes to files |
| --yes | Auto-approve prompts (use with --apply) |
Examples:
# Basic analysis
skylos agent analyze ./src
# With local Ollama
skylos agent analyze ./src \
--provider openai \
--base-url http://localhost:11434/v1 \
--model codellama:13b
# Generate and apply fixes
skylos agent analyze ./src --fix --apply
# Output as SARIF for CI integration
skylos agent analyze ./src --format sarif --output results.sarif
skylos agent security-audit
Run a focused audit to catch security vulnerabilities:
skylos agent security-audit ./src
How it works:
- Collects Python files under the target path
- If --interactive is set (and inquirer is installed), shows a file picker
- Estimates API cost before proceeding
- Runs comprehensive analysis on selected files
- Reports findings with explanations and suggestions
Options:
| Flag | Description |
|---|---|
| --model | Model to use (default: gpt-4.1) |
| --provider | Force provider: openai or anthropic |
| --base-url | Custom endpoint for local LLMs |
| --format | Output format: table, tree, json, sarif |
| --output, -o | Write output to file |
| --interactive, -i | Force interactive file selection |
Interactive file selection:
? Select files to audit (Space to select)
❯ ◉ [CHANGED] api/views.py (12.3 KB)
◯ models.py (8.1 KB)
◉ [CHANGED] utils/helpers.py (4.2 KB)
◯ config.py (1.5 KB)
Cost estimation:
Audit: 3 files, ~12,500 tokens, ~$0.0234
? Proceed? (Y/n)
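The estimate above can be reproduced in spirit with a simple heuristic: count characters, divide by roughly four to approximate tokens, and multiply by the provider's per-token price. The ratio and prices in this sketch are illustrative assumptions, not Skylos's actual pricing table.

from pathlib import Path

# Assumed USD prices per million input tokens, for illustration only.
PRICE_PER_MILLION_INPUT_TOKENS = {"gpt-4.1": 2.00, "claude-sonnet-4-20250514": 3.00}

def estimate_cost(files: list[str], model: str) -> tuple[int, float]:
    total_chars = sum(len(Path(f).read_text(encoding="utf-8")) for f in files)
    tokens = total_chars // 4                    # ~4 characters per token heuristic
    price = PRICE_PER_MILLION_INPUT_TOKENS.get(model, 2.00)
    return tokens, tokens / 1_000_000 * price

files = ["api/views.py", "utils/helpers.py"]
tokens, cost = estimate_cost(files, "gpt-4.1")
print(f"Audit: {len(files)} files, ~{tokens:,} tokens, ~${cost:.4f}")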
skylos agent fix
Generate a fix for a specific issue:
skylos agent fix ./src/api.py --line 45 --message "SQL injection vulnerability"
Options:
| Flag | Description |
|---|---|
| --line, -l | Line number of the issue (required) |
| --message, -m | Description of the issue (required) |
| --model | Model to use |
| --provider | Force provider |
| --base-url | Custom endpoint |
skylos agent review
Review only git-changed files:
skylos agent review
This is a convenience command that:
- Finds files changed in git (git diff --name-only HEAD, sketched below)
- Runs analysis only on those files
- Perfect for pre-commit or PR review workflows
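The file-discovery step is essentially a thin wrapper around git. A minimal sketch (the helper name is hypothetical, and the real implementation may treat staged and unstaged changes differently):

import subprocess
from pathlib import Path

def changed_python_files(repo_root: str = ".") -> list[Path]:
    """Return Python files changed relative to HEAD (git diff --name-only HEAD)."""
    result = subprocess.run(
        ["git", "diff", "--name-only", "HEAD"],
        cwd=repo_root, capture_output=True, text=True, check=True,
    )
    paths = [Path(repo_root) / line for line in result.stdout.splitlines() if line.strip()]
    return [p for p in paths if p.suffix == ".py" and p.exists()]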
Legacy Commands
The following flags still work on the main command for quick, single-file operations:
--fix flag
skylos . --danger --quality --fix
For best results, use skylos agent analyze and skylos agent security-audit instead. They provide full project context to the LLM, enabling detection of cross-file issues and hallucinated function calls.
What Gets Detected
Static Analysis Finds
- Unused functions, imports, variables, classes
- Security patterns (SQL injection, command injection, XSS)
- Code quality issues (complexity, nesting depth)
- Hardcoded secrets
LLM Analysis Adds
- Logic bugs: Off-by-one errors, incorrect conditions, missing edge cases
- Hallucinations: Calls to functions that don't exist in your codebase (see the example after this list)
- Business logic flaws: Authentication bypasses, broken access control
- Context-dependent issues: Problems that require understanding intent
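For example, this is the kind of hallucinated call that slips past simple linting because it is an attribute on a real module rather than an undefined name, but that an LLM armed with the project's defs_map can flag. File and function names here are made up:

# utils/helpers.py (hypothetical) defines only hash_password() and validate_email().

# api/views.py (hypothetical)
from utils import helpers

def create_user(username, password):
    # Looks plausible, but helpers.sanitize_input() is not defined anywhere
    # in the project -- a hallucinated call caught by checking against defs_map.
    clean_name = helpers.sanitize_input(username)
    return {"user": clean_name, "pw": helpers.hash_password(password)}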
Confidence Scoring
Findings are scored based on source:
| Source | Confidence | Meaning |
|---|---|---|
| Static + LLM agree | HIGH | Very likely a real issue |
| Static only | MEDIUM | Deterministic match |
| LLM only | MEDIUM | Needs human review |
| Conflict | REVIEW | Flagged for manual inspection |
Local LLM Setup
Ollama (Recommended)
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
# Pull a code model
ollama pull qwen2.5-coder:7b
# Use with Skylos
skylos agent analyze ./src \
--provider openai \
--base-url http://localhost:11434/v1 \
--model qwen2.5-coder:7b
LM Studio
- Download from lmstudio.ai
- Load a model (e.g., codellama-13b-instruct)
- Start the local server (default port: 1234)
- Use with Skylos:
skylos agent analyze ./src \
--provider openai \
--base-url http://localhost:1234/v1 \
--model codellama-13b-instruct
Recommended Models
| Model | Size | Use Case |
|---|---|---|
| qwen2.5-coder:7b | 4GB | Fast, good for most tasks |
| codellama:13b | 8GB | Better reasoning |
| deepseek-coder:6.7b | 4GB | Strong code understanding |
| codellama:34b | 20GB | Best accuracy, requires GPU |
Configuration
pyproject.toml
[tool.skylos]
model = "gpt-4.1"
[tool.skylos.llm]
provider = "openai"
# base_url = "http://localhost:11434/v1" # Uncomment for local
Environment Variables
| Variable | Description |
|---|---|
| OPENAI_API_KEY | OpenAI API key |
| ANTHROPIC_API_KEY | Anthropic API key |
| SKYLOS_LLM_PROVIDER | Force provider: openai or anthropic |
| SKYLOS_LLM_BASE_URL | Custom base URL for OpenAI-compatible APIs |
| OPENAI_BASE_URL | Alternative to SKYLOS_LLM_BASE_URL |
Best Practices
For Analysis
- Use skylos agent analyze for full project context
- Start with static analysis (skylos .) to see baseline findings for free
- Review LLM-only findings carefully — they may need validation
- Trust HIGH confidence findings — both engines agree
For Fixes
- Never blindly apply fixes — always review the diff
- Run tests after applying — verify functionality
- Fix incrementally — one category at a time
For CI/CD
# GitHub Actions example
- name: Skylos Security Scan
  run: |
    skylos agent analyze ./src \
      --format sarif \
      --output skylos-results.sarif \
      --min-confidence high
- name: Upload SARIF
  uses: github/codeql-action/upload-sarif@v2
  with:
    sarif_file: skylos-results.sarif
Troubleshooting
"No API key found"
For cloud providers:
export OPENAI_API_KEY="sk-..."
# or
export ANTHROPIC_API_KEY="sk-ant-..."
For local LLMs, ensure --base-url points to your server (and provide a key only if your server requires one):
skylos agent analyze . --base-url http://localhost:11434/v1 --model qwen2.5-coder:7b
"Connection refused" (local LLM)
- Verify your LLM server is running:
curl http://localhost:11434/v1/models # Ollama
curl http://localhost:1234/v1/models   # LM Studio
- Check the port matches your --base-url
"Model not found"
- Cloud: Verify your API key has access to the model
- Ollama: Run ollama pull <model-name> first
- LM Studio: Load the model in the UI before using
High costs
- Use --min-confidence high to reduce output noise
- Use skylos agent review to analyze only changed files
- Switch to local LLMs for development iteration
- Use smaller models (e.g., gpt-4o-mini, claude-haiku)
Slow responses
- Use local LLMs with GPU acceleration
- Reduce file count with selective auditing
- Try a smaller context with --max-chunk-tokens