# Quick Start

Run your first benchmark in 5 minutes.
You need the `blxbench` command on your PATH. Install the published package `@bitslix/blxbench` first (see Installation).
## Step 1: Create Your .env
Create a `.env` file in the directory where you run `blxbench`. OpenRouter is the default provider alias (`opr`):
```shell
OPENROUTER_API_KEY=
SUMMARY_PROVIDER=openrouter
SUMMARY_MODEL=qwen/qwen3-235b-a22b-2507
VALIDATION_MODEL=openai/gpt-5.4-mini
```

Fill `OPENROUTER_API_KEY` with your OpenRouter key. `SUMMARY_PROVIDER` and `SUMMARY_MODEL` enable AI-generated run summaries, while `VALIDATION_MODEL` is used to validate `coding_ui` fixtures. Without `VALIDATION_MODEL`, those validation checks are skipped.
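`blxbench` reads `.env` from the current directory, but if you ever need the same variables exported into your shell (for other tooling, say), one common pattern is `set -a` sourcing. This is a generic shell sketch, not a blxbench feature; it uses a temp file so the example is self-contained:

```shell
# Sketch: export every KEY=VALUE pair from a .env file into the environment.
tmpdir=$(mktemp -d)
cat > "$tmpdir/.env" <<'EOF'
SUMMARY_PROVIDER=openrouter
SUMMARY_MODEL=qwen/qwen3-235b-a22b-2507
EOF

set -a            # auto-export every variable assigned while this is on
. "$tmpdir/.env"  # plain KEY=VALUE lines become exported env vars
set +a

echo "$SUMMARY_PROVIDER"
```

Note that this only works for simple `KEY=VALUE` lines; values with spaces would need quoting in the file.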
Provider keys are different from your BLXBench web key: `OPENROUTER_API_KEY`, `OPENAI_API_KEY`, and similar keys authenticate (and pay) with the model provider. `BLXBENCH_API_KEY` is only for `--submit` uploads to the leaderboard.
## Step 2: Run a Benchmark (headless)
Outside an interactive terminal, or when you pass `--headless`, `blxbench` runs without the TUI. There is no `run` subcommand; pass flags directly:
```shell
blxbench --headless --provider opr --models openai/gpt-5.4-mini
```

(`--provider` defaults to `opr` if omitted.) This runs the suite against the given model ID(s).
## Step 3: Filter Tests (Optional)
Run only specific categories or levels:
```shell
# Only speed tests
blxbench --headless --provider opr --models openai/gpt-5.4-mini --category speed

# Only easy difficulty
blxbench --headless --provider opr --models openai/gpt-5.4-mini --level easy
```

## Step 4: View Results
After the run completes, results are saved under `~/.blxbench/reports/`:
```shell
# List report files (find alone does not sort by date)
find ~/.blxbench/reports -name report.json -o -name index.html
```

On Windows, the same directory is `%USERPROFILE%\.blxbench\reports\`. Use `/set output-dir PATH` in the TUI if you want reports somewhere else for a single run.
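The `find` invocation above lists every report. To pick out just the most recent `report.json`, you can sort by modification time with GNU `find`'s `-printf`. A self-contained sketch (a temp directory stands in for `~/.blxbench/reports`, so nothing here depends on an actual run):

```shell
# Sketch: newest report.json by modification time (GNU find's -printf).
reports=$(mktemp -d)
mkdir -p "$reports/run-a" "$reports/run-b"
echo '{}' > "$reports/run-a/report.json"
sleep 1                       # ensure distinct mtimes for the demo
echo '{}' > "$reports/run-b/report.json"

newest=$(find "$reports" -name report.json -printf '%T@ %p\n' \
  | sort -nr | head -n 1 | cut -d' ' -f2-)
echo "$newest"
```

`-printf '%T@ %p\n'` prints an epoch timestamp before each path, so a numeric reverse sort puts the newest report first. On macOS/BSD `find` lacks `-printf`; `stat -f '%m %N'` is the usual substitute.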
## TUI: interactive run, manual upload, resume
If you start `blxbench` in an interactive terminal (no `--headless`), you get the full TUI. After a run:

- Use `/report submit on|off` to control automatic upload when the report is written, or leave it off and press `s` or `r` on the result screen to manually upload the same `report.json`.
- Use `/resume` to pick a previous `report.json` (under your report directory) to review or upload; useful after a failed upload or if you only want to push results later.
Details: TUI and Commands — After a run.
## Step 5: Upload to Leaderboard (Optional)
To submit a report after a headless run:
```shell
export BLXBENCH_API_KEY=your-key
blxbench --headless --provider opr --models openai/gpt-5.4-mini --submit
```

Uploads require an account, a BLXBench API key, and a paid pass tier that includes submission quota (see Account and Pass / pricing).
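Since `--submit` needs credentials, a small pre-flight check in a wrapper script can save a wasted run. The guard is plain POSIX shell, not a blxbench feature; the sketch unsets the key and echoes the command rather than executing it, so it runs anywhere:

```shell
# Sketch: only append --submit when BLXBENCH_API_KEY is actually set.
unset BLXBENCH_API_KEY        # demo only: force the missing-key branch

submit_flag=""
if [ -n "${BLXBENCH_API_KEY:-}" ]; then
  submit_flag="--submit"
else
  echo "BLXBENCH_API_KEY not set; running without --submit" >&2
fi

cmd="blxbench --headless --provider opr --models openai/gpt-5.4-mini $submit_flag"
echo "$cmd"
```

In a real wrapper you would drop the `unset` line and replace the final `echo` with the command itself.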
## Common Options
| Flag | Description |
|---|---|
| `--headless` | Force non-TUI mode (optional when stdout is not a TTY) |
| `--provider` | Provider alias: `opr`, `oai`, `hgf`, `tgr`, `ptk`, `cfr` (see docs) |
| `--models` | One or more model IDs for that provider |
| `--category` | Filter by fixture category (e.g. `speed`, `security`, `coding_ui`) |
| `--level` | Filter by difficulty (`easy`, `medium`, `hard`) |
| `--limit` | Max tests per category |
| `--save-json` | Custom output path for JSON results |
| `--fail-fast` | Stop on first failure |
| `--submit` | POST `report.json` after the run (needs API key + quota) |
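These flags compose freely. One way to keep repeated runs consistent is a tiny wrapper that assembles the command from a shared option string; the wrapper below is a hypothetical sketch (it echoes the command instead of running it, since `blxbench` may not be installed where you test it):

```shell
# Sketch: build a blxbench invocation from shared options.
model="openai/gpt-5.4-mini"
common="--headless --provider opr --models $model --level easy --limit 5"
cmd="blxbench $common --save-json results.json"
echo "$cmd"
# In a real wrapper, execute it instead of echoing:  eval "$cmd"
```

Keeping the shared options in one place means a model or provider swap is a one-line change across all your run scripts.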
## Next Steps
- blxbench Reference — All available commands
- Understanding Results — How to read the results