BLXBench Docs

Headless Mode

Running benchmarks in automated environments.

Headless mode allows BLXBench to run in CI/CD pipelines, scripts, and automated workflows. Install the blxbench command via @bitslix/blxbench (see Installation) before running the examples below.

Basic Usage

blxbench --headless --provider opr --models openai/gpt-5.4-mini

If the process has no TTY (as is typical in CI), blxbench enters the headless path automatically, so --headless can be omitted.

Reports are written to the user's report directory by default:

  • Linux/macOS: ~/.blxbench/reports/
  • Windows: %USERPROFILE%\.blxbench\reports\

Use --save-json PATH for an additional JSON copy, or use the TUI's /set output-dir PATH when running interactively.
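To consume results from a script, you can pick out the most recent run folder under the report directory. A minimal sketch, assuming the default location from the list above (the `latest_report` helper name is illustrative, not part of blxbench):

```shell
# Print the most recently modified run folder inside a reports directory.
latest_report() {
  # -t sorts by modification time, newest first; -d keeps the directory
  # names themselves instead of listing their contents.
  ls -td "$1"/*/ 2>/dev/null | head -n 1
}

# Typical call against the default location:
#   latest_report "$HOME/.blxbench/reports"
```

If you have pointed blxbench at a different results directory, pass that path instead.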

Integration with CI/CD

GitHub Actions

name: Benchmark
on: [push, pull_request]

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: oven-sh/setup-bun@v1

      - name: Run benchmark
        run: |
          bun install -g @bitslix/blxbench
          blxbench --headless --provider opr --models openai/gpt-5.4-mini

      - name: Upload results
        uses: actions/upload-artifact@v4
        with:
          name: blxbench-results
          path: ~/.blxbench/reports/

GitLab CI

stages:
  - benchmark

benchmark:
  image: oven/bun:1
  script:
    - bun install -g @bitslix/blxbench
    - blxbench --headless --provider opr --models openai/gpt-5.4-mini
    # GitLab only collects artifacts from inside the project directory,
    # so copy the reports there first.
    - cp -r "$HOME/.blxbench/reports" blxbench-reports
  artifacts:
    paths:
      - blxbench-reports/

Exit Codes

Code  Description
0     Success
1     General error
2     Invalid arguments
3     Test failure (with --fail-fast)
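In a script, these codes can be branched on directly. The sketch below wraps a run and translates each documented code into a message; the wrapper name and messages are illustrative, not part of blxbench, and the demo uses a stub in place of a real benchmark run:

```shell
# Hypothetical wrapper: run a command and report on blxbench's
# documented exit codes.
run_and_report() {
  "$@"
  code=$?
  case "$code" in
    0) echo "success" ;;
    1) echo "general error" ;;
    2) echo "invalid arguments" ;;
    3) echo "test failure (--fail-fast)" ;;
    *) echo "unknown exit code: $code" ;;
  esac
  return "$code"
}

# In CI you would call something like:
#   run_and_report blxbench --headless --provider opr --models openai/gpt-5.4-mini
# Demo with a stub that exits 3:
run_and_report sh -c 'exit 3' || true
```

Because the wrapper re-returns the original code, `set -e` pipelines still fail the job on any nonzero result.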

Rate Limiting

Use --ratelimit to avoid hitting provider rate limits:

# Default (60 RPM)
blxbench --headless --provider opr --models openai/gpt-5.4-mini --ratelimit

# Custom (30 requests per minute)
blxbench --headless --provider opr --models openai/gpt-5.4-mini --ratelimit 30

Output Handling

Save JSON Results

blxbench --headless --provider opr --models openai/gpt-5.4-mini --save-json ./my-results.json

--save-json is an extra export. The regular run folder, HTML report, report.json, screenshots, artifacts, and aggregate ranking files still go under ~/.blxbench/reports/ unless you configure another results directory in the TUI.

Capture Output

# Suppress progress output
blxbench --headless --provider opr --models openai/gpt-5.4-mini 2>/dev/null

# Log to file
blxbench --headless --provider opr --models openai/gpt-5.4-mini >> benchmark.log 2>&1

Automated Submission

Set environment variables for automatic submission:

export BLXBENCH_API_KEY=your-key
export BLXBENCH_SUBMIT=1

blxbench --headless --provider opr --models openai/gpt-5.4-mini

Or use the flag:

blxbench --headless --provider opr --models openai/gpt-5.4-mini --submit --api-key your-key

Non-Interactive Detection

BLXBench automatically detects non-TTY environments and skips the TUI. To force the same behavior in a terminal:

blxbench --headless --provider opr --models openai/gpt-5.4-mini
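The usual way a CLI makes this decision is a TTY check on its standard streams. A sketch of that logic in shell (illustrative only; blxbench performs its own detection internally):

```shell
# Report whether stdout is attached to a terminal -- a common signal
# for choosing between an interactive TUI and a headless path.
tty_mode() {
  if [ -t 1 ]; then
    echo "interactive"
  else
    echo "headless"
  fi
}

tty_mode
```

When output is piped or captured (as in CI logs), `[ -t 1 ]` is false and the headless branch is taken.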
