Reference
GNO publishes a fine-tuned retrieval expansion model, gno-expansion-slim-retrieval-v1, trained locally on Apple Silicon with MLX LoRA. It is the default on fresh installs and benchmarks meaningfully above the base slim preset.
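To make the role of a retrieval expansion model concrete, here is a minimal conceptual sketch: before a query reaches the retriever, the model generates additional related terms to broaden recall. The `expand` function and the synonym table below are trivial stand-ins for the actual model, purely for illustration.

```python
# Conceptual sketch of retrieval query expansion. The real model generates
# expansion terms; this hypothetical synonym table just mimics the shape.
SYNONYMS = {"auth": ["login", "oauth", "credentials"]}

def expand(query: str) -> str:
    """Append generated expansion terms to the original query."""
    terms = query.split()
    extra = [s for t in terms for s in SYNONYMS.get(t, [])]
    return " ".join(terms + extra)

print(expand("auth errors"))  # -> "auth errors login oauth credentials"
```

The expanded string is then handed to the retriever in place of the raw query, which is why expansion quality directly moves retrieval benchmarks.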
gno models use slim-tuned
gno models pull

The training harness lives in the main GNO repo under research/. It uses MLX LoRA for local fine-tuning and can export GGUF artifacts you can install via the Hugging-Face-backed custom preset format.
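As a rough sketch of the kind of flow such a harness wraps, the mlx-lm package exposes LoRA training and adapter fusing (with optional GGUF export) as CLI entry points. The model name, data path, and iteration count below are illustrative assumptions, not the harness's actual arguments; consult the research/ directory for the real invocation.

```shell
# Hypothetical MLX LoRA flow (illustrative arguments, not GNO's real config):
# 1) train LoRA adapters locally on Apple Silicon
python -m mlx_lm.lora --model Qwen/Qwen2.5-0.5B --train --data ./data --iters 600
# 2) fuse adapters into the base weights and export a GGUF artifact
python -m mlx_lm.fuse --model Qwen/Qwen2.5-0.5B --adapter-path ./adapters --export-gguf
```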
Before promoting a model change, run the benchmark suite. See the benchmarks feature page for the full canonical corpora and the autonomous candidate search.
bun run eval:hybrid
bun run bench:code-embeddings --candidate bge-m3 --write
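Promotion after benchmarking is essentially a gating decision: adopt the candidate only if it beats the current baseline by more than measurement noise. The sketch below shows that gate; the metric values and margin are illustrative, not numbers from the GNO suite.

```python
# Minimal promotion gate, assuming a higher-is-better retrieval metric
# (e.g. nDCG@10). The 0.01 margin is an illustrative noise threshold.
def should_promote(baseline: float, candidate: float, margin: float = 0.01) -> bool:
    """Promote only if the candidate beats the baseline by at least `margin`."""
    return candidate >= baseline + margin

print(should_promote(0.712, 0.734))  # clear win -> True
print(should_promote(0.712, 0.715))  # within noise -> False
```

Requiring a margin rather than a bare improvement keeps a candidate from being promoted on run-to-run benchmark variance alone.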