Liquid AI desk

Run the local LFM2 transcript model through Ollama, score outputs, and capture JSONL tuning pairs.

Local-only tuning surface: the browser calls your Ollama endpoint directly; no hosted model API is required.
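
A minimal sketch of that browser-side call, assuming Ollama's default endpoint (http://localhost:11434) and a hypothetical model tag; the real tag is whatever name the Modelfile was registered under with ollama create:

    // Sketch of the browser -> Ollama call. The endpoint is Ollama's default;
    // the model tag is an assumption -- use the name passed to `ollama create`.
    const OLLAMA = "http://localhost:11434";
    const MODEL = "liquidai-lfm2-transcript"; // hypothetical tag

    async function runPrompt(prompt: string, onToken: (t: string) => void): Promise<void> {
      const res = await fetch(`${OLLAMA}/api/generate`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ model: MODEL, prompt, stream: true }),
      });
      // Ollama streams newline-delimited JSON chunks, each carrying a `response` field.
      const reader = res.body!.getReader();
      const decoder = new TextDecoder();
      let buf = "";
      for (;;) {
        const { done, value } = await reader.read();
        if (done) break;
        buf += decoder.decode(value, { stream: true });
        let nl: number;
        while ((nl = buf.indexOf("\n")) >= 0) {
          const line = buf.slice(0, nl).trim();
          buf = buf.slice(nl + 1);
          if (line) onToken(JSON.parse(line).response ?? "");
        }
      }
    }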

LiquidAI / LFM2 integration

Live tuning loop for captions, summaries, and market-desk notes

This desk points at your local GGUF mirror and an Ollama-compatible endpoint. Run a prompt, review the answer, write the corrected target, then export JSONL for later SFT or evaluation.
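
JSONL is just one JSON object per line; a plausible shape for a captured pair, with field names that are an assumption rather than the desk's actual schema:

    // Sketch of a captured tuning pair and its JSONL serialization.
    // Field names are assumptions; match them to your SFT tooling.
    interface TuningPair {
      prompt: string;   // what was sent to the model
      response: string; // what the model answered
      target: string;   // the corrected answer to learn from
      score?: number;   // optional quality rating from review
    }

    function toJsonl(pairs: TuningPair[]): string {
      return pairs.map((p) => JSON.stringify(p)).join("\n") + "\n";
    }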

Start command: cd "/Volumes/qbitOS/00.dev/cursor/train/train" && ./scripts/run-liquid-ai-live.sh

Desk panels

  • Runtime: reports connection status ("ready to connect" until the endpoint answers).
  • Output: streams the model's answer here; empty until the first run.
  • Live tuning buffer: counts the captured pairs. Saved pairs stay in this browser until you download them or clear site data.
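
A sketch of that persistence, assuming the buffer lives in localStorage and the export goes through a Blob download; the storage key and helper names are hypothetical, and TuningPair/toJsonl refer to the schema sketch above:

    // Sketch of buffer persistence and export. The storage key is hypothetical;
    // TuningPair and toJsonl come from the schema sketch above.
    const KEY = "liquid-ai-desk.pairs";

    function loadPairs(): TuningPair[] {
      return JSON.parse(localStorage.getItem(KEY) ?? "[]");
    }

    function savePair(pair: TuningPair): void {
      localStorage.setItem(KEY, JSON.stringify([...loadPairs(), pair]));
    }

    function downloadJsonl(): void {
      const blob = new Blob([toJsonl(loadPairs())], { type: "application/jsonl" });
      const a = document.createElement("a");
      a.href = URL.createObjectURL(blob);
      a.download = "tuning-pairs.jsonl";
      a.click();
      URL.revokeObjectURL(a.href);
    }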

Local wiring

  • GGUF: /Volumes/qbitOS/03.models/01-ollama/hf-mirror/LiquidAI/LFM2-2.6B-Transcript-GGUF/LFM2-2.6B-Transcript-Q4_K_M.gguf
  • Modelfile: train/Modelfile.liquidai-lfm2-transcript
  • One-command loop: cd "/Volumes/qbitOS/00.dev/cursor/train/train" && ./scripts/run-liquid-ai-live.sh
  • CORS note: when the startup script has to launch Ollama itself, it sets OLLAMA_ORIGINS to the local Astro origins. If Ollama is already running, restart it with a matching OLLAMA_ORIGINS whenever the browser blocks the request; see the probe sketch below this list.
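
A quick way to tell a CORS block from a dead server, as a sketch; the endpoint is Ollama's default, and the origin shown is Astro's default dev port (4321):

    // Sketch: probe the local Ollama endpoint from the browser. A fetch that
    // rejects with a TypeError usually means a CORS block or Ollama not running.
    async function checkOllama(): Promise<void> {
      try {
        const res = await fetch("http://localhost:11434/api/tags"); // lists local models
        const { models } = await res.json();
        console.log("Ollama reachable:", models.map((m: { name: string }) => m.name));
      } catch {
        console.error(
          "Could not reach Ollama. If it is running, restart it with e.g.\n" +
          '  OLLAMA_ORIGINS="http://localhost:4321" ollama serve'
        );
      }
    }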