LiquidAI / LFM2 integration
Live tuning loop for captions, summaries, and market-desk notes
This desk points at your local GGUF mirror and an Ollama-compatible endpoint. Run a prompt, review the answer, write the corrected target, then export the pairs as JSONL for later SFT or evaluation.
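Exported pairs land one JSON object per line. The exact schema is the desk's to define; a plausible record (field names `prompt` and `completion` are an assumption, not confirmed) looks like:

```jsonl
{"prompt": "Caption this chart of Q3 desk flows.", "completion": "Corrected caption written during review."}
```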
Start command
cd "/Volumes/qbitOS/00.dev/cursor/train/train" && ./scripts/run-liquid-ai-live.sh

Output
Not run yet. Run the model to stream output here.
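If the desk shows no output, the endpoint can be poked directly from a terminal. A hedged sketch, assuming Ollama's default port and that the Modelfile was registered under the name liquidai-lfm2-transcript (check `ollama list` for the real name):

```shell
# Assumed model name; adjust to whatever `ollama list` reports.
curl -s http://localhost:11434/api/generate \
  -d '{"model": "liquidai-lfm2-transcript", "prompt": "Caption: Q3 desk note", "stream": false}'
```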
Live tuning buffer
0 pairs. Saved pairs stay in this browser until you download them or clear site data.
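Once downloaded, the buffer can be sanity-checked offline before any SFT run. A minimal sketch; the file name lfm2-pairs.jsonl and the prompt/completion field names are assumptions:

```shell
# Write two example pairs, then confirm every line parses as standalone JSON
# with the expected keys. Replace the heredoc with your downloaded file.
cat > lfm2-pairs.jsonl <<'EOF'
{"prompt": "Caption this chart.", "completion": "Corrected caption text."}
{"prompt": "Summarize the desk note.", "completion": "Corrected summary text."}
EOF
python3 -c '
import json
with open("lfm2-pairs.jsonl") as f:
    pairs = [json.loads(line) for line in f]
assert all("prompt" in p and "completion" in p for p in pairs)
print(f"{len(pairs)} valid pairs")
'
```

A line that fails to parse here would also fail most SFT data loaders, so this catches truncated downloads early.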
Local wiring
- GGUF: /Volumes/qbitOS/03.models/01-ollama/hf-mirror/LiquidAI/LFM2-2.6B-Transcript-GGUF/LFM2-2.6B-Transcript-Q4_K_M.gguf
- Modelfile: train/Modelfile.liquidai-lfm2-transcript
- One-command loop: cd "/Volumes/qbitOS/00.dev/cursor/train/train" && ./scripts/run-liquid-ai-live.sh
- CORS note: the startup script launches Ollama with the local Astro origins allow-listed when it needs to start the server. If Ollama is already running and the browser blocks the request, restart it with a matching OLLAMA_ORIGINS.
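If the browser console shows a CORS error, a restart along these lines usually clears it. Port 4321 is Astro's dev-server default; the exact origin list depends on your setup:

```shell
# Stop any running Ollama server, then relaunch with the Astro dev origin allowed.
pkill -x ollama 2>/dev/null
OLLAMA_ORIGINS="http://localhost:4321" ollama serve
```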