Train · Live relay
A technical guide to the live captions desk
This page is for education and research layout only — not investment advice. It borrows structural patterns from long-form product guides and research portals (clear sections, dated evidence, and explorable cards) while keeping execution and compliance outside this static site.
Live captions and Granite Tiny side chat
Companion to the qbitOS Bloomberg tool in `bloomberg/live chat/browser-page/`: a reference stream, plus caption-driven chat when the local Python relay is running.
This site does not start the relay. In a terminal, run the copied command (or `python3 server.py` from that folder), wait for “Serving http://…”, then use the buttons. If a new tab does not appear, try Safari or Chrome; the buttons fall back to navigating this tab when pop-ups are blocked.
Relay CLI, HTTP API, and dashboard tools
— inventory of everything in bloomberg/live chat/.
Captions and model reactions stream from the relay; this directory site stays static on GitHub Pages.
Quick setup
Drop-in relay next to this static Train shell
Change one working directory, run one command, open one port. Works with the bundled Python forwarder,
the Bloomberg browser page, Granite Tiny over Ollama (or your host), and any tab that can reach 127.0.0.1:8765.
- From this repo root (…/cursor/train), run `python3 server.py` — it forwards to `../bloomberg/live chat/browser-page` (default port 8765). Or `cd` there and run `python3 server.py` directly.
- Use Open relay once the server prints “Serving…” — for the full dashboard (captions, metrics, chat).
- HTTPS production hosts cannot embed http://127.0.0.1; use localhost over HTTP during dev if you need both in one tab.
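The “wait until the server prints Serving…” step can also be scripted. A minimal sketch, assuming the default bind address from this page; `health_url` and `wait_for_relay` are illustrative names, not helpers that ship with the repo:

```python
import time
import urllib.request


def health_url(host: str = "127.0.0.1", port: int = 8765) -> str:
    """Build the relay's lightweight health endpoint URL."""
    return f"http://{host}:{port}/api/health"


def wait_for_relay(host: str = "127.0.0.1", port: int = 8765,
                   timeout: float = 10.0) -> bool:
    """Poll /api/health until the relay answers 200 or the timeout expires."""
    deadline = time.monotonic() + timeout
    url = health_url(host, port)
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=1) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            time.sleep(0.5)  # relay not up yet; retry
    return False
```

A script can call `wait_for_relay()` before opening the dashboard tab, instead of watching the terminal for the “Serving…” line.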
Relay tools & API
Checklist for the Bloomberg live chat tree: terminal harness, HTTP
relay, and the dashboard you open at http://127.0.0.1:8765/ after
starting the server. Published alongside this desk at fornevercollective.github.io/train/live-captions/.
Prerequisites
- Python 3 — relay and harness.
- yt-dlp — discovers the live English caption playlist for the configured YouTube URL.
- Ollama (or a compatible POST …/api/generate host) with a chat model; the default in the repo is `granite-4-h-tiny-local` (see `Modelfile.granite-4-h-tiny-local` beside the harness).
Ways to start the browser relay
- Train repo root — `python3 server.py` (shim runs `../bloomberg/live chat/browser-page/server.py`, default `127.0.0.1:8765`).
- Browser package — `cd …/browser-page`, then `python3 server.py` or `./run-local-agent.sh` (respects `OLLAMA_HOST`, `BLOOMBERG_CHAT_MODEL`/`OLLAMA_MODEL`).
Terminal-only caption harness
`live_caption_sidechat.py` (same live chat folder, not the browser server) polls captions and prints CAPTION / SIDECHAT lines to stdout for a bounded number of generations — useful for smoke tests without HTTP.
Flags: `--video-url`, `--language`, `--model`, `--poll-interval`, `--history-cues`, `--min-new-cues`, `--max-generations`.
server.py CLI (browser relay)
- `--host`/`--port` — bind address (default `127.0.0.1:8765`).
- `--video-url` — YouTube live URL for captions + embed.
- `--model` — Ollama model name.
- `--ollama-host` — base URL or full `…/api/generate`; otherwise `OLLAMA_URL`/`OLLAMA_HOST`.
- `--poll-interval`, `--min-new-cues`, `--history-cues` — caption polling cadence and buffer.
- `--chat-temperature`, `--output-temperature`, `--system-prompt-suffix` — sampling and an optional side-chat style hint (break / wrap / forecast cards use the output temperature).
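For reference, the flag surface above could be mirrored with a standard `argparse` parser. This is a hypothetical sketch reconstructed from this page, not the actual parser in `server.py`; defaults beyond `--host`/`--port` are assumptions:

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    """Hypothetical mirror of the server.py flags documented above."""
    p = argparse.ArgumentParser(prog="server.py")
    p.add_argument("--host", default="127.0.0.1")
    p.add_argument("--port", type=int, default=8765)
    p.add_argument("--video-url")
    p.add_argument("--model")
    p.add_argument("--ollama-host")
    p.add_argument("--poll-interval", type=float)
    p.add_argument("--min-new-cues", type=int)
    p.add_argument("--history-cues", type=int)
    p.add_argument("--chat-temperature", type=float)
    p.add_argument("--output-temperature", type=float)
    p.add_argument("--system-prompt-suffix")
    return p


# Example invocation matching the repo's default model name.
args = build_parser().parse_args(["--model", "granite-4-h-tiny-local"])
```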
HTTP surface
- GET `/`, `/index.html` — dashboard.
- GET `/app.js`, `/styles.css` — client assets.
- GET `/api/state` — JSON for captions, chat, metrics, embed URL, agent snapshot.
- GET `/api/health` — lightweight `ok` ping.
- POST `/api/output` — JSON object with string field `mode` set to `break`, `wrap`, or `forecast` (same as the dashboard output buttons).
- POST `/api/agent-settings` — live-update the model, Ollama host, chat/output temperatures, and side-chat system suffix (same fields as Your local agent in the UI).
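The POST /api/output contract above is small enough to exercise from the standard library. A sketch that validates the `mode` field before sending; `output_payload` and `post_output` are illustrative names, and the call only succeeds when the relay is actually running on the default port:

```python
import json
import urllib.request

VALID_MODES = {"break", "wrap", "forecast"}


def output_payload(mode: str) -> bytes:
    """Encode the POST /api/output body; reject unknown modes up front."""
    if mode not in VALID_MODES:
        raise ValueError(f"mode must be one of {sorted(VALID_MODES)}")
    return json.dumps({"mode": mode}).encode("utf-8")


def post_output(mode: str, base: str = "http://127.0.0.1:8765") -> None:
    """Trigger an output card, same as the dashboard buttons."""
    req = urllib.request.Request(
        base + "/api/output",
        data=output_payload(mode),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)  # requires the relay to be up
```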
Dashboard tools (in-browser)
- Toolbar: Refresh, Copy yt-dlp reference command, connection status, collapsible top region.
- Error panel with reproducible Copy for the failing command.
- Your local agent — model, Ollama base URL, side-chat and output-card temperatures, extra side-chat system instructions, Apply to server.
- Live counters and Statistical analysis — model status, caption/chat rates, token and cost estimates, react / question / watch mix.
- Live captions transcript with optional speaker inference and per-cue model reactions.
- Main stage: reference clock, reaction strip above the player, YouTube embed, compact caption mirror when the top chrome is hidden.
- Granite Tiny side chat — reading-direction toggle, per-kind counts (watch / react / question), streaming chat lines.
- Break / wrap / forecast output — Commercial break, Wrap up, Estimate / forecast buttons; commercial-break card can also emit automatically after prolonged caption silence.
How the desk works
Most chat tools run linearly: you type, the model answers. This desk is designed more like a review workflow: captions arrive as a noisy primary source, the side chat proposes structure, and you reorganize claims before anything is treated as “settled.”
When a parameter is missing — which venue, which contract month, which session window — the UI should ask before promoting text to the evidence log. That mirrors how agent-style guides treat intent: narrow the trigger, then lock the action.
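That gating step can be sketched as a plain completeness check. The field names (venue, contract month, session window) come from the paragraph above; the function name and the dict shape are hypothetical, not part of the desk:

```python
# Fields a claim must carry before it is promoted to the evidence log.
REQUIRED_FIELDS = ("venue", "contract_month", "session_window")


def missing_parameters(claim: dict) -> list:
    """Return the fields the UI should ask about before promoting text."""
    return [f for f in REQUIRED_FIELDS if not claim.get(f)]
```

An empty return value means the trigger is fully narrowed and the claim can be locked into the log; anything else becomes a question back to the operator.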
Living memo loop
The workflow here is a session-long research brief: repeated searches, multiple sources, and one canonical document that changes as new evidence arrives. The captions desk should treat every cue, web hit, archive clip, and connector result as evidence that can update the draft without losing provenance.
Search + capture
Each μsearch or connector run adds source cards: query, URL, timestamp, excerpt, and confidence. Nothing becomes prose until it is attached to a source record.
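A minimal sketch of the source card, using exactly the fields named above; the class name and field types are assumptions, not a schema the relay emits today:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class SourceCard:
    """One captured piece of evidence; nothing becomes prose without one."""
    query: str
    url: str
    timestamp: str     # ISO 8601 capture time, e.g. "2025-01-01T14:03:00Z"
    excerpt: str
    confidence: float  # 0.0-1.0, estimated at capture time
```

Freezing the dataclass makes cards immutable after capture, which keeps provenance stable while the draft on top of them keeps changing.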
Route + section
Use routing for source type and intent: live caption, market archive, quote, document, or conflicting claim. This mirrors the simple routing pattern in Anthropic’s agent guidance.
Reconcile draft
Send the previous memo plus new evidence into a constrained revise step: update sections, merge duplicates, preserve citations, and mark conflicts instead of hiding them.
Checkpoint + diff
Every AI edit creates a versioned checkpoint, so a bad merge can be rolled back and paragraph-level provenance still shows which search or transcript turn changed the text.
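The checkpoint-and-diff step maps directly onto the standard library. A sketch using `difflib`; storing these diffs alongside each checkpoint is one way to get the paragraph-level provenance described above:

```python
import difflib


def checkpoint_diff(previous: str, revised: str) -> list:
    """Unified diff between two memo checkpoints, so a bad merge can be
    inspected line by line and rolled back."""
    return list(difflib.unified_diff(
        previous.splitlines(),
        revised.splitlines(),
        fromfile="memo@n-1",
        tofile="memo@n",
        lineterm="",
    ))
```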
mueee.qbitos.ai handoff
The next product step is to let the live captions desk emit a portable research-session bundle for
mueee.qbitos.ai: current draft, source cards, citations,
conflicts, and model feedback. That supports prompt chaining for outline → draft → review, plus
evaluator-optimizer loops when a section needs another search pass.
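The bundle described above might serialize as a single JSON document. The key names here are assumptions about a future format, not a published mueee.qbitos.ai spec:

```python
import json


def session_bundle(draft, cards, conflicts, feedback):
    """Hypothetical portable research-session bundle for the handoff.

    `cards` is a list of dicts with at least a "url" key, mirroring the
    source-card fields used elsewhere on this page.
    """
    return json.dumps({
        "draft": draft,
        "sources": cards,
        "citations": [c["url"] for c in cards],
        "conflicts": conflicts,
        "model_feedback": feedback,
    }, indent=2)
```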
Interpretability & data
Explorable explainers and HCI research are useful references when you label caption clusters or explain why a theme fired. External links open in a new tab.
PAIR — human-centered ML
Explainability, interpretability, and fairness framing for anything you infer from captions or side chat.
- HCI
- Interpretability
PAIR research archive
Paper-style references when you want the desk to read like a literature-backed review, not a tweet thread.
- Research
- Papers
AI Explorables
Interactive explainers — good mental model for turning caption streams into explorable sections.
- Explorable
- Visualization
Public — Agents prompting guide
Long-form “how we think about intent” layout: triggers, boundaries, and review before anything goes live.
- Agents
- Prompting
Public Trading API changelog
Dated deltas and capabilities — mirror this shape for “what changed in our digest since last session.”
- Changelog
- API
Working paper from captions
Below is a stub digest: the browser groups seed caption lines by theme. When the relay writes JSON (captions + chat turns), replace the script source with that payload and re-render the same sections — abstract, claims, and catalog cross-check.
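The grouping the browser does can be sketched in a few lines. The keyword-to-theme map below is invented for illustration (the real dashboard groups in JavaScript, and a relay payload would carry its own labels):

```python
from collections import defaultdict

# Hypothetical keyword -> theme map; illustration only.
THEMES = {
    "gold": "metals", "silver": "metals",
    "futures": "benchmarks", "nasdaq": "benchmarks",
    "bitcoin": "crypto",
}


def group_by_theme(cues):
    """Bucket caption lines under the first matching theme keyword;
    unmatched lines land in 'unsorted'."""
    buckets = defaultdict(list)
    for cue in cues:
        lower = cue.lower()
        theme = next((t for k, t in THEMES.items() if k in lower), "unsorted")
        buckets[theme].append(cue)
    return dict(buckets)
```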
Abstract
Loading caption-derived summary…
Claims under review
Catalog cross-check
Map each theme to symbols present in the Erika directory (metals: GC=F, SI=F; benchmarks: ES=F, QQQ; majors: BTC-USD). Coverage rows on the Directory tab remain the ground truth for what is actually in `catalog.json`.
Evidence log
Dated deltas for the desk itself — same rhythm as an API changelog.
- Relay inventory on desk page — the live captions route now documents the full Bloomberg `live chat` surface: terminal harness, `run-local-agent.sh`, relay CLI flags, HTTP endpoints, and dashboard controls, so GitHub Pages matches the local toolset.
- Desk shell: captions + catalog cross-check — introduced a dedicated captions route with a working-paper scaffold. The claims list is client-generated from seed caption lines until the relay exposes JSON.
- Relay panel shared with home — the Bloomberg live relay card is a single component, so the same DOM contract (`#home-bloomberg-live`) works on home and on this desk for deep links.
- Market alignment stub — priority markets from the Erika catalog (metals, benchmarks, crypto presets) are the default “universe” for tagging caption-derived themes.
External references
Design inspiration only — no affiliation. Agent workflows, recipe cards, Public’s guide and API changelog, and Google PAIR references for explainable research surfaces.