Research desk

Rebuild a fact-style brief from captions and conversation, then cross-check themes against the static market catalog.

Built from Erika dataset metadata for browsing, filtering, and static profile lookup.


A technical guide to the live captions desk

May 11, 2026

This page is for education and research layout only — not investment advice. It borrows structural patterns from long-form product guides and research portals (clear sections, dated evidence, and explorable cards) while keeping execution and compliance outside this static site.

Bloomberg live relay

Live captions and Granite Tiny side chat

A companion to the qbitOS Bloomberg tool in bloomberg/live chat/browser-page/: watch the reference stream, then switch to caption-driven chat once the local Python relay is running.

This site does not start the relay. In a terminal, run the copied command (or python3 server.py from that folder), wait for “Serving http://…”, then use the buttons. If a new tab does not appear, try Safari or Chrome; the buttons fall back to navigating this tab when pop-ups are blocked.

Reference stream
Granite Tiny side chat

Captions and model reactions stream from the relay; this directory site stays static on GitHub Pages.

Quick setup

Drop-in relay next to this static Train shell

Change one working directory, run one command, open one port. Works with the bundled Python forwarder, the Bloomberg browser page, Granite Tiny over Ollama (or your host), and any tab that can reach 127.0.0.1:8765.

Train
Python
Granite
Browser
:8765
And more
  1. From this repo root (…/cursor/train), run python3 server.py — it forwards to ../bloomberg/live chat/browser-page (default port 8765). Or cd there and run python3 server.py directly.
  2. Use Open relay once the server prints “Serving…” — for the full dashboard (captions, metrics, chat).
  3. Browsers block mixed content, so an HTTPS production host cannot embed http://127.0.0.1; during development, serve everything over plain HTTP on localhost if you need both in one tab.
Relay link targets 127.0.0.1:8765
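Before clicking Open relay, you can confirm the server is actually listening. A minimal sketch, assuming only that the relay answers GET /api/health on 127.0.0.1:8765 as listed in the HTTP surface section (the helper names are illustrative):

```python
import urllib.request
import urllib.error

def relay_url(host="127.0.0.1", port=8765, path="/api/health"):
    """Build a URL against the local relay (defaults from the quick-setup notes)."""
    return f"http://{host}:{port}{path}"

def relay_is_up(url=None, timeout=2.0):
    """True if the relay answers its health ping, False otherwise."""
    url = url or relay_url()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

Run it after `python3 server.py` prints “Serving…”; a False result usually means the relay is not running or is bound to a different port.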

Relay tools & API

Checklist for the Bloomberg live chat tree: terminal harness, HTTP relay, and the dashboard you open at http://127.0.0.1:8765/ after starting the server. Published alongside this desk at fornevercollective.github.io/train/live-captions/.

Prerequisites

  • Python 3 — relay and harness.
  • yt-dlp — discovers the live English caption playlist for the configured YouTube URL.
  • Ollama (or compatible POST …/api/generate host) with a chat model; default in repo is granite-4-h-tiny-local (see Modelfile.granite-4-h-tiny-local beside the harness).

Ways to start the browser relay

  • Train repo root: python3 server.py (shim runs ../bloomberg/live chat/browser-page/server.py, default 127.0.0.1:8765).
  • Browser package: cd …/browser-page, then python3 server.py or ./run-local-agent.sh (respects OLLAMA_HOST and BLOOMBERG_CHAT_MODEL / OLLAMA_MODEL).
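If you prefer launching from a supervising script, the environment overrides above can be assembled before spawning the server. A sketch under stated assumptions: relay_command is a hypothetical helper, and only the variable names come from the bullets above.

```python
import os

def relay_command(model=None, ollama_host=None):
    """Return (argv, env) for launching the browser relay with overrides applied."""
    env = dict(os.environ)
    if model:
        env["BLOOMBERG_CHAT_MODEL"] = model   # the package also reads OLLAMA_MODEL
    if ollama_host:
        env["OLLAMA_HOST"] = ollama_host
    return ["python3", "server.py"], env
```

Pass the result to subprocess.Popen with cwd set to the browser-page directory.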

Terminal-only caption harness

live_caption_sidechat.py (same live chat folder, not the browser server) polls captions and prints CAPTION / SIDECHAT lines to stdout for a bounded number of generations — useful for smoke tests without HTTP.

Flags: --video-url, --language, --model, --poll-interval, --history-cues, --min-new-cues, --max-generations.
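Those flags can be assembled programmatically for smoke-test automation. A hedged sketch: harness_argv is a hypothetical helper, and the default values shown are illustrative, not the script's real defaults.

```python
def harness_argv(video_url, model="granite-4-h-tiny-local", language="en",
                 poll_interval=5.0, history_cues=12, min_new_cues=1,
                 max_generations=10):
    """Command line for live_caption_sidechat.py built from its documented flags."""
    return [
        "python3", "live_caption_sidechat.py",
        "--video-url", video_url,
        "--language", language,
        "--model", model,
        "--poll-interval", str(poll_interval),
        "--history-cues", str(history_cues),
        "--min-new-cues", str(min_new_cues),
        "--max-generations", str(max_generations),
    ]
```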

server.py CLI (browser relay)

  • --host / --port — bind address (default 127.0.0.1:8765).
  • --video-url — YouTube live URL for captions + embed.
  • --model — Ollama model name.
  • --ollama-host — base URL or full …/api/generate; else OLLAMA_URL / OLLAMA_HOST.
  • --poll-interval, --min-new-cues, --history-cues — caption polling cadence and buffer.
  • --chat-temperature, --output-temperature, --system-prompt-suffix — sampling and optional side-chat style hint (break / wrap / forecast cards use output temperature).

HTTP surface

  • GET /, /index.html — dashboard.
  • GET /app.js, /styles.css — client assets.
  • GET /api/state — JSON for captions, chat, metrics, embed URL, agent snapshot.
  • GET /api/health — lightweight ok ping.
  • POST /api/output — JSON object with string field mode set to break, wrap, or forecast (same as the dashboard output buttons).
  • POST /api/agent-settings — live update model, Ollama host, chat/output temperatures, and side-chat system suffix (same fields as Your local agent in the UI).
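A minimal client for this surface, using only the endpoints listed above (helper names are illustrative; error handling is left out):

```python
import json
import urllib.request

BASE = "http://127.0.0.1:8765"

def get_state(base=BASE, timeout=5.0):
    """GET /api/state: captions, chat, metrics, embed URL, agent snapshot."""
    with urllib.request.urlopen(f"{base}/api/state", timeout=timeout) as resp:
        return json.load(resp)

def output_payload(mode):
    """Request body for POST /api/output; mode must be break, wrap, or forecast."""
    if mode not in ("break", "wrap", "forecast"):
        raise ValueError(f"unsupported mode: {mode!r}")
    return json.dumps({"mode": mode}).encode("utf-8")

def post_output(mode, base=BASE, timeout=5.0):
    """Trigger the same card the dashboard output buttons produce."""
    req = urllib.request.Request(
        f"{base}/api/output",
        data=output_payload(mode),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.status
```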

Dashboard tools (in-browser)

  • Toolbar: Refresh, Copy yt-dlp reference command, connection status, collapsible top region.
  • Error panel with reproducible Copy for the failing command.
  • Your local agent — model, Ollama base URL, side-chat and output-card temperatures, extra side-chat system instructions, Apply to server.
  • Live counters and Statistical analysis — model status, caption/chat rates, token and cost estimates, react / question / watch mix.
  • Live captions transcript with optional speaker inference and per-cue model reactions.
  • Main stage: reference clock, reaction strip above the player, YouTube embed, compact caption mirror when the top chrome is hidden.
  • Granite Tiny side chat — reading-direction toggle, per-kind counts (watch / react / question), streaming chat lines.
  • Break / wrap / forecast output: Commercial break, Wrap up, and Estimate / forecast buttons; the commercial-break card can also emit automatically after prolonged caption silence.

How the desk works

Most chat tools run linearly: you type, the model answers. This desk is designed more like a review workflow: captions arrive as a noisy primary source, the side chat proposes structure, and you reorganize claims before anything is treated as “settled.”

When a parameter is missing — which venue, which contract month, which session window — the UI should ask before promoting text to the evidence log. That mirrors how agent-style guides treat intent: narrow the trigger, then lock the action.
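That gate can be made concrete. A sketch, assuming a claim is a plain dict; the field names (venue, contract_month, session_window) paraphrase the examples above and are hypothetical:

```python
QUESTIONS = {                     # hypothetical parameter names
    "venue": "Which venue?",
    "contract_month": "Which contract month?",
    "session_window": "Which session window?",
}

def missing_params(claim):
    """Questions the UI should ask before promoting this claim."""
    return [q for key, q in QUESTIONS.items() if not claim.get(key)]

def promote(claim, evidence_log):
    """Append only fully specified claims; otherwise return the open questions."""
    open_questions = missing_params(claim)
    if not open_questions:
        evidence_log.append(claim)
    return open_questions
```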

Living memo loop

The target workflow is a session-long research brief: repeated searches, multiple sources, and one canonical document that changes as new evidence arrives. The captions desk should treat every cue, web hit, archive clip, and connector result as evidence that can update the draft without losing provenance.

01

Search + capture

Each μsearch or connector run adds source cards: query, URL, timestamp, excerpt, and confidence. Nothing becomes prose until it is attached to a source record.
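The source-card rule can be sketched as a small record type; the field names follow the sentence above, and attach is a hypothetical helper enforcing “no prose without a source”:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SourceCard:
    """One captured search/connector result, per the fields named above."""
    query: str
    url: str
    excerpt: str
    confidence: float              # 0.0 to 1.0, however the desk scores it
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def attach(card, prose, sources):
    """Admit prose only alongside its source record."""
    sources.append(card)
    return {"text": prose, "source_url": card.url}
```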

02

Route + section

Use routing for source type and intent: live caption, market archive, quote, document, or conflicting claim. This mirrors the simple routing pattern in Anthropic’s agent guidance.
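A minimal version of that routing step, assuming evidence records carry a declared kind and falling back conservatively when they do not (the fallback policy is an assumption, not part of the guidance cited):

```python
ROUTES = ("live_caption", "market_archive", "quote", "document", "conflicting_claim")

def route(evidence):
    """Section route for an evidence record; unknown kinds fall back to
    document, or to conflicting_claim when a disagreement is already flagged."""
    kind = evidence.get("kind")
    if kind in ROUTES:
        return kind
    return "conflicting_claim" if evidence.get("conflicts") else "document"
```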

03

Reconcile draft

Send the previous memo plus new evidence into a constrained revise step: update sections, merge duplicates, preserve citations, and mark conflicts instead of hiding them.
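A sketch of that constrained revise step, assuming evidence items are dicts keyed by url with a claim field (both hypothetical names): duplicates merge by URL, and disagreements are recorded rather than overwritten.

```python
def reconcile(memo, new_evidence):
    """Fold new evidence into the memo: merge duplicates by URL, keep every
    citation, and record disagreements instead of hiding them."""
    by_url = {e["url"]: e for e in memo["evidence"]}
    conflicts = list(memo.get("conflicts", []))
    for item in new_evidence:
        prior = by_url.get(item["url"])
        if prior is None:
            by_url[item["url"]] = item                         # new citation
        elif prior["claim"] != item["claim"]:
            conflicts.append((prior["claim"], item["claim"]))  # mark, don't merge
    return {"evidence": list(by_url.values()), "conflicts": conflicts}
```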

04

Checkpoint + diff

Every AI edit creates a versioned checkpoint, so a bad merge can be rolled back and paragraph-level provenance still shows which search or transcript turn changed the text.
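The checkpoint behavior might look like this; MemoHistory is a hypothetical container, and provenance is just a free-form label for the search or transcript turn that produced the edit:

```python
import copy

class MemoHistory:
    """Versioned checkpoints so a bad merge can be rolled back."""

    def __init__(self, memo):
        self.versions = [{"memo": copy.deepcopy(memo), "provenance": "initial"}]

    def checkpoint(self, memo, provenance):
        """Record a version plus the search or transcript turn that changed it."""
        self.versions.append({"memo": copy.deepcopy(memo), "provenance": provenance})

    def rollback(self):
        """Discard the latest edit and return the prior draft."""
        if len(self.versions) > 1:
            self.versions.pop()
        return copy.deepcopy(self.versions[-1]["memo"])
```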

mueee.qbitos.ai handoff

The next product step is to let the live captions desk emit a portable research-session bundle for mueee.qbitos.ai: current draft, source cards, citations, conflicts, and model feedback. That supports prompt chaining for outline → draft → review, plus evaluator-optimizer loops when a section needs another search pass.
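The bundle could be as simple as one JSON document; every field name below is illustrative, since the real schema is not fixed yet:

```python
import json

def session_bundle(draft, source_cards, citations, conflicts, model_feedback):
    """Assemble the portable research-session payload."""
    return {
        "version": 1,
        "draft": draft,
        "sources": source_cards,
        "citations": citations,
        "conflicts": conflicts,
        "model_feedback": model_feedback,
    }

def to_json(bundle):
    """Stable serialization so downstream diffs stay meaningful."""
    return json.dumps(bundle, indent=2, sort_keys=True)
```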

Interpretability & data

Explorable explainers and HCI research are useful references when you label caption clusters or explain why a theme fired. External links open in a new tab.

Working paper from captions

Below is a stub digest: the browser groups seed caption lines by theme. When the relay writes JSON (captions + chat turns), replace the script source with that payload and re-render the same sections — abstract, claims, and catalog cross-check.

Abstract

Loading caption-derived summary…

Claims under review

Catalog cross-check

Map each theme to symbols present in the Erika directory (metals: GC=F, SI=F; benchmarks: ES=F, QQQ; majors: BTC-USD). Coverage rows on the Directory tab remain the ground truth for what is actually in catalog.json.
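That mapping can be expressed directly, using only the symbols named above; the coverage check takes the catalog's symbol set as input rather than re-reading catalog.json:

```python
# Priority symbols named on this page; catalog.json stays the ground truth.
THEME_UNIVERSE = {
    "metals": ["GC=F", "SI=F"],
    "benchmarks": ["ES=F", "QQQ"],
    "majors": ["BTC-USD"],
}

def cross_check(theme, catalog_symbols):
    """Split a theme's mapped symbols into covered vs missing-from-catalog."""
    mapped = THEME_UNIVERSE.get(theme, [])
    covered = [s for s in mapped if s in catalog_symbols]
    missing = [s for s in mapped if s not in catalog_symbols]
    return covered, missing
```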

Evidence log

Dated deltas for the desk itself — same rhythm as an API changelog.

  1. Relay inventory on desk page

    Live captions route now documents the full Bloomberg `live chat` surface: terminal harness, `run-local-agent.sh`, relay CLI flags, HTTP endpoints, and dashboard controls so GitHub Pages matches the local toolset.

  2. Desk shell: captions + catalog cross-check

    Introduced a dedicated captions route with a working-paper scaffold. Claims list is client-generated from seed caption lines until the relay exposes JSON.

  3. Relay panel shared with home

    Bloomberg live relay card is a single component so the same DOM contract (`#home-bloomberg-live`) works on home and on this desk for deep links.

  4. Market alignment stub

    Priority markets from the Erika catalog (metals, benchmarks, crypto presets) are the default “universe” for tagging caption-derived themes.

External references

Design inspiration only — no affiliation. Agent workflows, recipe cards, Public’s guide and API changelog, and Google PAIR references for explainable research surfaces.