Explore Train world market data.
A GitHub Pages-ready catalog for 56,322 global listings across stocks and ETFs, with searchable metadata, exchange filters, and date-coverage signals.
Static listing intelligence; search and filters live on the Directory tab.
Built from Erika dataset metadata for browsing, filtering, and static profile lookup.
This directory is a static metadata browser for research and educational use. It is not an offer to buy or sell securities, and it does not replace financial, legal, or tax advice.
Coverage profile
Same six dimensions as the grid, normalized to a radar (see ECharts radar).
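The radar normalization can be sketched as min-max scaling each of the six grid dimensions into [0, 1] before handing them to an ECharts radar series. The dimension names below are placeholders, not the site's actual fields:

```python
def normalize_for_radar(values, bounds):
    """Min-max scale raw metrics into [0, 1] for a radar indicator set.

    values: dict of dimension -> raw value
    bounds: dict of dimension -> (min, max) observed across the grid
    """
    out = {}
    for dim, raw in values.items():
        lo, hi = bounds[dim]
        # Degenerate range (all listings identical) collapses to 0.0.
        out[dim] = 0.0 if hi == lo else (raw - lo) / (hi - lo)
    return out

# Hypothetical six dimensions mirroring the coverage grid.
bounds = {"rows": (0, 7000), "span_years": (0, 30), "freshness": (0, 1),
          "fields": (0, 10), "exchanges": (0, 5), "gaps": (0, 1)}
profile = {"rows": 6440, "span_years": 25.7, "freshness": 0.98,
           "fields": 8, "exchanges": 1, "gaps": 0.1}
radar = normalize_for_radar(profile, bounds)
```

Each normalized value then maps onto one radar indicator, so cards with very different raw scales stay visually comparable.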
Trending prediction markets
Train stays static on GitHub Pages; the companion Game Next app hosts the
interactive strip and merged /live (μgrad + Bloomberg).
Product reference:
Coinbase Predictions.
Minors context:
Baller League US.
Illustrative YES mix (demo cards)
Pro basketball · single game
$6,350,233 vol. (24h)
Set PUBLIC_GAME_CONSOLE_URL to link out
MLB · moneyline
Minor league · exhibition ladder
$842,110 vol. (24h)
Set PUBLIC_GAME_CONSOLE_URL to link out
Custom wind-style field inspired by Apache ECharts custom-wind — static ornament only.
Illustrative numbers, sector commentary, and the rest-of-month expandable sector rows are for education only — not trading advice, not live Kalshi/Coinbase data, not a quote, and not an investability signal. Contracts can expire worthless; venues have eligibility and geographic rules.
Set PUBLIC_GAME_CONSOLE_URL at build time for the header and outbound Game links.
/live merges μgrad + Bloomberg with Train context.
Use this page for the live pipeline architecture and the Bloomberg relay panel below. Use the Game app for side-by-side iframes: serve sports-field-ugrad.html from uvspeed web/, set NEXT_PUBLIC_UGRAD_SPORTS_URL (or NEXT_PUBLIC_UGRAD_TOOLS_DECK_URL), run browser-page/server.py (default port 8765), and set NEXT_PUBLIC_BLOOMBERG_CHAT_URL.
HTTPS hosts cannot embed http://127.0.0.1; use a tunnel or
same-origin proxy when needed.
Wire data/catalog.json into the Game trending grid when you are ready to connect cross-repo data.
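Wiring data/catalog.json into a trending grid could look like the sketch below. The fields it reads — symbol, exchange, rows — are assumptions about the catalog schema, not a documented contract:

```python
import json

def top_listings(catalog_path, exchange=None, limit=6):
    """Load the static catalog and pick the best-covered listings for a grid."""
    with open(catalog_path) as f:
        catalog = json.load(f)
    # Accept either a bare list or an object with a "listings" key (assumed shapes).
    items = catalog if isinstance(catalog, list) else catalog.get("listings", [])
    if exchange:
        items = [x for x in items if x.get("exchange") == exchange]
    # Rank by row count (date coverage) descending; missing counts sort last.
    items = sorted(items, key=lambda x: x.get("rows", 0), reverse=True)
    return items[:limit]
```

A consumer in the Game app would call this at build time and render one card per returned entry.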
Companion to the qbitOS Bloomberg tool in bloomberg/live chat/browser-page/: reference stream, then
caption-driven chat when the local Python relay is running.
This site does not start the relay. In a terminal, run the copied command (or python3 server.py from that folder), wait for “Serving http://…”, then
use the buttons. If a new tab does not appear, try Safari or Chrome; the buttons fall back to navigating this tab
when pop-ups are blocked.
Captions and model reactions stream from the relay; this directory site stays static on GitHub Pages.
Quick setup
Drop-in relay next to this static Train shell
Change one working directory, run one command, open one port. Works with the bundled Python forwarder,
the Bloomberg browser page, Granite Tiny over Ollama (or your host), and any tab that can reach 127.0.0.1:8765.
From …/cursor/train, run python3 server.py — it forwards to ../bloomberg/live chat/browser-page (default port 8765). Or cd there and run python3 server.py directly.
HTTPS hosts cannot embed http://127.0.0.1; use localhost over HTTP during dev if you need both in one tab.
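A minimal forwarder in the spirit of the bundled server.py might simply serve the sibling browser-page folder over HTTP. The relative path and port mirror the notes above; treat this as a sketch, not the actual server.py:

```python
import os
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

PORT = 8765  # default port from the setup notes

def resolve_serve_dir(train_dir):
    """From the Train folder, resolve the sibling Bloomberg browser-page folder."""
    return os.path.normpath(
        os.path.join(train_dir, "..", "bloomberg", "live chat", "browser-page"))

def main():
    serve_dir = resolve_serve_dir(os.getcwd())
    handler = partial(SimpleHTTPRequestHandler, directory=serve_dir)
    print(f"Serving http://127.0.0.1:{PORT}/ from {serve_dir}")
    HTTPServer(("127.0.0.1", PORT), handler).serve_forever()

# Run with: main()  — blocks the terminal while serving requests.
```

Binding to 127.0.0.1 keeps the relay loopback-only, which is why an HTTPS host needs a tunnel or same-origin proxy to reach it.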
The Erika lake is updated through 2026-04-30, but the time-series coverage is not uniform across all symbols. Metals are the strongest local histories, so they now get a dedicated top-level block instead of being buried below the listing directory.
These are the strongest, most complete histories visible in the local Erika lake and should sit above the generic listing browse flow.
At close: Apr 28, 2026, 16:59 GMT-7 · Erika snapshot
GC=F · strong · Erika snapshot
6,440 rows · 2000-08 -> 2026-04
Illustrative desk legs — not a tradable quote. TV mini loads when mapped.
Live snapshot · hourly interval
At close: Apr 28, 2026, 16:59 GMT-7 · Erika snapshot
SI=F · strong · Erika snapshot
6,442 rows · 2000-08 -> 2026-04
Illustrative desk legs — not a tradable quote. TV mini loads when mapped.
Live snapshot · hourly interval
At close: Apr 28, 2026, 16:59 GMT-7 · Erika snapshot
PL=F · strong · Erika snapshot
6,468 rows · 1997-10 -> 2026-04
Illustrative desk legs — not a tradable quote. TV mini loads when mapped.
Live snapshot · hourly interval
At close: Apr 28, 2026, 15:59 GMT-7 · Erika snapshot
PA=F · strong · Erika snapshot
6,479 rows · 1998-09 -> 2026-04
Illustrative desk legs — not a tradable quote. TV mini loads when mapped.
Live snapshot · hourly interval
At close: Apr 28, 2026, 15:59 GMT-7 · Erika snapshot
HG=F · strong · Erika snapshot
6,445 rows · 2000-08 -> 2026-04
Illustrative desk legs — not a tradable quote. TV mini loads when mapped.
Live snapshot · hourly interval
At close: Apr 28, 2026, 16:59 GMT-7 · Erika snapshot
ALI=F · medium · Erika snapshot
2,978 rows · 2014-05 -> 2026-04
Illustrative desk legs — not a tradable quote. TV mini loads when mapped.
Live snapshot · hourly interval
The preset exists for major indexes and futures, but the local lake is currently strongest on futures and ETF proxies rather than on a parquet for every benchmark index.
At close: Apr 28, 2026, 12:59 GMT-7 · Erika snapshot
ES=F · medium · Erika snapshot
253 rows · 2025-04 -> 2026-04
Illustrative desk legs — not a tradable quote. TV mini loads when mapped.
Live snapshot · hourly interval
At close: Apr 28, 2026, 14:59 GMT-7 · Erika snapshot
QQQ · good · Erika snapshot
1,255 rows · 2021-04 -> 2026-04
Illustrative desk legs — not a tradable quote. TV mini loads when mapped.
Live snapshot · hourly interval
At close: Apr 30, 2026, 13:59 GMT-7 · Erika snapshot
^GSPC · missing · Erika snapshot
Use preset / refresh ingest · Not present as local parquet
Illustrative desk legs — not a tradable quote. TV mini loads when mapped.
Live snapshot · hourly interval
Crypto is present and current, with Bitcoin available locally and the README preset covering the broader major-crypto basket.
At close: Apr 28, 2026, 15:59 GMT-7 · Erika snapshot
BTC-USD · good · Erika snapshot
1,827 rows · 2021-04 -> 2026-04
Illustrative desk legs — not a tradable quote. TV mini loads when mapped.
Live snapshot · hourly interval
At close: Apr 30, 2026, 13:59 GMT-7 · Erika snapshot
major-crypto · good · Erika snapshot
10-symbol basket · Preset referenced in Erika README
Illustrative desk legs — not a tradable quote. TV mini loads when mapped.
These are the headline equity names from the ai-llm-boom preset. Coverage is uneven locally, so they are useful as a prominence layer but not all are equally complete.
At close: Apr 28, 2026, 13:59 GMT-7 · Erika snapshot
MSFT · good · Erika snapshot
1,255 rows · 2021-04 -> 2026-04
Illustrative desk legs — not a tradable quote. TV mini loads when mapped.
Live snapshot · hourly interval
At close: Apr 28, 2026, 12:59 GMT-7 · Erika snapshot
NVDA · good · Erika snapshot
1,255 rows · 2021-04 -> 2026-04
Illustrative desk legs — not a tradable quote. TV mini loads when mapped.
Live snapshot · hourly interval
At close: Apr 28, 2026, 15:59 GMT-7 · Erika snapshot
AAPL · limited · Erika snapshot
22 rows · 2026-03 -> 2026-04
Illustrative desk legs — not a tradable quote. TV mini loads when mapped.
Live snapshot · hourly interval
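The coverage labels on these cards (strong, good, medium, limited, missing) could be derived from a simple row-count rule like the one below. The thresholds are guesses fitted to the visible cards, not the site's actual logic — and they do not reproduce every card (ALI=F shows "medium" despite 2,978 rows, so the real rule likely also weighs history span):

```python
def coverage_label(rows):
    """Map a local parquet row count to a card label (illustrative thresholds)."""
    if rows == 0:
        return "missing"   # e.g. ^GSPC: not present as local parquet
    if rows >= 5000:
        return "strong"    # e.g. GC=F with 6,440 rows
    if rows >= 1000:
        return "good"      # e.g. QQQ with 1,255 rows
    if rows >= 100:
        return "medium"    # e.g. ES=F with 253 rows
    return "limited"       # e.g. AAPL with 22 rows
```

A layer like this is what would let the metals block self-promote above the generic listing browse flow as coverage improves.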
Open the sections below for the Databento-style plant view: venue hops, live pipeline stages, relay junctions, bottlenecks, and the hardware stack behind them.
Static ECharts metaphors — gauge ring, geo-style graph, and a 2D take on flight paths (not WebGL 3D here).
| Venue | Benchmarks / flows | Primary PoP | Relay path | Hop budget | Latency budget | Link speed |
|---|---|---|---|---|---|---|
| CME / Aurora futures: Best home for index futures, rates, and metals when futures lead price discovery. | ES, NQ, RTY, YM, GC, SI, HG | Aurora I / CH2 | Aurora edge -> Chicago metro relay -> NY4/NY5 fanout | 1 in-cage; 3-5 metro; 6-9 to NY metro | sub-0.1 ms local; 0.3-0.9 ms Chicago metro; 8-10 ms one-way to NJ | 10/25 GbE capture, 25/100 GbE relay |
| NASDAQ / NYSE / NYSE Arca / OPRA: Use this path for ETF NAV, cash open/close logic, and US equity index arbitrage. | QQQ, SPY, cash equities, ETFs, options fanout | Carteret / Mahwah / Secaucus / NY4 | Venue edge -> New Jersey relay spine -> Chicago and London distribution | 1-2 local; 3-6 metro; 6-10 to Chicago | sub-0.15 ms local; 0.2-1.1 ms NJ metro; 8-10 ms one-way to Chicago | 10/25 GbE edge, 25/100 GbE relay |
| LSE / ICE Europe / Euronext: London is the clean aggregation point for European cash, futures, and energy venues. | FTSE, ICE energy, STOXX-linked flows | LD4 / Basildon / Slough | London metro edge -> LD4 relay -> Frankfurt and NY distribution | 1-2 local; 3-6 metro; 5-7 to Frankfurt | sub-0.15 ms local; 0.3-1.2 ms London metro; 3-5 ms one-way to Frankfurt | 10/25 GbE capture, 25/100 GbE backbone |
| Deutsche Boerse / Eurex: Keep Eurex-leading macro logic in Frankfurt if rates or DAX futures are your trigger feed. | DAX, Euro Stoxx, Bund, Bobl, Schatz | FR2 / FR5 | Frankfurt edge -> Frankfurt relay -> LD4 and NY fanout | 1-2 local; 3-5 metro; 5-7 to London | sub-0.12 ms local; 0.2-0.8 ms metro; 3-5 ms one-way to London | 10/25 GbE edge, 25/100 GbE relay |
| JPX / OSE: APAC strategies should stay regional; sending the first decision loop to the US is too slow. | Nikkei 225, TOPIX, JGB-linked flows | TY3 / Tokyo metro | Tokyo edge -> Tokyo relay -> Singapore and Hong Kong regional fanout | 1-2 local; 3-5 metro; 7-10 regional backbone | sub-0.12 ms local; 0.2-0.9 ms metro; 35-70 ms to SG/HK depending on path | 10/25 GbE edge, 25 GbE regional relay |
| HKEX / SGX: Pair Hong Kong and Singapore to absorb regional outages and keep APAC fanout tight. | Hang Seng, CNH, SGX index and derivatives flows | HK1 / HK3 / SG1 | Local metro edge -> Hong Kong or Singapore relay -> Tokyo and London distribution | 1-2 local; 4-8 regional; 8-12 inter-region | sub-0.15 ms local; 0.3-1.0 ms metro; 30-40 ms HK<->SG | 10/25 GbE edge, 25/100 GbE relay uplinks |
| Stage | Primary path | Redundant path | Data carried | Junction | Likely bottleneck | Tech stack |
|---|---|---|---|---|---|---|
| Venue ingress | Primary exchange handoff / multicast plant / direct cross-connect | Secondary venue handoff or alternate A/B line card / network path | Raw packets, sequence numbers, heartbeats, venue control messages | Exchange demarc -> cage switch -> capture NIC | Cross-connect oversubscription, bursty open/close traffic, sequence gaps at the handoff edge | Cross-connects, L1/L2 switching, UDP multicast or direct TCP/ITCH-style feeds, optical diversity |
| Lossless capture | Hot capture host pinned to the primary feed leg | Parallel hot-capture host or standby NIC path with the same session view | Nanosecond timestamps, packet payloads, gap alerts, drop counters | NIC -> kernel bypass / capture process -> write-ahead queue | RX ring overflow, interrupt storms, NUMA mismatch, disk flush latency during bursts | PTP, hardware timestamping, RSS/IRQ pinning, DPDK/AF_XDP-class capture, NVMe journals |
| Decode and normalize | Venue parser and schema-normalization pipeline | Secondary parser workers fed from mirrored capture or replay queue | Trades, MBO/MBP books, instrument defs, status events, normalized records | Capture queue -> parser workers -> normalized event bus | Single-thread parser saturation, schema edge cases, slow gap-fill replay after reconnect | Feed handlers, schema codecs, DBN-style binary encoding, lock-free queues, replayable logs |
| Gap fill and reconciliation | Real-time sequencer and gap-fill service | Replay path from mirrored capture store or alternate venue session | Recovery requests, retransmits, sequence windows, continuity markers | Normalizer -> sequencer -> recovery service | Recovery storms during venue instability and replay contention against live flow | Sequence tracking, retransmit requests, bounded in-memory buffers, deterministic replay |
| Regional relay mesh | Chicago, New Jersey, London, Frankfurt, Tokyo, Singapore active fanout | Cross-region mirror relay or same-region hot standby | Normalized live stream, partial depth, full depth, trades, metadata side channels | Metro relay spine -> regional backbone -> local fanout node | East-west congestion, route asymmetry, fanout amplification when many clients subscribe at once | 25/100 GbE leaf-spine, ECMP, low-jitter metro circuits, internal pub-sub / stream fabric |
| Hot cache and serving plane | In-memory live cache and session gateway | Replica cache and second gateway pool in the same or adjacent region | Latest books, subscription state, session tokens, entitlement-aware stream state | Relay -> cache -> auth/session gateway -> outbound stream | Cache stampede on reconnect, gateway fanout pressure, entitlement checks in the hot path | In-memory caches, session services, load balancers, service discovery, TLS termination off hot path where possible |
| Historical persistence | Append-only live archive and object-store landing zone | Mirrored object storage / second-region archive | Raw PCAP-class capture, normalized DBN records, catalog metadata, audit trail | Capture + normalized stream -> archive writer -> object store / cold tier | Write amplification, checksum/recompression overhead, slow cold-tier recalls during replay | NVMe staging, object storage, immutable partitions, compression, checksum verification |
| Client delivery | Live streaming endpoint close to the consuming strategy | Secondary endpoint / region plus replay bootstrap path | Subscription responses, normalized frames, heartbeat, backpressure / reconnect control | Gateway -> internet/private link -> strategy node | Last-mile jitter, client-side backpressure, TLS/session churn, slow consumer queues | TCP/WebSocket/streaming APIs, client libraries, backpressure handling, reconnect and resume logic |
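The gap-fill and reconciliation stage above hinges on sequence tracking: every venue message carries a sequence number, and the sequencer flags any missing range for retransmit. A minimal detector — a sketch, not any vendor's implementation — might look like this:

```python
def find_gaps(seq_numbers):
    """Return (first_missing, last_missing) ranges absent from an
    in-order stream of venue sequence numbers."""
    gaps = []
    prev = None
    for seq in seq_numbers:
        if prev is not None and seq > prev + 1:
            # Messages prev+1 .. seq-1 never arrived: request retransmit.
            gaps.append((prev + 1, seq - 1))
        prev = seq
    return gaps

# e.g. find_gaps([1, 2, 5, 6, 9]) -> [(3, 4), (7, 8)]
```

In a real plant the detected ranges feed the recovery service, which replays from the mirrored capture store while bounding how much recovery traffic contends with the live flow.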
Raw packet capture, gap detection, feed recording, and immediate handoff to relay.
Protocol normalization, gap-fill, compression, replay, and fanout to multiple consumers.
Feature calc, signal generation, cache-resident books, and order-routing adjacency.
Historical ingest, backtests, full-session replay, and derived dataset generation.
PTP, hardware NIC timestamps, disciplined grandmaster, monotonic local clocks
Required to compare venue ingress, relay transit, and client receive times without lying to yourself.
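With every hop disciplined to the same PTP grandmaster, per-segment one-way latency is just pairwise differences of the hardware timestamps. The hop names below are made up for the sketch:

```python
def segment_latencies_us(stamps_ns):
    """Convert ordered (hop_name, timestamp_ns) pairs into per-segment
    one-way latencies in microseconds. Only meaningful when all clocks
    are disciplined to the same PTP grandmaster."""
    out = []
    for (a, t_a), (b, t_b) in zip(stamps_ns, stamps_ns[1:]):
        out.append((f"{a}->{b}", (t_b - t_a) / 1_000))
    return out

# Hypothetical capture: venue ingress -> metro relay -> client receive.
stamps = [("venue_ingress", 0), ("metro_relay", 450_000), ("client_rx", 9_450_000)]
segs = segment_latencies_us(stamps)
```

Without shared time discipline the same subtraction silently mixes clock offset into the latency figure — which is the "lying to yourself" failure mode.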
Diverse cross-connects, low-latency switches, ECMP, 25/100 GbE spine, metro wavelength diversity
This is where primary/redundant paths stop being theoretical and become physically separate.
Kernel-bypass capture, pinned cores, lock-free queues, feed handlers, sequencers, gap detectors
The first place burst traffic and venue pathologies show up.
Binary schema codecs, replayable logs, in-memory buses, shard-aware parser workers
Normalizing many venue dialects without creating a single chokepoint is the core systems problem.
NVMe journals, immutable partitions, object storage, compression, checksum validation
Live and historical paths should share truth, not share contention.
Session gateways, load balancers, auth cache, streaming APIs, client resume/backpressure logic
Most visible outages come from connect/reconnect pressure rather than raw market data loss.
These remain static reference profiles with better date coverage and stronger identity fields. The timezone-aware majors page now lives separately under Featured.
KRX:000240
Consumer Discretionary · company date included · IPO date included
SZSE:000078
Health Care · company date included · IPO date included
HKEX:01199
No sector/category metadata · company date included · IPO date included
HKEX:01208
No sector/category metadata · company date included
HKEX:01364
No sector/category metadata · company date included
HKEX:01508
No sector/category metadata · company date missing · IPO date included