Alexios Bluff Mara × Illinois State University
Research Collaboration · Cardinal & Code
Project · Cortex Gemma 4 Good · Health & Sciences v0.1.0 · Apache-2.0

Watch your brain
respond to any video.

A multimodal brain-response analysis system, built on Meta's TRIBE v2 brain foundation model and Google's Gemma 4. Upload a clip — get a 3D cortical-activation map plus four parallel narrations from four very different readers (an ISU freshman, a WBEZ science reporter, a Northwestern neurologist, and a Google ML scientist).

The Brain Cinema — in one paragraph

Picture a movie theatre. Your brain is the audience: 20,484 people in 20,484 assigned seats, each responsible for a specific job — seeing faces, recognising voices, feeling suspense, processing language. The movie is whatever you upload. TRIBE v2, Meta's brain foundation model, is the high-speed sensor system in every seat — twice per second, it predicts how excited each audience member is going to get, three to five seconds before their reaction visibly peaks. Gemma 4 is the panel of four critics in the back booth: after the screening, all four read the same audience-reaction printout and write their own takes — a chatty freshman, a WBEZ reporter, a Northwestern neurologist, and a Google ML scientist. You see all four side-by-side and pick the voice that sounds like your brain.

"Twenty thousand seats. One movie. Four critics. About three minutes."

Architecture — how it actually runs

   ┌──────────────────────────────────────────────────────────────────┐
   │ Browser / phone                                                  │
   │   ↓ https://seratonin.scylla-betta.ts.net  (Tailscale Funnel)    │
   ├──────────────────────────────────────────────────────────────────┤
   │ Vite dev server  (port 5173)                                     │
   │   ↓ /api/* proxy                                                 │
   ├──────────────────────────────────────────────────────────────────┤
   │ FastAPI backend  (port 8773)                                     │
   │   ├─ TRIBE v2 (PyTorch on RTX 5090, ~6 GB VRAM)                  │
   │   │   → 20,484-vertex BOLD prediction at 2 Hz                    │
   │   │                                                              │
   │   └─ 4× narrate (parallel, in queue)                             │
   │       ↓                                                          │
   ├──────────────────────────────────────────────────────────────────┤
   │ Inference router  (port 8766)                                    │
   │   ├─→ Seratonin Ollama localhost:11434  (Gemma 4 E4B/E2B/26B/31B)│
   │   ├─→ Big Apple Ollama  100.93.240.52:11434  (M4 Max overflow)   │
   │   └─→ OpenRouter free tier  (cloud failover, $0/token)           │
   └──────────────────────────────────────────────────────────────────┘
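
Reading the diagram top to bottom: the browser reaches the Vite dev server over Tailscale Funnel, Vite proxies /api/* to the FastAPI backend, and FastAPI runs TRIBE before queueing the four narrations. A minimal client sketch of that round trip; the route names and JSON fields are hypothetical, only the ports come from the diagram:

    import time
    import requests

    BASE = "http://localhost:5173"  # Vite dev server; /api/* proxies to FastAPI on 8773

    # Hypothetical routes; the real names live in the FastAPI app.
    with open("clip.mp4", "rb") as f:
        scan = requests.post(f"{BASE}/api/scan", files={"video": f}).json()

    # Poll until TRIBE inference and all four narrations finish (~3 minutes).
    while True:
        status = requests.get(f"{BASE}/api/scan/{scan['id']}").json()
        if status["state"] == "done":
            break
        time.sleep(5)

    print(len(status["bold"][0]))        # 20,484 vertices per half-second frame
    print(sorted(status["narrations"]))  # four persona keys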

Two GPUs cooperate via the inference router: the 5090 (Seratonin) handles TRIBE inference and the bulk of narration, while the M4 Max MacBook (Big Apple) takes round-robin overflow when the 5090 is busy. If both are down, the router falls back to OpenRouter's free Gemma-4-26B endpoint (200 req/day, $0/token) so the demo URL never returns a 502.
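
A sketch of that failover order (the retry policy and model tags are assumptions; the Ollama /api/generate and OpenRouter chat-completions calls are the real public APIs):

    import requests

    # Priority order per the paragraph above; a true round-robin would
    # rotate this list between requests.
    OLLAMA_BACKENDS = [
        "http://localhost:11434",      # Seratonin (RTX 5090)
        "http://100.93.240.52:11434",  # Big Apple (M4 Max overflow)
    ]

    def narrate(prompt: str, model: str = "gemma4:26b") -> str:  # assumed tag
        for base in OLLAMA_BACKENDS:
            try:
                r = requests.post(f"{base}/api/generate",
                                  json={"model": model, "prompt": prompt, "stream": False},
                                  timeout=120)
                r.raise_for_status()
                return r.json()["response"]
            except requests.RequestException:
                continue  # host busy or down, try the next one

        # Last resort: OpenRouter's free endpoint, so the URL never 502s.
        r = requests.post("https://openrouter.ai/api/v1/chat/completions",
                          headers={"Authorization": "Bearer $OPENROUTER_KEY"},
                          json={"model": "google/gemma-4-26b:free",  # assumed slug
                                "messages": [{"role": "user", "content": prompt}]},
                          timeout=120)
        r.raise_for_status()
        return r.json()["choices"][0]["message"]["content"]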

The viewer

A WebGL/Three.js scene with per-vertex animation, written by Kimi K2.6 via the Nous Portal during the Mercury sprint. Don't read about it — open the live demo and click around.

Where to go next

  • Live demo: seratonin.scylla-betta.ts.net — running on Seratonin (RTX 5090, Chicago) via Tailscale Funnel.
  • Gallery of past scans: /gallery — every completed scan with all four persona narrations.
  • Source: github.com/AlexiosBluffMara/cortex — Apache-2.0; TRIBE v2 weights ship under CC-BY-NC 4.0 (Meta) and install separately.
  • Not a diagnostic tool. Predictions are population-averaged across 25 NeuroMod subjects, not tuned to any individual's brain. Cortex does not replace fMRI.
Research conducted in collaboration with Illinois State University · Bloomington–Normal, IL · ABM in Chicago, IL.
Cortex v0.1.0 · Apache-2.0