LabLink 2026 Speaker Notes (Morty)
Epsylon LabLink, University of Luxembourg, May 5–6 2026. Five minutes. Read these as a script the morning of, not as slides.
Open (00:00–00:30)
Last November I pitched Omni: four pillars on top of resting-state-as-anchor. OmniData, OmniFlow, OmniModel, and a cognitive benchmark layer. That is now the CORE 2026 proposal ReFrame, same four pillars renamed: OmniData, OmniProcess, OmniModel, CognitiveBenchmark. Substance unchanged.
Studyflow is the core of two of those pillars. OmniProcess is Studyflow scaled up: the same diagrams, with executors mapped to HPC clusters and standard preprocessing pipelines (fMRIPrep, MNE, EEGPrep) expressed as Studyflow nodes. CognitiveBenchmark uses Studyflow to encode every benchmark study as a runnable diagram, so Tier 2 and Tier 3 evaluations are reproducible by construction. In November I showed the schema. Today I show the first piece running.
Why bother (00:30–01:00)
Two standardization steps. Step 1: standardize the data, the Behaverse Data Model, BDM. Step 2: standardize the processes, Studyflow on top of BPMN. November sold Step 1 and gestured at Step 2. Without Studyflow, Step 2 stays wishful thinking. Study protocols remain text and static diagrams.
Ask any of us how a study was run and the honest answer is a PDF, a few Word documents, and tribal knowledge in the head of whoever ran the wave. That works for one lab and one paper. It breaks the moment you want to align datasets across modalities, feed protocols to downstream models, or replay a procedure two years later.
Why BPMN. It is the same notation healthcare and finance use to describe and audit operational processes. Bringing BPMN to research buys the same things for study protocols and analysis pipelines: process inspection, simulation, automated documentation. Studyflow extends BPMN with research-specific nodes: CognitiveTest, Questionnaire, RandomGateway, EligibilityGateway. The language fits the domain.
Demo, part 2: run (01:45–03:30)
Hit the green Run button. Seed 1, bot mode on.
A new tab opens. The standalone Runner, served from run.html. It receives the diagram as a blob URL, so even an unsaved diagram runs. The Runner parses the XML, validates each behaverseTask against the Behaverse manifest, and dispatches the warm-up to Unity in the iframe.
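The handoff itself is small. A sketch of the mechanism (function names are mine, not the Modeler's actual API): wrap the serialized XML in a blob URL and pass it to run.html as a query parameter, so the diagram never has to be saved to disk first.

```javascript
// Modeler side: serialize the current diagram into a Blob URL.
function diagramToBlobUrl(diagramXml) {
  const blob = new Blob([diagramXml], { type: "application/xml" });
  return URL.createObjectURL(blob);
}

// Build the Runner URL, carrying the Blob URL as a query parameter.
function runnerUrl(blobUrl) {
  return `run.html?diagram=${encodeURIComponent(blobUrl)}`;
}

// Runner side: recover the diagram's Blob URL from its own query string,
// then fetch and parse it.
function diagramUrlFrom(search) {
  return new URLSearchParams(search).get("diagram");
}
```

The point of the blob URL is exactly the demo claim: an unsaved diagram is still a runnable artifact, because the Runner only needs a URL it can fetch.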
Bot mode does the keyboard work: auto-fill correct responses, skip instructions, auto-click leftovers, skip focus calibration, end each block after three trials. An XCIT_NB arm is normally ninety seconds; with the bot agent on, ten to fifteen. The behavior is real, the schedule is compressed.
Warm-up runs. Completion event. The gateway fires. With seed 1 it picks one arm. Let it run. End.
Re-run with a different seed. The gateway is real, not scripted. Seeds 1, 2, and 7 land on three different arms: mulberry32, deterministic per seed. Bump the seed, hit Run, watch a different arm. Time permitting, a third run.
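If anyone asks how the seeding works: a sketch, assuming the Runner uses the standard mulberry32 PRNG (the arm names and the pickArm helper here are illustrative, not the Runner's actual code).

```javascript
// mulberry32: tiny deterministic PRNG with one 32-bit state word.
function mulberry32(seed) {
  let a = seed >>> 0;
  return function () {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = a;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // uniform in [0, 1)
  };
}

// Illustrative arm list; the diagram's gateway defines the real one.
const arms = ["Arm_XCIT_NB_01", "Arm_XCIT_NB_02", "Arm_XCIT_NB_03"];

// Same seed, same arm: re-running the diagram replays the allocation.
function pickArm(seed) {
  return arms[Math.floor(mulberry32(seed)() * arms.length)];
}
```

That is the whole trick: the randomness lives in the seed, not in the run, which is why the allocation is replayable.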
Demo, part 3: what just happened (03:30–04:00)
Look at the log panel. Every line is provenance. Parsed N nodes and M edges. Started at Start. Ran scene NB, timeline XCIT_NB_01. Completion at timestamp t. Allocate fired with seed 7, chose Arm_XCIT_NB_03. End.
You will not reconstruct that from a PDF. This is what machine-readable protocol buys in practice: a trace linking each behavioral record back to the exact procedural step that produced it. The same substrate scales up to OmniProcess (preprocessing pipelines as BPMN diagrams) and underwrites CognitiveBenchmark (every benchmark task ships with its diagram, so the evaluation is reproducible by construction).
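If pressed on what a trace entry looks like: a hypothetical shape (field names assumed, not the Runner's actual schema), showing how each behavioral event carries the id of the diagram node that produced it.

```javascript
// Hypothetical provenance record: timestamp, diagram node id, event
// name, and event-specific detail.
function logEntry(node, event, detail) {
  return { at: new Date().toISOString(), node, event, detail };
}

// e.g. the allocation step from the demo log:
const entry = logEntry("Allocate", "gateway.fired", {
  seed: 7,
  chose: "Arm_XCIT_NB_03",
});
```

A flat list of records like this is already the full trace: filter by node id and you get every event a given procedural step produced.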
Roadmap (04:00–04:30)
Next, in rough order.
- Per-task overrides through the Studyflow YAML body, wired into RunTask.Overrides and BotReflection.
- Web-rendered Questionnaire and Instruction modals, so a study does not need a Unity build for every step.
- Data flow between activities, with scores and flags feeding the next gateway condition.
- Parallel gateways.
- Studyflow as a complete authoring environment for OmniProcess: data preprocessing pipelines (fMRIPrep, MNE/EEGPrep, and custom analyses) as BPMN diagrams with the same shape and the same runtime contract.
- Studyflow as the substrate for CognitiveBenchmark: every benchmark task ships with its Studyflow diagram, so the evaluation is reproducible by construction.
- Eventually, a shared benchmark surface comparing humans and agents on the same protocol file.
Ask (04:30–05:00)
Same ask as November, concrete tooling now. Three things. Describe one of your protocols in Studyflow, even an old one. Tell me where the visual language does not fit your work. Bring task implementations beyond the Behaverse battery. Anything we can wrap in a runnable scene becomes a node in someone else’s diagram next year.
Web app: behaverse.org/studyflow-modeler. Source: github.com/behaverse/studyflow-modeler. Find me after the session.
Demo failure modes and recovery
- WebGL fails to load: do not fight it. “Unity is being precious about WebGL today” and walk through the log panel from a screenshot of a prior run. The point is protocol-as-file, not rendering.
- Gateway lands on the same arm twice: own it. “Seeds 1 and 2 happen to be in the same partition, jumping to seed 7.” Do not pretend it is broken.
- Task hangs mid-trial: close the tab, hit Run with a new seed. Lose ten seconds, not the room.
- Localization shows literal {focus_area_title} keys: acknowledge once (“ignore the raw keys, the build is mid-translation”) and narrate over them.
- Run button does nothing: serialization failed. Reload, reopen lablink_demo.studyflow, retry. If still dead, open the Runner directly with the file pre-set via URL parameter.