New analysis suggests ~40% of fMRI signal is non-neural noise - and that's a problem to engineer around, not a reason to abandon the method
A new analysis puts a hard number on a long-standing caveat in brain imaging: roughly 40% of the fMRI BOLD signal appears to come from non-neural sources like head motion, respiration, cardiac pulsation, large-vessel dynamics, and scanner drift. Under the hood, the work partitions variance across datasets and preprocessing pipelines, showing that even widely used denoising stacks (motion regressors, ICA-based cleanup, global signal strategies) leave a substantial non-neural footprint. That aligns with prior concerns about inflated false positives and effect sizes, but the quantification across methods and conditions is what’s notable here.
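For intuition, here is a minimal sketch of the kind of variance partitioning involved: regress each voxel's time series on a nuisance design (motion parameters, physiological traces, drift terms) and compute the fraction of variance the confound model explains. The array shapes, confound count, and random data below are placeholders for illustration, not the paper's actual pipeline or numbers.

```python
# Minimal sketch: estimate the fraction of BOLD variance explained by a
# nuisance model (motion, respiration, cardiac, drift) via least squares.
# All data here are synthetic placeholders, not the study's pipeline.
import numpy as np

rng = np.random.default_rng(0)
n_timepoints, n_voxels, n_confounds = 300, 5000, 12

bold = rng.standard_normal((n_timepoints, n_voxels))           # voxel time series (T x V)
confounds = rng.standard_normal((n_timepoints, n_confounds))   # e.g., 6 motion params + derivatives

# Demean both sides so the variance ratio is well defined.
bold -= bold.mean(axis=0)
confounds -= confounds.mean(axis=0)

# Fit each voxel's time series on the confound matrix.
beta, *_ = np.linalg.lstsq(confounds, bold, rcond=None)
residual = bold - confounds @ beta

# Per-voxel fraction of variance attributable to the confound model.
explained_fraction = 1.0 - residual.var(axis=0) / bold.var(axis=0)
print(f"median confound-explained variance fraction: {np.median(explained_fraction):.2f}")
```

On real data, the confound matrix would come from realignment parameters, physiological recordings, and drift regressors rather than random draws; the arithmetic is the same.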
The bigger picture: this doesn't invalidate fMRI, but it raises the bar for claims resting on small effect sizes, especially resting-state connectivity and ML decoders that can inadvertently learn motion or physiology rather than brain activity. Practically, it argues for faster acquisitions to reduce physiological aliasing, routine physiological monitoring, multi-echo EPI with ICA-based separation of BOLD from non-BOLD components, preregistered preprocessing, and confound-aware validation, e.g., testing whether decoding accuracy persists after motion/respiration scrubbing (a sketch of one such check follows). Worth noting: robust task-evoked responses and well-powered designs remain solid; the implication is about rigor, not retreat. For industry, any neurotech or clinical pipeline built on fMRI should treat noise modeling as a first-class engineering problem, document confound handling end-to-end, and assume that a sizeable slice of “signal” isn't neural unless proven otherwise.
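One cheap confound-aware check is a negative control: train the same classifier on confound summaries alone (say, mean framewise displacement and respiration rate) and compare it to the classifier trained on imaging features. If the confound-only model gets close, the "neural" decoder may be riding on motion or physiology. The data, feature choices, and model below are illustrative assumptions, not the study's protocol.

```python
# Minimal sketch of a confound-only negative control for an fMRI decoder.
# Subjects, labels, and features are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_subjects = 120

labels = rng.integers(0, 2, size=n_subjects)                # e.g., patient vs. control
neural_features = rng.standard_normal((n_subjects, 200))    # e.g., connectivity edges
confound_features = np.column_stack([
    rng.gamma(2.0, 0.1, n_subjects),   # mean framewise displacement (mm), hypothetical
    rng.normal(16, 3, n_subjects),     # respiration rate (breaths/min), hypothetical
])

clf = LogisticRegression(max_iter=1000)
acc_neural = cross_val_score(clf, neural_features, labels, cv=5).mean()
acc_confound = cross_val_score(clf, confound_features, labels, cv=5).mean()

print(f"decoder on imaging features:  {acc_neural:.2f}")
print(f"decoder on confounds only:    {acc_confound:.2f}")
# If the confound-only accuracy approaches the imaging-feature accuracy,
# the reported effect may not be neural.
```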