A neuroscientist wants to train AI on brainwaves—and privacy law may be the biggest obstacle
The next frontier of artificial intelligence training may not be more text or images, but human brainwaves.
In a position paper posted Jan. 17 on the preprint server arXiv, Swiss-based neuroscientist Maël Donoso argues that future “foundation models” — the large, general-purpose systems behind modern chatbots and image generators — should be trained directly on recordings of human brain activity.
The 33-page paper, titled “A New Strategy for Artificial Intelligence: Training Foundation Models Directly on Human Brain Data,” proposes that signals from functional MRI, electroencephalography and even invasive neural electrodes could be used to shape how advanced AI systems learn and reason. Donoso calls the approach “a new strategy for artificial intelligence” meant to go beyond what he describes as “surface-level statistical regularities” learned from text and behavior alone.
He gives the strategy a name: reinforcement learning from human brain (RLHB). A companion idea, chain-of-thought from human brain (CoTHB), would use brain activity patterns as a template for the intermediate “thought steps” inside large models.
The paper is conceptual rather than experimental. It does not announce a brain-trained version of GPT or any commercial system. But it formalizes a direction that has been quietly emerging at the intersection of neuroscience and AI — and does so just as lawmakers and regulators around the world begin to classify neural data as one of the most sensitive forms of personal information.
Donoso, who trained in cognitive and computational neuroscience in Paris and has authored work on decision-making and learning, now runs a small company, Ouroboros Neurotechnologies, in Lausanne, Switzerland. His firm has developed systems to predict brain activity measured by functional MRI using cheaper electroencephalography recordings and neural networks.
A proposal to move beyond text-only learning
In his new paper, Donoso argues that current foundation models have clear limitations in areas such as perception, valuation, action and integration of information. He maps these shortcomings onto corresponding brain systems and suggests that “even limited, carefully targeted” neuroimaging data from humans could be used to adjust models’ internal representations.
Under the RLHB concept, a person would interact with or observe an AI system while their brain activity is recorded. Those signals, after being decoded by auxiliary models, would serve as a kind of reward or penalty signal, nudging the AI’s behavior in directions that the human’s nervous system implicitly “approves.”
In one illustrative scenario, a volunteer might lie in an MRI scanner watching a self-driving car simulator controlled by an AI agent. Moments when the person’s brain activity reflects surprise, alarm or disapproval could be translated into negative feedback for the agent; patterns consistent with relief or confidence could be treated as positive feedback.
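The paper stays at the conceptual level, but the loop it describes can be summarized in a few lines of code. The sketch below is illustrative only: the function names, the placeholder decoder and the simulator interface are assumptions made here for clarity, not code from Donoso's paper or any existing system.

```python
import numpy as np

def decode_affective_response(brain_window: np.ndarray) -> float:
    """Placeholder decoder: collapse a window of neural recording into a
    scalar "approval" signal in [-1, 1]. In the proposal, this role would
    be played by a trained auxiliary model, not a simple statistic."""
    return float(np.tanh(brain_window.mean()))

def rlhb_step(agent, simulator, scanner) -> None:
    """One hypothetical RLHB step: the AI acts, the human observer's brain
    reacts, and the decoded reaction is fed back as a reward, standing in
    for the explicit rating a person would give under RLHF."""
    obs = simulator.observe()
    action = agent.act(obs)          # e.g., the agent steers the simulated car
    simulator.step(action)

    brain_window = scanner.read_window()   # a few seconds of fMRI or EEG
    reward = decode_affective_response(brain_window)

    # Patterns decoded as surprise or alarm yield negative reward;
    # patterns decoded as relief or confidence yield positive reward.
    agent.update(obs, action, reward)
```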
Donoso presents RLHB as an extension of, not a replacement for, existing techniques such as reinforcement learning from human feedback (RLHF), which relies on explicit ratings, rankings and textual critiques. Brain-derived signals, he suggests, could capture rapid, subconscious reactions that may never be written down.
From “chain-of-thought” to brain-aligned reasoning
The second proposal, CoTHB, borrows from a technique widely used in large language models called chain-of-thought prompting, which encourages systems to reason step by step. Instead of supervising those steps only with text, CoTHB would attempt to align a model’s internal activations with the time-resolved activity patterns observed when humans solve the same tasks in brain scanners or under EEG caps.
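In rough terms, that alignment could be expressed as an auxiliary loss that penalizes the distance between a projection of the model's step-by-step activations and the brain activity recorded from a human solving the same task. The sketch below is a simplified illustration; the array shapes and the learned projection are assumptions, not details taken from the paper.

```python
import numpy as np

def cothb_alignment_loss(step_activations: np.ndarray,
                         brain_patterns: np.ndarray,
                         projection: np.ndarray) -> float:
    """Hypothetical CoTHB-style auxiliary loss.

    step_activations: (n_steps, d_model) hidden states for each intermediate
        reasoning step produced by the model.
    brain_patterns:   (n_steps, d_brain) time-resolved activity recorded while
        a human worked through the same task (e.g., fMRI or EEG features).
    projection:       (d_model, d_brain) learned map from model space into
        brain-measurement space.
    """
    predicted = step_activations @ projection
    return float(np.mean((predicted - brain_patterns) ** 2))

# In training, a term like this would be added with a small weight to the
# model's usual objective, so its chain-of-thought activations track the
# recorded human trajectory rather than replacing text supervision outright.
```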
Donoso’s paper arrives as “brain foundation models” are beginning to appear in the scientific literature. Over the past two years, research groups have trained large neural networks directly on tens of thousands of functional MRI scans, magnetoencephalography recordings and brain connectivity graphs, often using the same masked prediction and contrastive learning techniques that underpinned advances in language and vision.
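The masked-prediction recipe carries over from language modeling almost directly: hide part of a recorded time series and train the network to fill it in from the surrounding context. The following sketch illustrates the setup only; the function name, mask value and shapes are illustrative rather than drawn from any particular study.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_brain_series(series: np.ndarray, mask_fraction: float = 0.15):
    """Masked-prediction setup on a neural time series, analogous to masked
    language modeling: hide a fraction of timepoints and ask a model to
    reconstruct them from the unmasked context."""
    n_timepoints = series.shape[0]
    n_masked = max(1, int(mask_fraction * n_timepoints))
    masked_idx = rng.choice(n_timepoints, size=n_masked, replace=False)

    corrupted = series.copy()
    corrupted[masked_idx] = 0.0          # placeholder "mask" value
    targets = series[masked_idx]         # what the model must reconstruct
    return corrupted, masked_idx, targets

# A brain foundation model would be trained to predict `targets` at
# `masked_idx` from `corrupted`, much as text models predict masked tokens,
# typically with a reconstruction loss such as mean squared error.
```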
These brain-centered models are designed to improve diagnosis, prediction and basic neuroscience. Donoso’s proposal points those methods back at AI itself, suggesting that brain data become one of the signals used to train general-purpose systems such as conversational agents and decision-making tools.
Early evidence—and big scaling questions
The idea of using brain signals as reinforcement input is not entirely new. In 2019, a team at Columbia University reported that a robot could learn faster in a sparse-reward environment by incorporating error-related EEG signals recorded from a human observer. A 2023 study found that implicit feedback from EEG could significantly speed up deep reinforcement learning for robot control, performing comparably to explicit human feedback in a simulated 3D environment.
What is new is the attempt to scale such techniques to the level of foundation models — and to name them as a general agenda.
Donoso’s proposal also acknowledges practical constraints. Brain data are scarce and noisy compared with text and images, and high-quality neuroimaging remains expensive and concentrated in major hospitals and research centers. That raises the prospect that any large-scale RLHB effort would draw heavily on data from a narrow slice of the global population — skewed toward wealthy, educated societies. Scientists have long warned that overreliance on such “WEIRD” (Western, educated, industrialized, rich, democratic) samples can bake subtle biases into psychological research.
Any effort to align AI systems using brain-derived signals would have to decide whose brains count — and how to represent people whose neural data are absent.
Neurorights and the coming collision with privacy law
Donoso’s agenda is likely to collide quickly with evolving notions of “neurorights” and neural privacy.
Chile has been a pioneer in this space. In recent years, the country moved to amend its constitution to recognize a set of neurorights, including mental privacy and free will. In 2023, Chile’s Supreme Court ordered U.S.-based neurotech firm Emotiv to delete all brain data it had collected from a former senator, ruling that the data were protected and could not be retained without proper consent. Advocates there have argued that brain data should be treated more like an organ than a conventional digital record.
In the United States, several states have begun to write neural data directly into privacy law. Colorado’s Protect Privacy of Biological Data Act, signed in 2024, defines “neural data” — information derived from the activity of the central or peripheral nervous system — as sensitive biological data, requiring explicit consent for many uses. Montana followed in 2025 with legislation that similarly protects neural data and notes that such information can reveal a person’s emotions, mental health status and cognitive functioning and can be used “to directly manipulate brain activity.”
At the federal level, three U.S. senators — Majority Leader Chuck Schumer of New York, Senate Commerce Committee Chair Maria Cantwell of Washington and Sen. Edward Markey of Massachusetts — wrote to the Federal Trade Commission in 2024 urging it to scrutinize neurotechnology companies’ data practices. Citing a report by the Neurorights Foundation, they warned that brain-computer interface firms were collecting and, in some cases, sharing neurodata under privacy policies that did not clearly spell out how that information would be used or deleted.
“Neurotechnology companies may be collecting and using incredibly sensitive brain data, including information that could reveal people’s health conditions, emotions and thoughts,” the senators wrote, asking the FTC to investigate whether such practices constituted unfair or deceptive acts.
Internationally, UNESCO’s 194 member states have tasked the organization with drafting the first global recommendation on the ethics of neurotechnology, aimed at guiding countries on issues such as mental privacy, autonomy, fairness and benefit-sharing. A final text is expected to be adopted at UNESCO’s General Conference in 2025.
None of these frameworks was written with brain-trained foundation models in mind. Most focus on consumer headsets, medical devices and experimental brain-computer interfaces. But the same questions recur: who owns brain data, under what conditions it can be collected and shared, and whether individuals can meaningfully revoke consent once their neural signatures have been used to train a system.
A blueprint, not a product
For now, Donoso’s proposal remains a blueprint rather than a product roadmap. His paper outlines possible architectures and training protocols but suggests concentrating RLHB and CoTHB on “high‑value” parts of a model’s reasoning and decision-making rather than trying to replace existing training pipelines wholesale.
Whether major AI labs or device makers will adopt that blueprint is an open question. The paper gives them a vocabulary — RLHB, CoTHB, brain‑trained foundation models — at a moment when the technical and regulatory conditions for such projects are beginning to take shape.
It also crystallizes a larger choice facing policymakers and the public. As AI systems become more capable and intertwined with daily life, researchers are looking for deeper signals than clicks and survey responses to keep those systems aligned with human preferences. Donoso’s paper suggests that the richest such signals may lie in the patterns of activity inside people’s skulls.
If that vision is realized, future debates over AI training will not only be about which books or images a model saw, but about whose brainwaves helped teach it how to think.