Preprint Says Federated-Learning Clients Could Remotely Trigger Rowhammer Bit Flips on Servers

A newly updated arXiv preprint claims attackers could remotely trigger Rowhammer-style bit flips on a federated-learning server by manipulating the inputs of some clients, a result that, if confirmed, would extend the security risks of distributed AI systems into server-side hardware faults. But the work is still a preprint, is under review, and has not been independently verified.

The paper, titled “Remote Rowhammer Attack using Adversarial Observations on Federated Learning Clients,” is listed on arXiv under the names Jinsheng Yuan, Yuhang Hao, Weisi Guo, Yun Wu and Chongyan Gu. It was first submitted on May 9, 2025, and the current version, v2, was posted April 21, 2026. The arXiv metadata says it is under review for IEEE Transactions on Dependable and Secure Computing.

In the paper’s abstract, the authors say an attacker does not need backdoor access to the server. Instead, they describe using a reinforcement-learning attacker that learns how to manipulate a federated-learning client’s sensor observations so the client causes “high-frequency repetitive memory updates” on the central server. Those repeated updates, they write, mean “we can remote initiate a rowhammer attack on the server memory.”

The authors say they demonstrated the approach in a large-scale federated-learning automatic speech recognition system that used sparse updates, a common technique for reducing communication costs by sending only part of a model update. In that setup, the abstract says, the adversarial agent reached “around 70% repeated update rate (RUR) in the targeted server model, effectively inducing bit flips on server DRAM.”
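The abstract does not define RUR beyond its name, so the sketch below is an assumption rather than the paper’s actual metric: it treats RUR as the fraction of each round’s sparse-update indices that also appeared in the previous round, so a high RUR would mean the same server-side parameters, and hence the same memory locations, are rewritten round after round.

```python
# Hypothetical sketch of a "repeated update rate" (RUR) over sparse updates.
# Assumption (not taken from the paper): RUR is the fraction of indices in
# each round's sparse update that also appeared in the previous round.

def repeated_update_rate(rounds: list[set[int]]) -> float:
    """Average overlap between consecutive rounds' updated parameter indices."""
    if len(rounds) < 2:
        return 0.0
    overlaps = []
    for prev, cur in zip(rounds, rounds[1:]):
        if cur:
            overlaps.append(len(cur & prev) / len(cur))
    return sum(overlaps) / len(overlaps)

# Example: a client whose sparse updates keep hitting mostly the same indices.
rounds = [{1, 2, 3, 4}, {1, 2, 3, 9}, {1, 2, 3, 4}]
print(repeated_update_rate(rounds))  # → 0.75
```

Under this reading, an adversarial client would raise RUR by steering its observations so that the same coordinates keep being selected for each sparse update, concentrating writes on the same server memory.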

If that result holds up, it would broaden the threat model for federated learning. Security research in that area has largely focused on privacy leaks, poisoned model updates or attacks on the communications path between clients and servers. This paper instead argues that manipulated clients could be used to trigger memory corruption on the server itself, potentially disrupting training and, the authors suggest, even opening a path to privilege escalation.

Rowhammer is a long-studied hardware fault mechanism in DRAM, the memory used in servers and other computers, where repeatedly accessing some memory rows can cause bit flips in adjacent rows. Researchers have studied it since at least 2014, including for its potential to support privilege escalation and other exploits. Federated learning, by contrast, is a distributed training method in which many client devices contribute updates to a central model without sending all raw data back to the server.
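To illustrate the mechanism only, the toy model below simulates disturbance accumulating in rows adjacent to repeatedly activated “aggressor” rows. It is a software caricature, not an attack: real Rowhammer requires cache-bypassing memory accesses to physical DRAM rows at very high rates, which Python code cannot perform.

```python
import random

def hammer(rows, aggressors, flip_threshold, activations, rng):
    """Toy DRAM-disturbance model (illustration only, not a real attack):
    each activation of an aggressor row disturbs its physical neighbors;
    a neighbor that crosses the threshold has one random bit flipped."""
    disturbance = [0] * len(rows)
    for _ in range(activations):
        for a in aggressors:
            for n in (a - 1, a + 1):  # physically adjacent rows
                if 0 <= n < len(rows):
                    disturbance[n] += 1
                    if disturbance[n] == flip_threshold:
                        rows[n] ^= 1 << rng.randrange(8)  # flip a random bit
    return rows

# Double-sided hammering: rows 3 and 5 are the aggressors, so row 4
# between them accrues disturbance twice as fast and is the only row
# to cross the threshold and flip.
rows = hammer([0x00] * 8, aggressors=[3, 5], flip_threshold=50_000,
              activations=30_000, rng=random.Random(0))
print([hex(r) for r in rows])
```

The design mirrors the classic double-sided hammering pattern from the Rowhammer literature: a victim row flanked by two aggressors sees roughly twice the disturbance of any other neighbor, which is why that access pattern flips bits fastest on vulnerable DRAM.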

That combination is what makes the paper notable. The authors’ central claim is not simply that a machine-learning system can be fooled, but that manipulating client-side observations can shape server-side memory activity enough to cross into a hardware fault domain.

Still, the findings should be treated as preliminary. As of April 22, 2026, the claims appear to be documented in the authors’ arXiv preprint and related indexes or mirrors only. The available sourced material shows no independent third-party replication, no vendor advisory and no CVE tied to this specific paper.

That means the paper may point to an important new research direction, but it does not yet establish a confirmed real-world attack against commercial cloud platforms or any specific vendor environment. For now, the claim stands as the authors’ own account in a preprint under review.

Tags: #cybersecurity, #federatedlearning, #rowhammer, #aisecurity