This new AI attack steals models without touching the system


AI systems have long been treated like sealed black boxes, especially in areas like facial recognition and autonomous driving. New research suggests the protection that opacity provides isn’t as solid as assumed.

A KAIST-led team shows that AI systems can be reverse-engineered remotely, without direct intrusion, using emissions that leak during normal operation. Rather than breaking in, the approach listens.

Using a small antenna, the researchers captured faint electromagnetic traces from GPUs and rebuilt how the system was designed. It sounds like a heist trick, but the results hold up, and the security implications are immediate.

How the side channel works

The system, called ModelSpy, collects the electromagnetic output produced while GPUs handle AI workloads. These traces are subtle, yet they follow patterns tied to how the architecture is arranged.

By analyzing those patterns, the team inferred key details, including layer setups and parameter choices. Tests showed core structures could be identified with up to 97.6 percent accuracy.
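
The paper’s own pipeline isn’t public, but the general recipe is recognizable from other side-channel work: turn each raw emission trace into a compact spectral fingerprint, then train a classifier to map fingerprints to architectural choices. The sketch below is purely illustrative; the synthetic traces, the FFT-bin features, and the random-forest classifier are assumptions standing in for whatever ModelSpy actually uses.

```python
# Illustrative sketch only: the real ModelSpy pipeline is not public.
# We fake "EM traces" as noisy signals whose dominant frequencies differ
# by architecture, then classify them from coarse spectral features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
ARCHS = ["resnet-like", "transformer-like", "mobilenet-like"]  # hypothetical labels

def synthetic_trace(arch_idx: int, n: int = 4096) -> np.ndarray:
    """Stand-in for a captured emission trace: a few architecture-dependent
    tones buried in broadband noise."""
    t = np.arange(n)
    tones = [(0.05 + 0.03 * arch_idx) * n, (0.12 + 0.02 * arch_idx) * n]
    signal = sum(np.sin(2 * np.pi * f * t / n) for f in tones)
    return signal + rng.normal(scale=2.0, size=n)

def spectral_features(trace: np.ndarray, n_bins: int = 64) -> np.ndarray:
    """Coarse power spectrum: the kind of fingerprint a side-channel
    classifier might learn from."""
    spectrum = np.abs(np.fft.rfft(trace)) ** 2
    bins = np.array_split(spectrum, n_bins)
    return np.array([b.mean() for b in bins])

# Build a labeled dataset of (fingerprint, architecture) pairs and classify.
X, y = [], []
for arch_idx in range(len(ARCHS)):
    for _ in range(200):
        X.append(spectral_features(synthetic_trace(arch_idx)))
        y.append(arch_idx)

X_train, X_test, y_train, y_test = train_test_split(np.array(X), np.array(y), random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

On real captures, the harder problems are the ones this toy skips, such as aligning traces to inference boundaries and separating the target workload from other GPU activity.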

The setup is what makes this unsettling. The antenna fits inside a bag and doesn’t need physical access. It worked from as far as six meters away, even through walls, across multiple GPU types. Computation itself becomes a side channel, exposing the system’s design without a traditional breach.

Why this changes AI security

This pushes AI security into less familiar territory. Most defenses focus on software exploits or network access. ModelSpy targets the physical byproducts of computation instead.

Even isolated systems could leak sensitive information if hardware emissions aren’t controlled. For companies, a model’s architecture is often core intellectual property, which turns this into a direct business risk.

The work frames this as a cyber-physical challenge: defending AI now involves both digital safeguards and the surrounding physical environment, which raises the bar for what protection actually means.

What defenses look like now

The team also outlined ways to reduce the risk, including adding electromagnetic noise and adjusting how computations run so that emission patterns become harder to interpret.
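
The article doesn’t detail those mitigations, so the snippet below only illustrates the general principle behind many side-channel defenses: decorrelate what the hardware does from run to run, so a fixed emission pattern never maps cleanly onto a fixed architecture. Shuffling the order of independent work units and injecting a random amount of dummy computation are generic examples, not the team’s actual countermeasures.

```python
# Generic side-channel hygiene sketch, not the paper's defense.
# Idea: make the activity pattern of one inference look different from the
# next by (1) randomizing the order of independent work units and
# (2) adding a random amount of throwaway computation.
import random
import numpy as np

rng = np.random.default_rng()

def masked_matmul(a: np.ndarray, b: np.ndarray, tile: int = 64) -> np.ndarray:
    """Tiled matrix multiply whose tile schedule is shuffled on every call."""
    out = np.zeros((a.shape[0], b.shape[1]), dtype=a.dtype)
    schedule = [(i, j)
                for i in range(0, a.shape[0], tile)
                for j in range(0, b.shape[1], tile)]
    random.shuffle(schedule)                 # (1) randomized execution order
    for i, j in schedule:
        out[i:i + tile, j:j + tile] = a[i:i + tile, :] @ b[:, j:j + tile]
    # (2) dummy work: a random number of discarded multiplies that smear
    # the timing and emission profile without changing the result.
    for _ in range(rng.integers(1, 4)):
        _ = a[:tile, :] @ b[:, :tile]
    return out

a = rng.standard_normal((256, 256))
b = rng.standard_normal((256, 256))
assert np.allclose(masked_matmul(a, b), a @ b)  # result is unchanged
```

A real defense would have to apply this kind of randomization inside GPU kernels or at the scheduler level, and would pay a throughput cost for the wasted work, which is part of why such changes complicate deployment.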

Those fixes suggest a broader change. Securing AI may require hardware-level adjustments, not just software updates, which complicates deployment for industries already locked into existing systems.

The research earned recognition at a major security conference, signaling how seriously this threat is being taken. The next exposure may not involve breaking in at all, but simply observing what systems unintentionally reveal.
