
Published: May 8, 2026 · Last reviewed: May 1, 2026
When connected medical devices are compromised, the harm is usually framed as "data" or "downtime." For implanted neurostimulators, the harm is something different - and worse. Brainjacking is the unauthorized control of an electronic brain implant. Pull the right wireless levers and you don't just leak information or interrupt therapy. You can change how a person moves, what they feel, and - in extreme cases - who they are.
This post is a working brief for medical-device manufacturers, security teams, and regulatory leads who own neurotech products. We cover where the term came from, the attack vectors that matter in real DBS, SCS, and BCI hardware, the clinical consequences a reviewer will (rightly) expect you to address, and what we ship in submissions to convince FDA the device is defensible.
Origin of the term
"Brainjacking" was coined by Laurie Pycroft and colleagues at the University of Oxford's Functional Neurosurgery group in a 2016 paper published in World Neurosurgery titled "Brainjacking: Implant Security Issues in Invasive Neuromodulation." The team included neurosurgeons Tipu Aziz and Alex Green, who actually implant deep brain stimulation (DBS) systems clinically.
That's important context: this wasn't security researchers theorizing about devices they had never touched. It was the surgeons themselves raising the alarm about devices they put in patients' heads every week. The term is now established in both the security and neurosurgery literature - when you say "brainjacking" on stage or in a submission, you are using a term-of-art the field recognizes.
What brainjacking actually means
Brainjacking is the unauthorized control of an electronic brain implant. The most common target is the deep brain stimulation (DBS) system, which has three components:
- Electrodes implanted deep in the brain
- A lead wire running under the skin
- An implanted pulse generator (IPG) that contains the battery, processor, and wireless antenna for clinician programming
Functionally it is a pacemaker for the brain. The wireless programming interface is the attack surface. Clinicians need to adjust stimulation parameters non-invasively (otherwise every adjustment would require surgery), so the IPG accepts commands over a wireless link. If that link is unauthenticated, weakly authenticated, or unencrypted, an attacker within range can do what the clinician does.
The same architectural pattern - implanted stimulator + wireless programmer + cloud back-end - now extends to spinal cord stimulators (SCS), vagus nerve stimulators (VNS), responsive neurostimulators (RNS) for epilepsy, and the new generation of brain-computer interfaces (BCIs) with continuous neural recording.
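To make the attack surface concrete, here is a minimal sketch of what "authenticated" has to mean at that wireless link: a fresh nonce per session, a monotonic counter, and a MAC binding both to the command. Everything here - class names, frame layout, key handling - is our illustration, not any vendor's actual protocol, and real enforcement lives in IPG firmware, not Python.

```python
import hashlib
import hmac
import secrets

DEVICE_KEY = secrets.token_bytes(32)   # provisioned at manufacture (illustrative)

class IPGSession:
    """Toy model of an IPG that refuses unauthenticated or replayed commands."""

    def __init__(self, shared_key: bytes):
        self._key = shared_key
        self._nonce = None
        self._last_counter = -1        # monotonic counter defeats replay

    def challenge(self) -> bytes:
        """IPG issues a fresh random nonce for each session attempt."""
        self._nonce = secrets.token_bytes(16)
        return self._nonce

    def verify(self, counter: int, command: bytes, tag: bytes) -> bool:
        """Accept a command only if its MAC binds key, nonce, and counter."""
        if self._nonce is None or counter <= self._last_counter:
            return False               # no session, or stale/replayed frame
        msg = self._nonce + counter.to_bytes(4, "big") + command
        expected = hmac.new(self._key, msg, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tag):
            return False
        self._last_counter = counter
        return True

def programmer_sign(key: bytes, nonce: bytes, counter: int, command: bytes) -> bytes:
    """What a legitimate programmer holding the key computes per frame."""
    msg = nonce + counter.to_bytes(4, "big") + command
    return hmac.new(key, msg, hashlib.sha256).digest()

# A programmer that holds the key can issue a command...
ipg = IPGSession(DEVICE_KEY)
nonce = ipg.challenge()
tag = programmer_sign(DEVICE_KEY, nonce, 1, b"SET amplitude=2.0mA")
assert ipg.verify(1, b"SET amplitude=2.0mA", tag)

# ...but replaying the same signed frame is rejected.
assert not ipg.verify(1, b"SET amplitude=2.0mA", tag)
```

An unauthenticated link is this sketch with `verify` replaced by `return True` - which is exactly why "an attacker within range can do what the clinician does."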
The specific attack vectors Pycroft catalogued
This is where it gets concrete. Pycroft's paper laid out a taxonomy of attacks against DBS that still maps directly to the threat models we build for neurotech clients today.
1. Battery drain
Force the IPG into continuous wake or transmit cycles until the battery dies. The device stops delivering therapy. For a Parkinson's patient, motor symptoms return. For an epilepsy RNS patient, seizure protection disappears. Replacement requires surgery.
2. Overcharge stimulation
Push voltage or current beyond safe parameters. This can cause tissue damage, painful sensations, or compulsive behavioral effects, depending on the brain region targeted.
3. Voltage and current manipulation
Subtler than overcharge: shifting parameters just outside the therapeutic range can degrade outcomes without obvious failure, making the attack harder to detect than a hard fault.
4. Frequency and pulse-width changes
These are the parameters clinicians actually titrate for therapeutic effect. Modifying them changes how the brain region responds. In DBS for Parkinson's, the difference between effective tremor suppression and a non-functional patient is often a few Hz.
5. Electrode-contact alteration
Modern DBS leads have multiple contacts. Switching which contact is active can move stimulation from the intended target into adjacent brain tissue. The clinical effects depend entirely on what's nearby - which is what makes this category dangerous.
6. Data theft
Neural recordings and device telemetry are themselves sensitive. Some next-generation devices record neural activity continuously, generating a stream that has both clinical value and obvious privacy implications.
What the clinical consequences look like
This is the part that makes audiences sit up - and the part that belongs in your hazard analysis.
- Induced pain. A spinal cord stimulator is implanted to block pain. Modify the parameters and you can deliver pain instead. Same hardware, opposite outcome.
- Motor inhibition. A Parkinson's patient relies on DBS to function. Disable it or shift parameters and they may lose the ability to move, speak clearly, or maintain balance.
- Impulse-control disruption. DBS in certain targets affects impulse regulation. There are documented clinical cases (non-malicious, from improper programming) of patients developing pathological gambling, hypersexuality, or compulsive behaviors after stimulation changes. An attacker could induce these deliberately.
- Affect and emotion modulation. DBS is used to treat severe depression and OCD. The same mechanism that can lift depression can deepen it. The same regions that regulate fear and anxiety can be driven the other direction.
- Reward-pathway exploitation. This is the most unsettling category Pycroft raised. Stimulating reward circuits during specific behaviors could, in principle, reinforce those behaviors. It is harder to achieve and requires sophisticated targeting, but it is not science fiction. It is pharmacology with a wireless interface.
- Closed-loop system manipulation. For responsive neurostimulation in epilepsy (e.g., NeuroPace RNS), the device decides when to deliver therapy based on detected neural patterns. Spoof the input and you either trigger unnecessary stimulation or block stimulation when a real seizure is starting.
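The closed-loop case is easy to demonstrate on the bench. The toy model below uses line length, a common seizure-detection feature, as a stand-in detector; the feature choice, threshold, and signal values are illustrative and are not NeuroPace's actual algorithm. Flattening the sensed window suppresses therapy exactly when it is needed.

```python
# Toy closed-loop model: a responsive stimulator fires when the detected
# line-length of a sensed window crosses a threshold. Numbers are invented.

def line_length(window):
    """Sum of absolute sample-to-sample differences - a cheap activity feature."""
    return sum(abs(b - a) for a, b in zip(window, window[1:]))

def responsive_stim(window, threshold=50.0):
    """Return True if the device would deliver therapy for this window."""
    return line_length(window) >= threshold

seizure_like = [0, 20, -20, 25, -25, 30, -30]   # high line-length
assert responsive_stim(seizure_like)            # therapy delivered

# An attacker who can overwrite the sensed window with a flat signal
# suppresses therapy at the moment a real seizure is starting.
spoofed = [0.0] * 7
assert not responsive_stim(spoofed)
```

The same spoof run in the other direction (injecting a high-activity window) triggers unnecessary stimulation - which is why sensed inputs need integrity protection, not just the programming link.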
Why neurotech is uniquely hard
Pycroft and the subsequent literature made a few points that deserve to be stated plainly to any executive sponsor or regulatory lead:
- The brain is uniquely consequential. Cardiac implants can kill you. Neuro implants can do that, and they can change who you are. The category of harm is broader than any other connected device.
- The patient often can't tell. A pacemaker patient feels arrhythmia. A DBS patient experiencing subtle parameter drift may just feel like their disease is progressing. Brainjacking can hide as clinical decline.
- The clinician often can't tell either. If telemetry has been spoofed, the clinician sees what the attacker wants them to see. The clinician adjusts based on bad data and the patient is harmed by good-faith medical care.
- Patient self-mitigation is impossible. You can't reboot it. You can't unplug it. You can't even know it's been compromised without specific monitoring infrastructure that mostly doesn't exist yet.
What FDA expects neurotech manufacturers to do about it
Brainjacking is not a hypothetical for FDA reviewers. Section 524B and the February 2026 final premarket cybersecurity guidance both push manufacturers to demonstrate that integrity and availability of therapy are protected with the same rigor as confidentiality. Across the neurotech submissions we have supported, reviewers consistently look for:
- A threat model that names brainjacking explicitly. STRIDE or a comparable methodology, with attacker profiles that include a low-skill clinician-impersonator, a co-located adversary with the programmer protocol reversed, and a remote adversary pivoting through the patient's home gateway or companion app.
- Authenticated, encrypted wireless sessions. Mutual authentication between IPG and programmer, ephemeral keys, replay protection, and a documented key-management lifecycle from manufacture through end-of-life.
- Bounded stimulation parameters enforced in firmware. Even an authenticated command must not be able to drive voltage, current, frequency, pulse-width, or active contact outside therapeutically and biologically safe envelopes for the indication.
- Integrity protection for closed-loop inputs. Sensed neural signals that drive automated stimulation must be authenticated end-to-end so a spoofed input cannot trigger or suppress therapy.
- Tamper-evident telemetry. Programmers and back-end services must be able to detect parameter histories that don't match what the device actually executed - so a clinician can spot a brainjacked device even when the patient cannot.
- Battery-drain resilience. Rate-limiting, session lock-outs, and detection of abusive query patterns belong in the requirements set, not the post-hoc mitigation section.
- A coordinated vulnerability disclosure (CVD) program that researchers can actually use, with safe harbor and a published triage SLA.
- A postmarket monitoring plan that treats unexplained therapy degradation, anomalous programmer activity, and out-of-band parameter changes as security signals - not just clinical noise.
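Of the controls above, the firmware-enforced parameter envelope is the one reviewers probe hardest. A minimal sketch of the enforcement logic follows; the limits are invented for illustration and are not clinical values - real envelopes are indication-specific and enforced in IPG firmware, not Python.

```python
# Illustrative safe envelope per parameter: (min, max). Not clinical values.
SAFE_ENVELOPE = {
    "amplitude_ma":   (0.0, 6.0),
    "frequency_hz":   (2.0, 250.0),
    "pulse_width_us": (30.0, 450.0),
}

class ParameterBoundsError(ValueError):
    pass

def apply_command(params: dict) -> dict:
    """Validate a parameter-set command against the fixed envelope.

    Out-of-bounds requests are rejected rather than clamped, so every
    violation is a loggable security signal instead of a silent adjustment.
    """
    for name, value in params.items():
        lo, hi = SAFE_ENVELOPE[name]   # unknown parameter -> KeyError -> reject
        if not (lo <= value <= hi):
            raise ParameterBoundsError(f"{name}={value} outside [{lo}, {hi}]")
    return params

apply_command({"amplitude_ma": 2.0, "frequency_hz": 130.0})   # therapeutic: OK
try:
    apply_command({"amplitude_ma": 12.0})    # overcharge attempt: rejected
except ParameterBoundsError:
    pass
```

Rejecting instead of clamping is a deliberate choice in this sketch: a clamped command hides the attack, while a rejected one feeds the tamper-evident telemetry and postmarket signals described above.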
For the underlying lifecycle picture, see our deeper write-ups on neurotech cybersecurity risks across neurostimulators, EEG, and BCIs, and on implantable device cybersecurity concerns.
Where this fits in your submission
If your device is a neurostimulator, an active implantable, or a BCI with sensing or stimulation, brainjacking belongs in:
- The security risk assessment as a top-level threat with explicit clinical consequences mapped to ISO 14971 harm categories.
- The threat model as a named attacker objective.
- The architecture views that show where authentication, encryption, and parameter bounding are enforced.
- The V&V plan as adversarial test cases against the wireless interface, not just functional tests.
- The postmarket plan as a monitored signal class, not a footnote.
When all five align, brainjacking stops being a stage-talk anecdote and becomes a documented, defended risk - which is exactly what FDA wants to see.
Frequently asked questions
What is brainjacking in one sentence?
Brainjacking is the unauthorized control of an implanted neurostimulator - typically a DBS or SCS device - achieved by abusing its wireless programming interface to change stimulation parameters, drain the battery, or spoof closed-loop inputs.
Has brainjacking happened to a real patient?
There is no public, confirmed in-the-wild brainjacking case as of this writing. The threat is documented in academic literature (Pycroft et al., 2016 and follow-ons) and demonstrated in lab settings against real device protocols. FDA's premarket guidance treats it as a foreseeable threat that manufacturers must address.
Which devices are most exposed?
Any implanted stimulator with a wireless programming or telemetry interface: deep brain stimulators (DBS), spinal cord stimulators (SCS), vagus nerve stimulators (VNS), responsive neurostimulators (RNS), and the new wave of brain-computer interfaces (BCIs) with implanted recording electrodes.
Isn't a short-range proprietary radio safe enough?
No. "Proprietary" is not a security control. Several inductive, MICS, and BLE-based programmer protocols have been reverse-engineered and shown to be brute-forceable or replayable. Reviewers expect cryptographic authentication and encryption - not obscurity.
Does Section 524B require us to address brainjacking specifically?
Section 524B requires you to address foreseeable cybersecurity risks for "cyber devices" with reasonable assurance of safety and effectiveness. For an implanted neurostimulator, brainjacking is a foreseeable risk by definition. The February 2026 final guidance makes that expectation explicit through threat modeling, secure architecture, and postmarket monitoring requirements.
How do we test for it without harming a patient?
Through a combination of bench testing against the device and programmer in a Faraday-controlled environment, fuzz testing the wireless protocol, protocol reversing of the programmer firmware, and red-team scenarios that exercise the full clinician-to-patient command path. Our medical device penetration testing practice builds these test plans for neurotech submissions.
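As a flavor of what "fuzz testing the wireless protocol" means at the bench, here is a toy harness: random frames thrown at a command parser, with one safety invariant asserted throughout - nothing outside the envelope is ever accepted. The frame layout, command code, and limit are invented for illustration.

```python
import random
import struct

AMPLITUDE_MAX_MA = 6.0   # illustrative envelope bound, not a clinical value

def parse_set_amplitude(frame: bytes):
    """Accept b'SA' + big-endian float32 amplitude; return it, or None if rejected."""
    if len(frame) != 6 or frame[:2] != b"SA":
        return None
    (value,) = struct.unpack(">f", frame[2:])
    # NaN and inf fail this range check, so malformed floats are rejected too.
    if not (0.0 <= value <= AMPLITUDE_MAX_MA):
        return None
    return value

rng = random.Random(0)   # seeded so bench runs are reproducible
for _ in range(10_000):
    frame = bytes(rng.randrange(256) for _ in range(rng.randrange(0, 10)))
    accepted = parse_set_amplitude(frame)
    # Safety invariant: anything the parser accepts is inside the envelope.
    assert accepted is None or 0.0 <= accepted <= AMPLITUDE_MAX_MA
```

Real protocol fuzzing runs over the radio against the actual IPG and programmer, with far smarter mutation - but the invariant being checked is the same.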
Where does brainjacking sit relative to ISO 14971?
It is a security-originated cause of clinical harm. ISO 14971 (with AAMI TIR57 as the security overlay) is the right framework: identify the hazard (e.g., unintended high-frequency stimulation), trace it to a security cause (unauthenticated command), assign severity from the clinical consequence, and document the controls that bring residual risk to acceptable.
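One way to picture that trace is as a single record per hazard, linking the 14971 side to the TIR57 side. The shape and example entry below are our illustration, not language from either standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RiskTrace:
    """Illustrative security-to-safety trace record (ISO 14971 / AAMI TIR57 style)."""
    hazard: str             # clinical hazard, in 14971 language
    security_cause: str     # threat-model cause, on the TIR57 side
    severity: str           # assigned from the clinical consequence, not the exploit
    controls: tuple         # design controls claimed in the submission
    residual_accepted: bool # whether residual risk was judged acceptable

brainjack_trace = RiskTrace(
    hazard="Unintended high-frequency stimulation",
    security_cause="Unauthenticated wireless SET-parameter command",
    severity="Critical",
    controls=(
        "Mutual authentication with replay protection",
        "Firmware-enforced parameter envelope",
        "Tamper-evident parameter history",
    ),
    residual_accepted=True,
)
```

The point of keeping one record per hazard is traceability: a reviewer can follow any clinical harm back to its security cause and forward to the specific controls that address it.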
Talk to a team that has shipped neurotech submissions
If you are designing or maintaining a neurostimulator, BCI, or active implantable, the brainjacking threat model is not optional - and it is not something you want to discover during an FDA AI request. Book a 30-minute strategy session and we'll walk through where your current submission stands and what we'd add before it goes to a reviewer.
