What testing commercial neurotech has taught us, and why these technologies require informed engagement now.
RightsCon is canceled. So is the neurotech booth we were preparing this week for the annual global gathering of digital rights defenders.
A new generation of sleek-looking devices claims to help you focus better, battle depression, or meditate more profoundly, provided you strap one to your head and pair it with your phone. Have you seen the ads? You probably will soon.
We’re deeply concerned that not enough of us are asking questions about such devices, the data they collect, or how rights are protected.
To demonstrate this point, we designed a hands-on (or rather, heads-on) experience for RightsCon attendees to interact with so-called “neurotech” or “non-invasive brain-computer interfaces,” also known as BCIs. This new category of consumer gadgets interacts with different kinds of neural data while promising an array of health and productivity benefits to buyers.
Our plan was to bring these increasingly available and affordable devices to Zambia, where RightsCon was to be held, and have our civil society colleagues and allies put the devices on their heads. We wanted folks to experience, with their own bodies, what a consumer brain device feels like. To read the privacy policy on the spot. To experience how ill-fitting they can be, and how the designs favour homogeneous groups of people, repeating patterns of racial, gender, and cultural bias in tech.
Our goal was to alert everyone that we are on the brink of a massive deployment of devices that aim to capture people’s most intimate data, and that we still know very little about them.


Our argument is simple: these technologies are not coming, they are already here, and they are opaque. They are marketed as “wellness,” “lifestyle,” and “performance” brands without clear privacy policies or any meaningful oversight. They are excused from specialised legislation on the grounds that our general legal frameworks are (supposedly) sufficient to protect our rights. We thought a testing booth was the right format to begin circulating this argument because, in a sense, these devices speak for themselves better than we can. We think first-hand experience is the best way to spark the initial conversations that lead to broader action, following Open Knowledge Foundation’s two-decade tradition of ‘learning by doing’ and ‘learning in community’.
Why this matters
At Open Knowledge, we have been incubating a new Open Frontier Technologies Programme since late last year. In essence, we are exploring how openness can be a governance tool to help guarantee and ‘future-proof’ rights in emerging technologies through practical mechanisms, like open technical standards, modular hardware components, transparent system design, auditability, portability, and user control.
Our Open BCI Stack project started from a growing discomfort and concern that neurotech is rapidly entering consumer markets while remaining among the most opaque technologies to practitioners in our field.
Soon, neurotech devices will be integrated with AI agents and deployed in workplaces, across gaming platforms, and even in schools. Vast investments underpin forecasts that the sector will reach an estimated US$960.8 million by 2034, according to Fortune Business Insights. What could go wrong?
While the earliest academic and civil society voices rallied behind the idea of specialised legislation to regulate neural data and adjacent technologies, the UNESCO Recommendation on the Ethics of Neurotechnology, adopted in November 2025, also pointed to the urgent need to apply existing legal frameworks.
At Open Knowledge, we believe the first step is to increase openness at every layer of the BCI stack and to work within the regulations and practices that already exist: standards for openness and interoperability, privacy and consumer protections, product safety rules, medical device regulation, and competition laws that are already enacted and mandatory. In addition, we need to fully understand how these neurotech products work. We need to remove barriers to openness, accountability, and auditability in order to effectively enable the full defense and exercise of our rights.
We are starting right now, by strapping these machines to our own heads. Our aim at RightsCon was to help advocates and regulators gain a better understanding of these technologies and act now with the instruments they already have. To do that, we have to know the products, test them, and understand them. We bought five. The devices, chargers, cables, apps, accounts, and terms of service were packed and ready in our suitcases before RightsCon was canceled. But our journey as neurotech investigators began even earlier…
What we did
We began by learning more about surgically implanted brain-computer interfaces, which are currently positioned as medical research devices, developed and deployed in labs, with the potential to improve the lives of many. These are the science miracles we read about in the media: rigorous experiments that help nonverbal people communicate or paralysed people move their limbs.
At the other end of the spectrum, we began studying and cataloguing devices that are commercially available without prescription. Most avoid language that would equate them to a medical device. Our purpose was threefold: first, to understand ‘the experience’ from a consumer perspective; second, to examine the context, terms and conditions, and regulatory environment from a digital rights expert perspective; and third, to study the technology and data flows themselves.
All these wellness apps collect medical-grade signals, but their privacy policies are ambiguous. Recordings of electrical activity in the brain flow across borders, and the companies reserve secondary-use rights.
We started developing a risk-tiered ecosystem map of who is building what, for whom, and under what claimed regulatory status. We also worked on a structured account of how companies in this space evade existing regulatory categories: the maneuvers they use to keep neural data out of the sensitive-data bucket, to keep their devices out of the medical-device category, and to route data through whichever jurisdiction asks the fewest questions.
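As an illustration of the structure we have in mind, here is a minimal sketch of what one catalogue entry could look like. The field names, the example vendor, and the tier scale are our assumptions for illustration, not a published Open Knowledge schema:

```python
# One hypothetical entry in the risk-tiered ecosystem map. All fields,
# the example vendor, and the tier scale are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class DeviceEntry:
    vendor: str              # who is building it
    device: str
    modality: str            # e.g. "EEG", "fNIRS"
    target_user: str         # for whom it is marketed
    claimed_category: str    # "wellness", "lifestyle", "medical", ...
    data_jurisdictions: list # where neural data is stored or routed
    risk_tier: int           # 1 (low) to 3 (high), per draft criteria

entry = DeviceEntry(
    vendor="ExampleCo",      # placeholder, not a real company
    device="FocusBand X",    # placeholder product name
    modality="EEG",
    target_user="consumers seeking productivity gains",
    claimed_category="wellness",
    data_jurisdictions=["US", "unspecified cloud region"],
    risk_tier=3,
)
print(entry.device, "claims category:", entry.claimed_category)
```

Even a simple record like this makes the evasion maneuvers visible: the claimed category and the data jurisdictions rarely match the sensitivity of the signal being collected.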
Most importantly, we are making the case for an Open BCI Stack that would integrate openness into device approval criteria, apply existing legal frameworks to real-world deployments, equip regulators and civil society with concrete standards to demand transparency, and increase neurotech literacy so we can continue evaluating and shaping these technologies.
Brain data literacy
Though we have been working with data for over two decades, none of our team members had an advanced understanding of neural data. To prepare our booth, we familiarised ourselves with the jargon displayed on device boxes and in manuals, and learned how neural data is captured, processed, and linked to different cognitive states.
Electroencephalography (EEG) has traditionally required costly equipment and specialised expertise. With growing interest in BCIs, consumer-grade EEG devices have become widely available. EEG measures the brain’s electrical activity, and software estimates states such as calm, focus, drowsiness, or sleep from it. That software typically decomposes the raw signal into frequency bands (a minimal band-power sketch follows the list below):
- Delta (0.5–4 Hz): the slowest brainwaves, often observed during deep sleep or states of unconsciousness.
- Theta (4–8 Hz): present during deep relaxation, meditation, and light sleep, and associated with creative and insightful thinking.
- Alpha (8–12 Hz): occur when we are awake but in a relaxed state. Commonly associated with a calm and focused mind.
- Beta (12–30 Hz): dominant when we are awake, alert, and engaged in cognitive tasks. Sometimes split into low-beta (12–15 Hz) and high-beta (15–30 Hz).
- Gamma (30–100 Hz): the fastest brainwaves, associated with heightened perception, learning, and problem-solving.
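As a rough illustration of how such band estimates are produced, here is a minimal Python sketch using standard signal-processing tools. The sampling rate, band edges, and synthetic test signal are our assumptions; commercial devices use proprietary, undisclosed pipelines:

```python
# Minimal band-power estimate from a raw EEG trace, using the band
# edges listed above. 256 Hz sampling is an assumption, typical for
# consumer headsets.
import numpy as np
from scipy.signal import welch

FS = 256  # sampling rate in Hz
BANDS = {
    "delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12),
    "beta": (12, 30), "gamma": (30, 100),
}

def band_powers(eeg, fs=FS):
    """Sum the power spectral density over each frequency band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)  # 2-second windows
    df = freqs[1] - freqs[0]                        # frequency resolution
    return {
        name: psd[(freqs >= lo) & (freqs < hi)].sum() * df
        for name, (lo, hi) in BANDS.items()
    }

# Synthetic 10-second trace: a dominant 10 Hz (alpha) rhythm plus noise.
t = np.arange(0, 10, 1 / FS)
trace = 20 * np.sin(2 * np.pi * 10 * t) + np.random.normal(0, 5, t.size)
print(band_powers(trace))  # alpha power should dominate
```

Everything downstream of this step, the thresholds, labels, and scores shown in an app, is an interpretive layer chosen by the vendor.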
Some devices go further, attempting to alter those waves through external stimuli. These technologies, alone or combined with other tools such as agentic AI, claim the ability to influence and guide us toward different emotional states.
We want our community to be aware of how these products are packaged, and just how close they are to broad commercialisation without strong governance and oversight. As we know from other realms of data: once a signal can be read, it can be sold; once it can be biased, it can be optimised against the user as well as for them. The question remains: on what terms does the technology read our brains?
What we learned testing the devices ourselves
We would have loved to conduct our field research with the vibrant and diverse RightsCon community. It would have sharpened our analysis in ways that desk research cannot.
The first thing we noticed when testing the devices on our own was the gap between what the product’s sensor can actually do and what the marketing claims. For example, an EEG headset can in fact pick up cortical electrical activity. But what the device maker sells you is the claim that it can measure your “calm” score or “focus” score, and even provide a graph of your “mind state” stitched together from statistical estimates. The signal it captures is real. The interpretation is a product, shaped by subjective marketing choices. In other words, there is a gap between the data the device collects and the inferences the company markets.
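To make that gap concrete, here is a hypothetical sketch of how a “calm” score could be stitched from band-power estimates like the ones above. The alpha/beta ratio, the squashing function, and every constant below are our inventions; vendors do not publish their formulas, which is precisely the problem:

```python
# Hypothetical "calm" score: the alpha/beta power ratio squashed onto a
# 0-100 scale. Every choice here (the ratio, the logistic curve, the
# midpoint) is an arbitrary editorial decision of the kind a vendor
# ships to consumers, undisclosed, as "your mind state".
import math

def calm_score(powers):
    ratio = powers["alpha"] / max(powers["beta"], 1e-9)  # avoid divide-by-zero
    return 100 / (1 + math.exp(-(ratio - 1.0)))          # arbitrary midpoint at ratio = 1

print(round(calm_score({"alpha": 30.0, "beta": 10.0}), 1))  # ~88.1
```

Two vendors with identical hardware could report very different “calm” numbers from the same raw signal, and neither would be obliged to explain why.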
The second thing that stood out to us was the wellness framing. In a medical research lab, functional near-infrared spectroscopy (fNIRS) hardware records cortical blood-oxygen changes in order to support stroke rehabilitation and disability research. But when the same hardware is marketed as a wellness gadget, it can be sold as a “brain-training” tool. With the consumer-grade relabeling, neural data quietly becomes biometric data, and then consumer data, shedding legal protections at each step.
The third thing we noticed was the identity requirements. Every device required registration, often with a phone number. We used an alternate email address, but had to share a real phone number for app pairing and verification. To us, the amount of personal data collected across these devices is alarming, especially combined with brain data. Some products even administer emotional tests before use and attach sensitive labels to your dataset, such as “distracted” or “depressed”. Some create long-term trackers of your intimate conditions and behaviours. Their default is to maximise data capture, not to minimise it. Their consent terms are broad. Across all the devices, the consents we granted bundled core functionality with extensive data permissions, including tracking permissions for the device maker as well as for third-party service providers. They also included implicit consent for secondary uses, such as AI model training. It is the same old, familiar business model of tech, built on extreme data collection and processing. How should we feel when it extends to brain data? One device even explicitly indicated that it may share data with insurance companies or law enforcement authorities.
Our final observation is about whose body the technology can read at all. EEG signal lock varies from person to person. Hair density and texture, scalp shape, skin oils, makeup, and certain neurological conditions can all stop a device from working as intended. Sometimes fitting them is a pain. These ‘universal’ consumer devices are not as universal as they claim.
The questions that emerged
Among the questions we wanted to ask our community but couldn’t: What counts as neural data, when most laws drafted so far had electrical signals in mind? Where do we draw the line between medical and consumer use, when the same hardware crosses that boundary? What about the portability of such data? Interoperability? Auditability? Openness and accountability in a commercial sphere rife with trade secrets and closed code? How is consent meaningful, or even possible, when the user of a wellness device has no mental model of what is actually being measured or inferred? What is the most dystopian use these technologies could be put to? Whose body is the device actually built for? What is their potential at scale?
These are not the questions currently discussed in the field, but they seem more pressing than ever from a broad, multidisciplinary perspective.
Instead, many rights advocates have been debating whether to declare a new human right centered on neural data. That is a discussion worth having in conference rooms. But while those conferences are taking place (or are sometimes canceled), standard commercial practices for BCIs are already taking shape. And the defaults currently appear to be closed firmware, non-portable data, cloud-only functionality, broad secondary-use rights, and a regulatory category chosen by the seller.
Where this leaves us
In the past, our communities were able to turn the tide and win hard-fought compromises that made data and software infrastructures more open and accountable to all. We created the tools, standards, and coalitions that make information and technology systems more transparent, participatory, and accountable. We have a proven ability to turn complex issues into actionable governance tools through openness and a democratic understanding of technology.
Our early research suggests that these devices are not going to disappear. The volume of investment and the number of patents filed suggest that this new direction for personal technology will soon be everywhere.
The appropriate response is to engage now with the legal tools we already have for products already on the market. To make the seams visible. To name the maneuvers. To understand the technology. To insist that brain data, however a company has chosen to label it, is governable under existing laws before any newer rights protection arrives.
Our plan was to deliver this awareness at our booth at RightsCon. But we will do it in many more ways soon. Stay tuned.