Illustration: Benjamin Currie

The movement to protect your mind from brain-computer technologies

Recording memories, reading thoughts, and manipulating what another person sees through a device in their brain may seem like science fiction plots about a distant and troubled future. But a multidisciplinary team of researchers says the first steps toward inventing these technologies have already arrived. Through a concept called “neuro rights,” they want to put in place safeguards for our most precious biological possession: the mind.

Headlining this growing effort today is the NeuroRights Initiative, formed by Columbia University neuroscientist Rafael Yuste. Its proposition is to stay ahead of the tech by convincing governments across the world to create “neuro rights” legal protections, in line with the Universal Declaration of Human Rights, the document adopted by the United Nations in 1948 as the standard for rights that should be universally protected for all people. Neuro rights advocates propose five additions to this standard: the rights to personal identity, free will, mental privacy, equal access to mental augmentation, and protection from algorithmic bias.

It’s a long way from theoretical protections to actual policy, especially when it comes to technology that doesn’t (yet) exist, but the movement has promise. The National Congress of Chile recently approved an amendment to add such protections to the Chilean constitution, making Chile the first country ever to specifically protect neural rights by law. Chile, though, already had a branch of government dealing with health protections, one that reached out to the NeuroRights Initiative and Yuste on its own, seeking their advice. The reasons and methods for protections in other countries, including the U.S., may differ.

Since brain data can now be tracked remotely and Elon Musk is forging ahead with Neuralink brain implants, this is not so much fiction as it is fledgling technology. Neuro rights advocates aim to convince disparate government policy makers, fellow researchers, and the public that it’s vital to stay ahead of the game rather than wait for neurotechnology to become a problem.

The seeds of the movement were planted when Yuste became part of the Obama administration’s BRAIN Initiative, a program that linked a national network of neuroscience labs investigating brain-machine interfaces and related tech (an effort that is now, in part thanks to Yuste, a global endeavor). But the ambitious nature of the initiative gave Yuste pause. While medical codes of ethics and neuroscience guidelines exist in various forms, there is currently no unifying ethical code for neurotechnology.

“In the very first memo we sent to Obama, we highlighted the need for ethical regulation of all this technology,” Yuste said in a video interview. The BRAIN Initiative pooled research endeavors, from cell biology to brain mapping; some of the work included ways to decode or record thoughts or to safely implant microchips into the brain. Recent NIH BRAIN Initiative-funded projects include studies of how the brain plans movements and attempts to read minds using ultrasound.

Many neuro rights advocates are academics, including scientists whose own experiments have convinced them of the need for greater legal protections. In 2017, Yuste hosted informal talks and a workshop with independent volunteers from around the world, a gathering that came to be known as the Morningside Group. They huddled in a cramped classroom, almost shoulder to shoulder, taking turns writing on a chalkboard and sharing ideas (just one building away, Yuste likes to point out, from where Manhattan Project scientists realized, in 1939, that nuclear technology would change the world). Their fields spanned law, ethics, science, and philosophy, and by all accounts, it was exciting. “For three days, essentially, it was a closed meeting, and we came up with a series of ethical guidelines, also with the reflection that this is a human rights problem,” said Yuste.

They grappled with some big questions: How can we ensure that access to cognition-enhancing devices isn’t restricted to the very rich? Who owns the copyright to a recorded dream? What laws should exist to prevent one person from altering the memory of another through a neural implant? How do we maintain mental integrity separate from an implanted device? If someone can read our mind, how do we protect the read-out of our thoughts as our own?

These questions seem incredibly theoretical, but some arose from these researchers’ own experiments. One of Yuste’s projects aimed to understand how groups of neurons work together in the visual cortex, but it incidentally let scientists alter the perception of mice, making them see things that were not actually there. After tracking which neurons were activated when the mice saw vertical bars on a screen, scientists could trigger just those neurons, and the mice, which had been trained to lick a water spout when they saw the bars, displayed the same behavior when the neurons were triggered. The researchers could make the mice “see” the vertical bars even if none were there.
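
In software terms, the logic of the experiment is a record-then-replay loop: note which neurons respond to a stimulus, then activate exactly that set to evoke the percept without the stimulus. Here is a minimal, purely illustrative sketch of that logic; the function names, data shapes, and threshold are assumptions for clarity, not the lab’s actual code.

```python
# Illustrative record-then-replay sketch (hypothetical names and threshold):
# 1) record which neurons respond while the stimulus is on screen,
# 2) later, trigger just those neurons to evoke the same percept.

def record_responsive_neurons(activity_by_neuron, threshold=0.5):
    """Return IDs of neurons whose activity during the stimulus
    exceeds an (assumed) response threshold."""
    return {nid for nid, level in activity_by_neuron.items()
            if level > threshold}

def replay(stimulate, responsive_neurons):
    """Re-activate only the recorded neurons: no bars on the screen,
    but the same cells fire, so the mouse behaves as if it saw them."""
    for nid in responsive_neurons:
        stimulate(nid)

# Toy usage: pretend neurons 2 and 7 fired strongly during the bars.
cells = record_responsive_neurons({1: 0.1, 2: 0.9, 7: 0.8})
replay(lambda nid: print(f"stimulating neuron {nid}"), cells)
```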

Once he realized the implications of being able to change another being’s perception, Yuste was both excited to have learned more about the brain and gravely concerned. He warns that even though this technique doesn’t yet work in human beings, the basic premise of manipulating perception has been demonstrated; all someone has to do is build on it.

Likewise, when scientist Jack Gallant and his team developed a side project to better understand the human visual system, they ended up laying some of the groundwork for “reading,” or “decoding,” certain types of thoughts, such as mental images, using fMRI and an algorithm. In one of their many experiments, human participants watched short silent films while scientists monitored an area of their visual cortices. The information was handed over to an AI that had been trained on YouTube videos but not on the participants’ films. From the brain-scan data, the AI pieced together and reproduced the general scenes that participants saw. While the reproductions were far from perfect, they represented a first step in decoding information from a human mind.
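
One way to picture the decoding step, as a hedged sketch rather than Gallant’s actual pipeline: predict brain responses for a large library of video clips, then reconstruct what a person saw by blending the clips whose predicted responses best match the measured scan. Every name and the similarity measure below are assumptions for illustration.

```python
import numpy as np

def reconstruct(scan, predicted_responses, clips, top_k=30):
    """scan: measured voxel responses, shape (n_voxels,).
    predicted_responses: model-predicted responses for each library
    clip, shape (n_clips, n_voxels).
    clips: pixel frames for each library clip, shape (n_clips, H, W).
    Returns a blurry average of the best-matching clips."""
    # Score each library clip by how well its predicted brain
    # response correlates with the actual scan.
    scores = np.array([np.corrcoef(scan, pred)[0, 1]
                       for pred in predicted_responses])
    best = np.argsort(scores)[-top_k:]
    # Averaging the top matches yields a rough image of the scene.
    return clips[best].mean(axis=0)
```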

Since then, multiple experiments using similar technology have built on this work, and neuro rights advocates believe it’s only a matter of time before such technology could be used in a consumer market: for recording dreams, ideas, or memories, for example. Elon Musk’s company Neuralink has been working on neural implants intended to one day help treat brain disorders, allow people to control external devices with their minds, and even boost intelligence and memory (so far, an early version of Neuralink has allowed a monkey to play a video game with its mind).

Even though each brain operates a bit differently based on individual experiences and quirks, the general organization of the brain is the same across nearly all people. At a recent virtual workshop on neuro rights, scientists repeated the point in presentation after presentation: the ability is there, and all that’s needed is enough brain data from an individual to create a custom model of their brain. And given how rarely people read, or fully understand, informed-consent forms and social media terms of service agreements, it is already easy to sidestep data protections in various ways.
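
To make the “custom model” idea concrete, here is a deliberately simplified sketch, an assumption for illustration rather than any lab’s pipeline: a per-person model can be as plain as a regression from features of what someone experiences to that person’s recorded brain responses, and it sharpens as more of their data accumulates.

```python
import numpy as np
from sklearn.linear_model import Ridge

def fit_personal_model(stimulus_features, brain_responses):
    """stimulus_features: (n_samples, n_features) describing what was
    shown or heard; brain_responses: (n_samples, n_voxels) recorded
    from one individual. Returns a model that predicts this person's
    brain activity for new stimuli."""
    model = Ridge(alpha=1.0)  # regularized to cope with noisy scans
    model.fit(stimulus_features, brain_responses)
    return model

# The privacy point: the only scarce ingredient is per-person data.
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(200, 50)), rng.normal(size=(200, 1000))
model = fit_personal_model(X, Y)
print(model.predict(X[:1]).shape)  # (1, 1000) predicted voxel values
```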

“This is a new frontier of privacy rights, in that the things that are inside of our heads are ours. They’re intimate; we share them when we want to share them. And we don’t want that to be made into a data field for experience,” said Sara Goering, a professor of philosophy and co-lead of the Neuroethics Group at the University of Washington’s Center for Neurotechnology, in a phone interview.

Goering, who studies the effects of brain-machine interface technologies on patients as part of her ethics and philosophy work, also pointed out that while she believes future neurotech could ultimately be liberating for many people, even current brain-machine interface devices don’t always give users enough transparency about how they are working. Brain-machine interfaces that let people move computer cursors with their minds and deep brain stimulation (DBS) devices for Parkinson’s disease and depression are wonderful tools, but according to interviews conducted by Goering and her colleagues, users of this tech sometimes wonder who is truly in control. One man used a DBS device to manage his Parkinson’s and occasionally placed his foot where he did not intend. He had no way of telling whether the device had malfunctioned or he had simply misstepped; often, he would think that the DBS was now more in control of his body than he was.

“So this [device] followed my action and I intended to do something. But did I do that, or did the device help me do it, or did we do it together?” Goering asked. Neuro rights could open conversations about developing useful tech that emphasizes giving the user more direct agency and sense of self, or that provides feedback on when and how a device is working.
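
What that feedback might look like in practice is an open design question. As one hypothetical sketch (nothing here reflects a real DBS interface), a device wrapper could timestamp every stimulation event so a user asking “was that me or the device?” has something concrete to check.

```python
import time

class TransparentStimulator:
    """Hypothetical wrapper that logs each stimulation pulse so the
    user can audit when and how the device acted."""

    def __init__(self, device):
        self.device = device
        self.log = []  # (timestamp, parameters) for every pulse

    def stimulate(self, **params):
        self.log.append((time.time(), params))
        self.device.stimulate(**params)

    def recent_activity(self, window_s=5.0):
        """Return pulses from the last few seconds, i.e. an answer to
        'did the device just fire, or was that me?'"""
        cutoff = time.time() - window_s
        return [entry for entry in self.log if entry[0] >= cutoff]
```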

And since advanced neurotechnology has the potential to help people who are currently disadvantaged or suffering, holding back these technologies is also ethically questionable. This could especially be an issue if devices are designed for people who can’t communicate their consent to using the tech, as with covert consciousness and cognitive motor dissociation, terms used for a range of conditions in which patients appear to be unconscious but can still think and perceive. Currently, technologies like fMRI can help identify people who are conscious while in a vegetative state and, sometimes, their ability to respond to words, but actual communication of the patient’s thoughts is not yet a reality.

“It turns out that 14, 15 percent of people who look unconscious are not, if you test them with imaging or EEG. And families who know that their loved ones are conscious make different kinds of decisions about care,” Joseph Fins, an ethicist, author, and physician at Weill Cornell in New York, said in a video interview. Physicians and neuropsychologists make multiple specialized bedside assessments to determine a patient’s consciousness status (though there have been experimental uses of fMRI or deep-brain stimulation). These patients would be unable to consent to having their minds read, but future neurotechnological advances could help them, and people with aphasias or other communication issues, opening their lives to interacting with the rest of the world. If the concept of neuro rights takes off, policy makers will have to consider the nuances of how such rights would apply in medical settings.

But neuro rights advocates are more concerned with brain-machine interfaces in non-medical, consumer settings, assuming scientists or companies can get the devices vetted and to market. This has already been a challenge with transcranial direct-current stimulation (tDCS) devices, which left the realm of scientific experimentation over the past decade and reached the market, largely via DIY-ers, with little to no regulation; there are now some guidelines on safety and even possible military applications. “And that’s where your rights could come in. The minute that you talk about the brain, you cannot avoid going into human rights, because the brain is what makes us human,” said Yuste.

“We’re still at the very early stages of this,” warned Fins, recalling that unregulated tDCS in untrained hands can potentially cause seizures, emotional dysregulation, and brain tumors. “So the other thing is the risk of quackery. You know, the late 19th century was all about electromagnetism, and it did nothing.” In many ways, neuro rights would be the Frankenstein’s monster of protections: part FDA, part privacy act, part pioneer of legal definitions—like what it actually means to own your sense of self.

What Yuste doesn’t want to happen is for no one to pay attention to the issue until it’s too late to regulate—similar to what’s happened with social media, which has ballooned into a privacy, security, and ethical nightmare with very little oversight.

“Maybe we can be a little bit smarter with this neurotech,” Yuste said, “and from the outset, we can have ethical guidelines that agree with our humanity.”

By Natalie Zarrelli / Gizmodo Writer
(Source: gizmodo.com; https://tinyurl.com/yhv3egnk)