Friday, March 22, 2024

BRAVE NEW WORLD: Brain-Computer Interface.

What is Neuralink for? In the short term, it’s for helping people with paralysis — people like Noland Arbaugh, a 29-year-old who demonstrated in a livestream this week that he can now move a computer cursor using just the power of his mind after becoming the first patient to receive a Neuralink implant.

But that’s not the whole answer.

Of all Elon Musk’s exploits — the Tesla cars, the SpaceX rockets, the Twitter takeover, the plans to colonize Mars — his secretive brain chip company Neuralink may be the most dangerous.

Elon Musk wants to merge humans with AI. How many brains will be damaged along the way?

Neuralink has implanted a chip in its first human brain. But it’s pushing a needlessly risky approach, former employees say.

But helping paralyzed people is not Musk’s end goal. That’s just a step on the way to achieving a much wilder long-term ambition...

“The goal of Neuralink is to go for more electrodes, more bandwidth,” said Watanabe, a former Neuralink employee, “so that this interface can do way more than what other technologies can do.”

After all, Musk has suggested that a seamless merge with machines could enable us to do everything from enhancing our memory to uploading our minds and living forever — staples of Silicon Valley’s transhumanist fantasies.

Which perhaps helps make sense of the company’s dual mission: to “create a generalized brain interface to restore autonomy to those with unmet medical needs today and unlock human potential tomorrow.”

“Neuralink is explicitly aiming at producing general-purpose neural interfaces,” the Munich-based neuroethicist Marcello Ienca told me. “To my knowledge, they are the only company that is currently planning clinical trials for implantable medical neural interfaces while making public statements about future nonmedical applications of neural implants for cognitive enhancement. To create a general-purpose technology, you need to create a seamless interface between humans and computers, enabling enhanced cognitive and sensory abilities. Achieving this vision may indeed require more invasive methods to achieve higher bandwidth and precision.”

Watanabe believes Neuralink prioritized maximizing bandwidth because that serves Musk’s goal of creating a generalized BCI that lets us merge with AI and develop all sorts of new capacities. “That’s what Elon Musk is saying, so that’s what the company has to do,” he said.

The intravascular approach, which places electrodes inside blood vessels rather than directly in brain tissue, didn’t seem like it could deliver as much bandwidth as the more invasive approach. Staying in the blood vessels may be safer, but the downside is that you don’t have access to as many neurons. “That’s the biggest reason they did not go for this approach,” Watanabe said. “It’s rather sad.”

He added that he believed Neuralink was too quick to abandon the minimally invasive approach. “We could have pushed this project forward.”

For Tom Oxley, the CEO of Synchron, this raises a big question. “The question is, does a clash emerge between the short-term goal of patient-oriented clinical health outcomes and the long-term goal of AI symbiosis?” he told me. 

“I think the answer is probably yes.” 


THE HUMAN-AI SYMBIOSIS

“What Neuralink doesn’t seem to be very interested in is that while a more invasive approach might offer advantages in terms of bandwidth, it raises greater ethical and safety concerns,” Ienca told me. 

“At least, I haven’t heard any public statement in which they indicate how they intend to address the greater privacy, safety, and mental integrity risks generated by their approach. This is strange because according to international research ethics guidelines it wouldn’t be ethical to use a more invasive technology if the same performance can be achieved using less invasive methods.”

More invasive methods, by their nature, can do real damage to the brain — as Neuralink’s experiments on animals have shown.

Ethical concerns about Neuralink, as illustrated by its treatment of animals

Some Neuralink employees have come forward to speak on behalf of the pigs and monkeys used in the company’s experiments, saying they suffered and died at higher rates than necessary because the company was rushing and botching surgeries. Musk, they alleged, was pushing the staff to get FDA approval quickly after he’d repeatedly predicted the company would soon start human trials.

  • One example of a grisly error: In 2021, Neuralink implanted 25 out of 60 pigs with devices that were the wrong size. Afterward, the company killed all the affected pigs. Staff told Reuters that the mistake could have been averted if they’d had more time to prepare.
  • Veterinary reports indicate that Neuralink’s monkeys also suffered gruesome fates. In one monkey, a bit of the device “broke off” during implantation in the brain. The monkey scratched and yanked until part of the device was dislodged, and infections took hold. Another monkey developed bleeding in her brain, with the implant leaving parts of her cortex “tattered.” Both animals were euthanized.
  • In 2022, the US Department of Agriculture’s Office of Inspector General launched an investigation into possible animal welfare violations at Neuralink. The company is also facing a probe from the Department of Transportation over worries that implants removed from monkeys’ brains may have been packaged and moved unsafely, potentially exposing people to pathogens.

“Past animal experiments [at Neuralink] revealed serious safety concerns stemming from the product’s invasiveness and rushed, sloppy actions by company employees,” said the Physicians Committee for Responsible Medicine, a nonprofit that opposes animal testing, in a May 2023 statement. “As such, the public should continue to be skeptical of the safety and functionality of any device produced by Neuralink.”

Nevertheless, the FDA cleared the company to begin human trials.

“The company has provided sufficient information to support the approval of its IDE [investigational device exemption] application to begin human trials under the criteria and requirements of the IDE approval,” the FDA said in a statement to Vox, adding, “The agency’s focus for determining approval of an IDE is based on assessing the safety profile for potential subjects, ensuring risks are appropriately minimized and communicated to subjects, and ensuring the potential for benefit, including the value of the knowledge to be gained, outweighs the risk.”

What if Neuralink’s approach works too well?

Beyond what the surgeries will mean for the individuals who get recruited for Neuralink’s trials, there are ethical concerns about what BCI technology means for society more broadly. If high-bandwidth implants of the type Musk is pursuing really do allow unprecedented access to what’s happening in people’s brains, that could make dystopian possibilities more likely. 

For one thing, our brains are the final privacy frontier. They’re the seat of our personal identity and our most intimate thoughts. If those precious three pounds of goo in our craniums aren’t ours to control, what is?

  • In China, the government is already mining data from some workers’ brains by having them wear caps that scan their brainwaves for emotional states. In the US, the military is looking into neurotechnologies to make soldiers more fit for duty — more alert, for instance.
  • And some police departments around the world have been exploring “brain fingerprinting” technology, which analyzes automatic responses that occur in our brains when we encounter stimuli we recognize. (The idea is that this could enable police to interrogate a suspect’s brain; their brain responses would be more negative for faces or phrases they don’t recognize than for faces or phrases they do recognize.) 
  • Brain fingerprinting tech is scientifically questionable, yet India’s police have used it since 2003, Singapore’s police bought it in 2013, and the Florida state police signed a contract to use it in 2014.

Imagine a scenario where your government uses BCIs for surveillance or interrogations. The right to not self-incriminate — enshrined in the US Constitution — could become meaningless in a world where the authorities are empowered to eavesdrop on your mental state without your consent.

Experts also worry that devices like those being built by Neuralink may be vulnerable to hacking. What happens if you’re using one of them and a malicious actor intercepts the Bluetooth connection, changing the signals that go to your brain to make you more depressed, say, or more compliant?

Neuroethicists refer to that as brainjacking. “This is still hypothetical, but the possibility has been demonstrated in proof-of-concept studies,” Ienca told me in 2019. “A hack like this wouldn’t require that much technological sophistication.”

Finally, consider how your psychological continuity or fundamental sense of self could be disrupted by the imposition of a BCI — or by its removal. 

  • In one study, an epileptic woman who’d been given a BCI came to feel such a radical symbiosis with it that, she said, “It became me.” Then the company that implanted the device in her brain went bankrupt and she was forced to have it removed. She cried, saying, “I lost myself.”

To ward off the risk of a hypothetical all-powerful AI in the future, Musk wants to create a symbiosis between your brain and machines. But the symbiosis generates its own very real risks — and they are upon us now.

Update, March 21, 2024, 2:12 pm: This story was first published on October 16, 2023, and has been updated multiple times with details of the first Neuralink implant in a human subject.

Elon Musk's Neuralink implants a brain chip in its first human — at what cost? - Vox
