It’s Good That AI Tech Bros Are Thinking About What Could Go Wrong In The Distant Future, But Why Don’t They Talk About What’s Already Gone Wrong Today?
from the from-techlash-to-ailash dept
Just recently we had Annalee Newitz and Charlie Jane Anders on the Techdirt podcast to discuss their very own podcast mini-series “Silicon Valley v. Science Fiction.” Some of that discussion was about this spreading view in Silicon Valley, often oddly coming from AI’s biggest boosters, that AI is an existential threat to the world, and we need to stop it.
Charlie Jane and Annalee make some really great points about why this view should be taken with a grain of salt, suggesting the “out of control AI that destroys the world” scenario seems about as likely as other science fiction tropes around monsters coming down from the sky to destroy civilization.
The timing of that conversation turned out to be somewhat prophetic, I guess: over the following couple of weeks there was an explosion of public pronouncements from the AI doom and gloom set, and the very ideas we had discussed percolating around Silicon Valley on the podcast suddenly became a front page story.
In our discussion, I pointed out that I did think it was worth noting that the AI doom and gloomers are at least a change from the past, where we famously lived in the “move fast and break things” world, where the idea of thinking through the consequences of new technologies was considered quaint at best, and actively harmful at worst.
But, as the podcast guests noted, the whole discussion seems like a distraction. First, there are actual real world problems today with black box algorithms doing things like enhancing criminal sentences based on unknown inputs. Or, determining whether or not you’ve got a good social credit score in some countries.
There are tremendous, legitimate issues with black box algorithms that could be looked at today, but none of the doom and gloomers seem all that interested in solving any of those.
Second, the doom and gloom scenarios all seem… static? I mean, sure, they all say that no one knows exactly how things will go wrong, and that’s part of the reason they’re urging caution. But they also all seem to go back to Nick Bostrom’s paperclip thought experiment, as if that story has any relevance at all to the real world.
Third, many people are now noticing and calling out that much of the doom and gloom seems to be the same sort of “be scared… but we’re selling the solution” kind of ghost stories we’ve seen in other industries.
So it’s good to see serious pushback on the narrative as well.
A bunch of other AI researchers and ethicists hit back with a response letter that makes some of the points I made above, though much more concretely:
While there are a number of recommendations in the letter that we agree with (and proposed in our 2021 peer-reviewed paper known informally as “Stochastic Parrots”), such as “provenance and watermarking systems to help distinguish real from synthetic” media, these are overshadowed by fearmongering and AI hype, which steers the discourse to the risks of imagined “powerful digital minds” with “human-competitive intelligence.” Those hypothetical risks are the focus of a dangerous ideology called longtermism that ignores the actual harms resulting from the deployment of AI systems today. The letter addresses none of the ongoing harms from these systems, including 1) worker exploitation and massive data theft to create products that profit a handful of entities, 2) the explosion of synthetic media in the world, which both reproduces systems of oppression and endangers our information ecosystem, and 3) the concentration of power in the hands of a few people which exacerbates social inequities.
Others are speaking up about it as well:
“It’s essentially misdirection: bringing everyone’s attention to hypothetical powers and harms of LLMs and proposing a (very vague and ineffective) way of addressing them, instead of looking at the harms here and now and addressing those—for instance, requiring more transparency when it comes to the training data and capabilities of LLMs, or legislation regarding where and when they can be used,” Sasha Luccioni, a Research Scientist and Climate Lead at Hugging Face, told Motherboard.
Arvind Narayanan, an Associate Professor of Computer Science at Princeton, echoed that the open letter was full of AI hype that “makes it harder to tackle real, occurring AI harms.”
“Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” the open letter asks.
Narayanan said these questions are “nonsense” and “ridiculous.” The very far-out questions of whether computers will replace humans and take over human civilization are part of a longtermist mindset that distracts us from current issues. After all, AI is already being integrated into people’s jobs and reducing the need for certain occupations, without being a “nonhuman mind” that will make us “obsolete.”
“I think these are valid long-term concerns, but they’ve been repeatedly strategically deployed to divert attention from present harms—including very real information security and safety risks!” Narayanan tweeted. “Addressing security risks will require collaboration and cooperation. Unfortunately the hype in this letter—the exaggeration of capabilities and existential risk—is likely to lead to models being locked down even more, making it harder to address risks.”
In some ways, this reminds me of some of the privacy debate. After things like the Cambridge Analytica mess, there were all sorts of calls to “do something” regarding user privacy. But so many of the goals focused on actually handing more control over to the big companies that were the problem in the first place, rather than moving the control of the data to the end user.
That is, our response to privacy leaks and messes from the likes of Facebook… was to tell Facebook, hey why don’t you control more of our data, and just be better about it, rather than the actual solution of giving users control over their own data.
So, similarly, here, it seems that these discussions about the “scary” risks of AI are all about regulating the space in a manner that just hands the tools over to a small group of elite “trustworthy” AI titans, who will often talk up the worries and fears of what might happen if the riff raff are ever able to create their own AI. It’s the Facebook situation all over again, where their own fuckups led to calls for regulation that would just give them much greater power, and everyone else less power and control.
The AI landscape is a little different, but there’s a clear pattern here. The AI doom and gloom doesn’t appear to be about fixing existing problems with black box algorithms — just about setting up regulations that hand the space over to a few elite and powerful folks who promise that they, unlike the riff raff, have humanity’s best interests in mind.
Filed Under: ai, doom and gloom, existential risks, longtermism
French Court Smacks Remote Learning Software Company For Pervasive Surveillance Of Students In Their Own Homes
from the in-school-we-learn-how-to-be-spied-on dept
A worldwide pandemic trapped students in their own homes to stop the spread of the coronavirus. They didn’t ask for this. Neither did educators. But educators made the worst of it in far too many cases.
Aptitude tests and other essentials for continued funding (and bragging rights) were now out of their control. Any student sitting at home had access to a wealth of knowledge to buttress what they may have actually retained from remote instruction. . .
NPR Says Enough Is Enough: Quits Twitter
from the time-to-leave dept
The only surprising thing here is that it took this long: NPR has officially announced that it has quit Twitter. This is in response to Elon’s chaotic decision to first label the account “state-affiliated media,” a label that was designed to help users understand if a media organization was actually a dedicated mouthpiece of the government (which NPR is not). Indeed, NPR was initially the example Twitter used of the type of media organization that such a label should not apply to.
After receiving some pushback for this, and revealing his near total lack of intellectual curiosity on the matter, Musk agreed to change the label to “government funded media,” despite that being misleading as well (and again, it seems that such a label would apply just as much to Twitter itself).
NPR had stopped posting to its main Twitter account after the initial label was made, and on Wednesday morning announced that it was leaving for good:
NPR will no longer post fresh content to its 52 official Twitter feeds, becoming the first major news organization to go silent on the social media platform. In explaining its decision, NPR cited Twitter’s decision to first label the network “state-affiliated media,” the same term it uses for propaganda outlets in Russia, China and other autocratic countries.
The reasoning, explained by NPR CEO John Lansing, is about more than just the label, but about how the label came to be:
“At this point I have lost my faith in the decision-making at Twitter,” he says. “I would need some time to understand whether Twitter can be trusted again.”
Meanwhile, Musk’s willingness to just make shit up as he goes along without thinking through the consequences of said labeling was on display in an interview he gave to the BBC, in which he said he’ll probably change the label on NPR, the BBC, and other similarly situated news orgs to “publicly funded.”
It’s unclear how that will win back anyone.
Meanwhile, the NPR Twitter account did wake from its weeklong slumber to basically tell everyone about the many places other than Twitter where you can get NPR content.
So Musk’s whims are now going to drive more people to follow NPR on Facebook, Instagram, and TikTok. Considering that Twitter’s biggest advantage over those other sites was that people used Twitter to follow news, this seems like a pretty massive self-own by Musk.

Filed Under: elon musk, media, social media
Companies: npr, twitter
Clearview Clears 30 Billion Scraped Images, Has Been Accessed More Than 1 Million Times By Law Enforcement
from the bigger-and-worser-than-ever dept
All hail the pariah. If Clearview is only at 30 billion images, it just means social media users haven’t been posting enough.
The little scraper that could has pushed its way to the next plateau of unacceptableness, turning the 10 billion images it had as recently as October 2021 into 30 billion before EOY2024. Plaudits all around, asshats. If nothing else, you’re a cautionary tale of oversharing — albeit one sued, exiled, benchslapped, and fined by pretty much every major nation in Europe.
If that’s the reputation you want, congrats: you’ve earned it. Clearview: shitbird among shitbirds. The worst case scenario for facial recognition tech. Billions of images linked to AI that has been pitched and sold to autocrats, law enforcement, gyms, random ass billionaires, etc. Power irreparably decoupled from responsibility, and continually pitching itself as the darling of whatever it is customers want it to be.
The AI garbage pail kid is at it again, as it freely (and proudly!) admitted [bragged!] to UK journalists. It’s making the numbers that count. And those numbers are the numbers that are bigger than previous numbers, as Katherine Tangalakis-Lippert reports for Insider. (h/t Michael Vario)
A controversial facial recognition database, used by police departments across the nation, was built in part with 30 billion photos the company scraped from Facebook and other social media users without their permission, the company’s CEO recently admitted, creating what critics called a “perpetual police line-up,” even for people who haven’t done anything wrong.
The company, Clearview AI, boasts of its potential for identifying rioters at the January 6 attack on the Capitol, saving children being abused or exploited, and helping exonerate people wrongfully accused of crimes.
“Controversial” is putting it lightly. The company currently owes multiple millions in fines to European Union members, as well as to its self-exiled redheaded stepchild, the recently extremely reactionary United Kingdom. And for all its boasting and “try before you buy” promotions, there’s scant evidence Clearview has been instrumental in law enforcement investigations. Clearview has always said it’s invaluable. Its customers (even those using free trial access) are not so sure.
Clearview is more famous for the size of its (scraped) database than its usefulness in criminal investigations. Nonetheless, it continues to brag about how often cops have wasted time trawling its voluminous offerings, apparently mistaking quantity for quality. This is from the BBC interview prompting the Insider article:
Facial recognition firm Clearview has run nearly a million searches for US police, its founder has told the BBC.
Quantity is not quality, I repeat. And even the quantity being presented as quality by Clearview CEO Hoan Ton-That is questionable.
The figure of a million searches comes from Clearview and has not been confirmed by police.
Hey, nothing sells like salesmanship, says the company that apparently believes all prophecies to be self-fulfilling. And nothing gathers press like a lot of numbers that can’t be easily verified… like 30 billion images or the Million Cop March on Clearview’s facial recognition database.
Rest assured, Clearview continues to scrape anything that’s not locked down. Its image database will always continue to grow. But with it facing legal troubles pretty much everywhere, it’s less likely its law enforcement customer base will show similar year-over-year growth. And since public records requesters are incapable of constantly auditing law enforcement use of Clearview tech (something that should rightfully be handled by the entities that are paid to oversee law enforcement agencies), there will always be a credibility gap Clearview and its vocal CEO can’t leap.
Clearview wants to be viewed as a public good. But all it has to offer is billions of images scraped without permission and unproven claims about being McGruff’s eyes and ears on the digital streets. Assume it’s all bullshit until proven otherwise. Billions in, but nearly nothing out. At best, Clearview is an internet remora.
Filed Under: facial recognition, law enforcement, perpetual police line up, police, privacy
Companies: clearview