28 October 2024

This is A LONG READ: NY Times Gets 230 Wrong Again; Misrepresenting History, Law, And The First Amendment | Mike Masnick writing in Techdirt

 NY Times Gets 230 Wrong Again; Misrepresenting History, Law, And The First Amendment


from the that's-not-how-it-works dept

The NY Times has real difficulty not misrepresenting Section 230. Over and over and over and over and over again it has misrepresented how Section 230 works, even once having to run an astounding correction to an article that had a half-page headline saying Section 230 was at fault.

A day later, it had to run another correction on a different article that also misrepresented Section 230.

You would think with all these mistakes and corrections that the editors at the NY Times might take things a bit more slowly when either a reporter or a columnist submits a piece purportedly about Section 230.

Apparently not.

Julia Angwin has done some amazing reporting on privacy issues in the past and has exposed plenty of legitimately bad behavior by big tech companies. But, unfortunately, she appears to have been sucked into nonsense about Section 230.

She recently wrote a terribly misleading opinion piece, bemoaning social media algorithms and blaming Section 230 for their existence. The piece is problematic and wrong on multiple levels. It’s disappointing that it ever saw the light of day without someone pointing out its many flaws.

A history lesson:

Before we get to the details of the article, let’s take a history lesson on recommendation algorithms, because it seems that many people have very short memories.

The early internet was both great and a mess. It was great because anyone could create anything and communicate with anyone. But it was a mess because that came with a ton of garbage and slop. There were attempts to organize that information and make it useful. Things like Yahoo became popular not because they had a search engine (that came later!) but because they were an attempt to “organize” the internet (Yahoo originally stood for “Yet Another Hierarchical Officious Oracle”, recognizing that there were lots of attempts to “organize” the internet at that time).

After that, searching and search algorithms became a central way of finding stuff online. In its simplest form, search is a recommendation algorithm: it takes the keywords you provide, runs them against an index, and recommends the results it thinks match best. In the early days, Google cracked the code on making that kind of recommendation algorithm work for content across the wider internet.

The whole point of a search recommendation is “the algorithm thinks these are the most relevant bits of content for you.”
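To make the point concrete, here is a minimal, purely illustrative sketch of search-as-recommendation: score each document in a toy in-memory index by how many of the query’s keywords it contains, then “recommend” the best matches. The index, documents, and scoring below are invented for illustration; real search engines weigh far more signals than keyword overlap.

```python
# A toy illustration of search-as-recommendation: score documents in a
# small in-memory index by keyword overlap with the query, then
# "recommend" the highest-scoring ones. (Hypothetical example only.)

from collections import Counter

INDEX = {
    "mushroom-guide": "field guide to edible and poisonous mushrooms",
    "pie-recipes": "strawberry pie recipes for beginners",
    "netflix-history": "a history of the netflix prize and recommendation systems",
}

def recommend(query: str, index: dict[str, str], top_n: int = 2) -> list[str]:
    """Return the document ids whose text shares the most keywords with the query."""
    query_terms = set(query.lower().split())
    scores = Counter()
    for doc_id, text in index.items():
        scores[doc_id] = len(query_terms & set(text.lower().split()))
    # Keep only documents that matched at least one query term.
    return [doc for doc, score in scores.most_common(top_n) if score > 0]

print(recommend("poisonous mushrooms", INDEX))  # ['mushroom-guide']
```

The point of the sketch is simply that a search result is the engine’s opinion of what best matches your query, nothing more.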

The next generation of the internet was content in various silos. Some of those were user-generated silos of content, such as Facebook and YouTube. And some of them were professional content, like Netflix or iTunes. But, once again, it wasn’t long before users felt overwhelmed with the sheer amount of content at their fingertips. Again, they sought out recommendation algorithms to help them find the relevant or “good” content, and to avoid the less relevant “bad” content. Netflix’s algorithm isn’t very different from Google’s recommendation engine. It’s just that, rather than “here’s what’s most relevant for your search keywords,” it’s “here’s what’s most relevant based on your past viewing history.”

Indeed, Netflix somewhat famously perfected the content recommendation algorithm in those years, even offering up a $1 million prize to anyone who could build a better version. Years later, a team of researchers won the award, but Netflix never implemented it, saying that the marginal gains in quality were not worth the expense.
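For contrast, here is an equally minimal sketch of the other flavor of recommendation described above: instead of matching keywords, it suggests titles that co-occur with what a user has already watched. The users, titles, and co-occurrence scoring are hypothetical; Netflix’s actual system is vastly more sophisticated.

```python
# A toy illustration of history-based recommendation: suggest titles that
# co-occur most often with what this user has already watched.
# (Hypothetical data and scoring, for illustration only.)

from collections import Counter

WATCH_HISTORY = {
    "alice": {"space drama", "heist comedy", "baking show"},
    "bob": {"space drama", "heist comedy", "noir thriller"},
    "carol": {"baking show", "noir thriller"},
}

def recommend_for(user: str, histories: dict[str, set[str]], top_n: int = 2) -> list[str]:
    """Recommend unseen titles watched by users with overlapping taste."""
    seen = histories[user]
    scores = Counter()
    for other, titles in histories.items():
        if other == user:
            continue
        overlap = len(seen & titles)  # how similar is this viewer's taste?
        for title in titles - seen:   # only suggest things the user hasn't watched
            scores[title] += overlap
    return [title for title, _ in scores.most_common(top_n)]

print(recommend_for("alice", WATCH_HISTORY))  # ['noir thriller']
```

Structurally it is the same move as search: an opinion about relevance, just keyed to viewing history rather than a query.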

Either way, though, it was clearly established that the benefit and the curse of the larger internet is that, in enabling anyone to create and access content, too much content is created for anyone to deal with. Thus, curation and recommendation are absolutely necessary. And handling both at scale requires some sort of algorithm. Yes, some personal curation is great, but it does not scale well, and the internet is all about scale.

People also seem to forget that recommendation algorithms aren’t just telling you what content they think you’ll want to see. They’re also helping to minimize the content you probably don’t want to see. Search engines choosing which links show up first are also choosing which links they won’t show you. My email is only readable because of the recommendation engines I run against it (more than just a spam filter, I also run algorithms that automatically put emails into different folders based on likely importance and priority).
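As a hypothetical illustration of that kind of inbox triage, a few simple rules can route each message to a folder by likely importance. The domains, folder names, and rules below are made up for illustration and don’t reflect any particular mail provider’s filtering.

```python
# A hypothetical sketch of rule-based inbox triage: route each message to a
# folder by crude importance heuristics. (Illustrative only; the domain and
# keywords are invented.)

def route_email(sender: str, subject: str) -> str:
    """Pick a folder for a message based on simple importance heuristics."""
    subject_lower = subject.lower()
    if sender.endswith("@work.example.com"):
        return "Priority"            # messages from colleagues come first
    if "unsubscribe" in subject_lower or "sale" in subject_lower:
        return "Promotions"          # likely marketing, read later (or never)
    if "invoice" in subject_lower or "receipt" in subject_lower:
        return "Finance"             # keep paperwork findable
    return "Everything else"         # default bucket for the rest

print(route_email("boss@work.example.com", "Q4 planning"))  # Priority
```

Deciding what lands in “Priority” is also deciding what gets buried, which is exactly the dual role described above.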

Algorithms aren’t just a necessary part of making the internet usable today. They’re a key part of improving our experiences.

Yes, sometimes algorithms get things wrong. They could recommend something you don’t want. Or demote something you do. Or maybe they recommend some problematic information. But sometimes people get things wrong too. Part of internet literacy is recognizing that what an algorithm presents to you is just a suggestion, and not wholly outsourcing your brain to the algorithm. If the problem is people outsourcing their brains to the algorithm, it won’t be solved by outlawing algorithms or adding liability to them.

That it is just a suggestion or a recommendation also matters from a legal standpoint, because recommendation algorithms are simply opinions. They are opinions about which content the algorithm thinks is most relevant to you, based on the information it has at that moment.

And opinions are protected free speech under the First Amendment.

If we held anyone liable for opinions or recommendations, we’d have a massive speech problem on our hands. If I go into a bookstore, and the guy behind the counter recommends a book to me that makes me sad, I have no legal recourse, because no law has been broken. If we say that tech company algorithms mean they should be liable for their recommendations, we’ll create a huge mess: spammers will be able to sue if email is filtered to spam. Terrible websites will be able to sue search engines for downranking their nonsense.

On top of that, First Amendment precedent has long been clear that the only way a distributor can be held liable for even a harmful recommendation is if the distributor had actual knowledge of the law-violating nature of that recommendation.

I know I’ve discussed this case before, but it always gets lost in the mix. In Winter v. G.P. Putnam’s Sons, the Ninth Circuit said a publisher was not liable for publishing a mushroom encyclopedia that literally “recommended” people eat poisonous mushrooms. The issue was that the publisher had no way to know that the mushrooms were, in fact, poisonous.

We conclude that the defendants have no duty to investigate the accuracy of the contents of the books it publishes. A publisher may of course assume such a burden, but there is nothing inherent in the role of publisher or the surrounding legal doctrines to suggest that such a duty should be imposed on publishers. Indeed the cases uniformly refuse to impose such a duty. Were we tempted to create this duty, the gentle tug of the First Amendment and the values embodied therein would remind us of the social costs.

It’s not hard to transpose this to the internet. If Google recommends a link that causes someone to poison themselves, precedent says we can hold the author liable, but not the distributor/recommender unless they have actual knowledge of the illegal nature of the content. Absent that, there is nothing to actually sue over.

And, that’s good. Because you can’t demand that anyone recommending anything know with certainty whether or not the content they are recommending is good or bad. That puts way too much of a burden on the recommender, and makes the mere process of recommending anything a legal minefield.

Note that the issue of Section 230 does not come up even once in this history lesson. All that Section 230 does is say that websites and users (that’s important!) are immune from liability for their editorial choices regarding third-party content. That doesn’t change the underlying First Amendment protections for their editorial discretion; it just allows them to get cases tossed out earlier (at the very earliest motion-to-dismiss stage) rather than having to go through expensive discovery and summary judgment, and possibly even all the way to trial.

Section 230 isn’t the issue here:

Now back to Angwin’s piece. She starts out by complaining about Mark Zuckerberg talking up Meta’s supposedly improved algorithms. Then she takes the trite and easy route of dunking on that by pointing out that Facebook is full of AI slop and clickbait. That’s true! But… that’s got nothing to do with legal liability. That simply has to do with… how Facebook works and how you use Facebook? My Facebook feed has no AI slop or clickbait, perhaps because I don’t click on that stuff (and I barely use Facebook). If there were no 230 and Facebook were somehow incentivized to do less algorithmic recommendation, feeds would still be full of nonsense. That’s why the algorithms were created in the first place. Indeed, studies have shown that when you remove algorithms, feeds fill up with more nonsense, because the algorithms no longer filter out the crap.

But Angwin is sure that Section 230 is to blame and thinks that if we change it, it will magically make the algorithms better.

Our legal system is starting to recognize this shift and hold tech giants responsible for the effects of their algorithms — a significant, and even possibly transformative, development that over the next few years could finally force social media platforms to be answerable for the societal consequences of their choices.

Let’s back up and start with the problem. Section 230, a snippet of law embedded in the 1996 Communications Decency Act, was initially intended to protect tech companies from defamation claims related to posts made by users. That protection made sense in the early days of social media, when we largely chose the content we saw, based on whom we “friended” on sites such as Facebook. Since we selected those relationships, it was relatively easy for the companies to argue they should not be blamed if your Uncle Bob insulted your strawberry pie on Instagram.

So, again, this is wrong. From the earliest days of the internet, we always relied on recommendation systems and moderation, as noted above. And “social media” didn’t even come into existence until years after Section 230 was created. So it’s not just wrong to say that Section 230’s protections made sense for early social media; it’s backwards.

Also, it is somewhat misleading to call Section 230 “a snippet of law embedded in the 1996 Communications Decency Act.” Section 230 was an entirely different law, designed to be a replacement for the CDA. It was the Internet Freedom and Family Empowerment Act, put forth by then-Reps. Cox and Wyden as an alternative to the CDA. Then Congress, in its infinite stupidity, took both bills and merged them.

But it was also intended to help protect companies from being sued for recommendations. Indeed, two years ago, Cox and Wyden explained this to the Supreme Court in a case about recommendations:

At the same time, Congress drafted Section 230 in a technology-neutral manner that would enable the provision to apply to subsequently developed methods of presenting and moderating user-generated content. The targeted recommendations at issue in this case are an example of a more contemporary method of content presentation. Those recommendations, according to the parties, involve the display of certain videos based on the output of an algorithm designed and trained to analyze data about users and present content that may be of interest to them. Recommending systems that rely on such algorithms are the direct descendants of the early content curation efforts that Congress had in mind when enacting Section 230. And because Section 230 is agnostic as to the underlying technology used by the online platform, a platform is eligible for immunity under Section 230 for its targeted recommendations to the same extent as any other content presentation or moderation activities.

So the idea that 230 wasn’t meant for recommendation systems is wrong and ahistorical. It’s strange that Angwin would just claim otherwise, without backing up that statement.

Then, Angwin presents a very misleading history of court cases around 230, pointing out cases where Section 230 has been successful in getting bad cases dismissed at an early stage, but in a way that makes it sound like the cases would have succeeded absent 230:

Section 230 now has been used to shield tech from consequences for facilitating deadly drug sales, sexual harassment, illegal arms sales and human trafficking. And in the meantime, the companies grew to be some of the most valuable in the world.

But again, these links misrepresent and misunderstand how Section 230 functions under the umbrella of the First Amendment. None of those cases would have succeeded under the First Amendment, again because the companies had no actual knowledge of the underlying issues, and thus could not be held liable. All Section 230 did was speed up the resolution of those cases, without stopping the plaintiffs from taking legal action against those actually responsible for the harms.

And, similarly, we could point to another list of cases where Section 230 “shielded tech firms from consequences” for things we want them shielded from consequences on, like spam filters, kicking Nazis off your platform, fact-checking vaccine misinformation and election denial disinformation, removing hateful content, and much, much more. Remove 230 and you lose that ability as well. And those two functions are tied together at the hip. You can’t get rid of the protections for the stuff Julia Angwin says is bad without also losing the protections for things we want to protect. At least not without violating the First Amendment.

This is the part that 230 haters refuse to understand. Platforms rely on the immunity from liability that Section 230 gives them to make editorial decisions about all sorts of content. Yet, somehow, they think that taking away Section 230 would magically lead to more removals of “bad” content. That’s the opposite of true. Remove 230 and things like removing hateful content, putting in place spam filters, and stopping medical and election misinfo become a bigger challenge, since those choices will cost much more to defend (even if you’d win on First Amendment grounds years later).

Angwin’s issue (as is the issue with so many Section 230 haters) is that she wants to blame tech companies for harms created by users of those technologies. At its simplest level, Section 230 is just putting the liability on the party actually responsible. Angwin’s mad because she’d rather blame tech companies than the people actually selling drugs, sexually harassing people, selling illegal arms or engaging in human trafficking. And I get the instinct. Big tech companies suck. But pinning liability on them won’t fix that. It’ll just allow them to get out of having important editorial discretion (making everything worse) while simultaneously building up a bigger legal team, making sure competitors can never enter the space.

That’s the underlying issue.

Because if you blame the tech companies, you don’t get less of that underlying activity. You get companies that won’t even look to moderate such content, because doing so would be used in lawsuits against them as a sign of “knowledge.” Or, if the companies do decide to moderate more aggressively, you would see any attempt to speak out about sexual harassment blocked (goodbye to the #MeToo movement… is that what Angwin really wants?).

Changing 230 would make things worse, not better:

From there, Angwin takes the absolutely batshit crazy Third Circuit opinion in Anderson v. TikTok, which explicitly ignored a long list of other cases based on a misreading of a non-binding throwaway line in a Supreme Court ruling and gave no other justification for its holding, and presents it as a good thing?

If the court holds platforms liable for their algorithmic amplifications, it could prompt them to limit the distribution of noxious content such as nonconsensual nude images and dangerous lies intended to incite violence. It could force companies, including TikTok, to ensure they are not algorithmically promoting harmful or discriminatory products. And, to be fair, it could also lead to some overreach in the other direction, with platforms having a greater incentive to censor speech.

Except it won’t do that. Because of the First Amendment, it will do the opposite. The First Amendment requires actual knowledge of the violative actions and content, so this will mean one of two things: companies will either take a much less proactive stance or become much quicker to remove any controversial content (so goodbye #MeToo, #BlackLivesMatter, or protests against the political class).

Even worse, Angwin seems to have spoken to no one with actual expertise on this if she thinks this is the end result:

My hope is that the erection of new legal guardrails would create incentives to build platforms that give control back to users. It could be a win-win: We get to decide what we see, and they get to limit their liability.

As someone who is actively working to help create systems that give control back to users, I will say flat out that Angwin gets this backwards. Without Section 230, it becomes way more difficult to do so. Because the users themselves would now face much greater liability, and unlike the big companies, the users wouldn’t have buildings full of lawyers willing and able to fight such bogus legal threats.

If you face liability for giving users more control, users get less control.

And, I mean, it’s incredible to say we need legal guardrails and less 230 and then say this:

In the meantime, there are alternatives. I’ve already moved most of my social networking to Bluesky, a platform that allows me to manage my content moderation settings. I also subscribe to several other feeds — including one that provides news from verified news organizations and another that shows me what posts are popular with my friends.

Of course, controlling our own feeds is a bit more work than passive viewing. But it’s also educational. It requires us to be intentional about what we are looking for — just as we decide which channel to watch or which publication to subscribe to.

As a board member of Bluesky, I can say that those content moderation settings, and the ability for others to create feeds that Angwin can then choose from, are possible in large part because of Section 230. Without Section 230 to protect both Bluesky and its users, it would be much more difficult to defend against lawsuits over those feeds.

Angwin literally has this backwards. Without Section 230, would Bluesky be as open to offering up third-party feeds? Would it be as open to allowing users to create their own feeds? Under the world that Angwin claims to want, where platforms have to crack down on “bad” content, it would be a lot more legally risky to allow user control and third-party feeds. Not because providing the feeds would lead to legal losses, but because, without 230, doing so would invite more bogus lawsuits and cost way more to get those lawsuits tossed out under the First Amendment.

Bluesky doesn’t have a building full of lawyers like Meta has. If Angwin got her way, Bluesky would need that if it wanted to continue offering the features Angwin claims she finds so encouraging.

This is certainly not the first time that the NY Times has directly misled the public about how Section 230 works. But Angwin surely knows many of the 230 experts in the field. It appears she spoke to none of them and wrote a piece that gets almost everything backwards. Angwin is a powerful and important voice for fixing many of the downstream problems of tech companies. I just wish that she would spend some time understanding the nuances of 230 and the First Amendment so that her recommendations could be more accurate.

I’m quite happy that Angwin likes Bluesky’s approach to giving power to end users. I only wish she wasn’t advocating for something that would make that way more difficult.
