Ridiculous: Journalist Held In Contempt For Not Revealing Sources
from the underpinnings-of-a-free-press dept
Going way, way back, we’ve talked about the need for protection of journalistic sources, in particular the need for a federal journalism shield law. I can find stories going back about 15 years of us talking about it here on Techdirt. The issue might not come up that often, but that doesn’t make it any less important.
On Thursday, a judge held former CBS journalist Catherine Herridge in contempt for refusing to reveal her sources regarding stories she wrote about scientist Yanping Chen.
The ruling, from U.S. District Court Judge Christopher R. Cooper, will be stayed for 30 days or until Herridge can appeal the ruling.
Cooper ruled that Herridge violated his Aug. 1 order demanding that Herridge reveal how she learned about a federal probe into Chen, who operated a graduate program in Virginia. Herridge, who was recently laid off from CBS News, wrote the stories in question when she worked for Fox News in 2017.
In his ruling, Judge Cooper claims that he’s at least somewhat reluctant about this result, but he still goes forward with it, arguing (I believe incorrectly) that he needs to balance Chen’s rights against Herridge’s First Amendment rights.
The Court does not reach this result lightly. It recognizes the paramount importance of a free press in our society and the critical role that confidential sources play in the work of investigative journalists like Herridge. Yet the Court also has its own role to play in upholding the law and safeguarding judicial authority. Applying binding precedent in this Circuit, the Court resolved that Chen’s need for the requested information to vindicate her rights under the Privacy Act overcame Herridge’s qualified First Amendment reporter’s privilege in this case. Herridge and many of her colleagues in the journalism community may disagree with that decision and prefer that a different balance be struck, but she is not permitted to flout a federal court’s order with impunity. Civil contempt is the proper and time-tested remedy to ensure that the Court’s order, and the law underpinning it, are not rendered meaningless.
But the First Amendment is not a balancing test. And if subpoenas or other attempts to reveal sources can be used in this manner, the harm to journalism will be vast. Journalism only works properly when journalists can legitimately promise confidentiality to sources. And that’s even more true for whistleblowers.
Admittedly, this case is a bit of a mess. It appears that the FBI falsely believed that Chen was a Chinese spy and investigated her, but let it go when they couldn’t support that claim. However, someone (likely in the FBI) leaked the info to Herridge, who reported on it. Chen sued the FBI, which won’t reveal who leaked the info. She’s now using lawful discovery as part of the lawsuit to try to identify the leaker. You can understand that Chen has been wronged in this situation, and it’s likely someone in the FBI who did so. And, in theory, there should be a remedy for that.
But, the problem is that this goes beyond just that situation and gets to the heart of what journalism is and why journalists need to be able to protect sources.
If a ruling like this stands, it means that no journalist can promise confidentiality, because a rush to court can force the journalist to cough up the details. And the end result is that fewer whistleblowers will be willing to speak to the media, allowing more cover-ups and more corruption. The impact of a ruling like this is immensely problematic.
There’s a reason that, for years, we’ve argued for a federal shield law to make it clear that journalists should never be forced to give up sources. In the past, attempts to pass such laws have often broken down over debates concerning who they should apply to and how to identify “legitimate” journalists vs. those pretending to be journalists to avoid coughing up info.
But there is a simple solution to that: don’t have it protect “journalists,” have the law protect such information if it is obtained in the course of engaging in journalism. That is, if someone wants to make use of the shield law, they need to show that the contact and information obtained from the source was part of a legitimate effort to report a story to the public in some form, and they can present the steps they were taking to do so.
At the very least, the court recognizes that the contempt fines should be immediately stayed so that Herridge can appeal the decision:
The Court will stay that contempt sanction, however, to afford Herridge an opportunity to appeal this decision. Courts in this district and beyond have routinely stayed contempt sanctions to provide journalists ample room to litigate their assertions of privilege fully in the court of appeals before being coerced into compliance….
Hopefully, the appeals court recognizes how problematic this is. But, still, Congress can and should act to get a real shield law in place.
Filed Under: 1st amendment, catherine herridge, free press, journalism, journalist shield law, sources, yanping chen
Sports Illustrated Threw Lavish Parties As It Was Shit-canning All Its Actual Journalists
from the none-of-this-means-anything dept
As the Vice collapse and Messenger collapse just got done illustrating in glorious technicolor, the problem with online U.S. journalism isn’t that it’s inherently unprofitable. The problem is usually that the worst, least competent, shallowest people imaginable routinely fail upward into positions of management, then treat the media companies they acquire and operate like a disposable napkin.
That’s certainly been the case over at Sports Illustrated, which isn’t so much even a media organization anymore as much as it is a bloated brand corpse being exploited by extraction-centric, visionless failsons, who have minimal coherent interest in the company’s original function: sports journalism.
That’s all well exemplified by this Washington Post article, which explores how, as the company was falling apart and its journalists and editors were being fired right and left, the folks in charge of the company were throwing lavish Super Bowl parties. It’s well worth a read, and features a lot of doublespeak by managers who talk out of both sides of their mouths about “values” and “mission.”
Over the past six years Sports Illustrated has been tossed around between a rotating crop of dodgy middlemen for whom journalism was an afterthought. SI was acquired in 2018 by what was left of Meredith Publishing as part of the purchase of Time (which founded the magazine in 1954), then had its intellectual property sold to Authentic Brands Group (ABG) for $110 million a year later.
ABG has basically just been renting the Sports Illustrated brand to a company by the name of The Arena Group, which has been mismanaging it for most of that time. The company, like Vice, was run by a lot of non-journalism, affluent hedge fund brats simply interested in blindly chasing engagement at impossible scale via seventy-five consecutive but nonsensical attempts to “pivot to video.”
Arena just got bogged down in a massive scandal after it began using fake AI-generated authors to create shitty, fake AI-generated journalism — without bothering to even tell staff or readers. Then the company balked at paying its $12 million yearly fee to ABG, resulting in more chaos.
Now Authentic Brands Group is left pondering what to do with the brand. And it will probably involve renting it yet again to some other set of visionless brunchlords keen on chasing engagement at impossible scale in the most superficial way possible. The people who pay the actual price for this incompetence are, as usual, the journalists and editors who have little to do with mismanagement.
When you read the Washington Post article, there seems to be some realization by the executives at ABG, like CEO Jamie Salter, that you can’t just hollow a journalism company out like a pumpkin and parade the corpse around to sell shitty supplements without repercussions:
Salter insisted SI’s journalism remains central to his mission. “That’s the mouthpiece to the brand,” he explained. “It’s not as critically important from the financial side, but what we put out there from journalism [is the] core. If you took the shoes out of Reebok, I’m not sure Reebok would be Reebok anymore.”
But then these hustlebros will proceed to do exactly that. Repeatedly. Their entire function is to collect brands, exploit and extract any last bit of value, and then when they’ve drained all meaning from the husk, toss it in the trash and start over somewhere else. Salter seems to throw most of the blame for this dysfunction in the lap of The Arena Group, but the dysfunction is commonplace and everywhere in media.
And then the question the Post correctly asks is, why are the actual employees doing the work always left holding the bag, while never getting a cut of the proceeds? Why does this extraction class view labor as such an irrelevant, exploitable resource in the pursuit of their fourth home?
“If Authentic is forging a new way to monetize a media brand — and, to be sure, there are not a lot of happy stories anywhere in media today — why, SI staffers asked, can’t they get a real cut?”
…”As the fates of some 80 staffers hang in the balance and Authentic contemplates its next move, whatever comes next for SI — a new publisher, a zombie website, a cultural renaissance or anything else — Salter probably will be just fine.”
The Sports Illustrated implosion is just such a perfect example of the utterly hollow vision of a lot of the modern media extraction class. There’s really no genuine interest in craft, journalism, building a consistent audience, or longevity. It’s just mindless consolidation, acquisition, quirky tax tricks, and exploitation dressed up as savvy deal-making, all slathered in as much tacky neon paint as possible.
Filed Under: branding, brunchlords, jamie salter, journalism, media reform, sports illustrated, sports journalism
Companies: arena group, authentic brands group
Judge Appears Correctly Skeptical Of Elon’s SLAPP Suit Against Critic
from the is-it-bad-when-a-judge-calls-your-legal-argument-vapid? dept
We have pointed out just how ridiculous Elon Musk’s SLAPP lawsuit against the Center for Countering Digital Hate is, so much so that I supported the filing of an amicus brief in support of CCDH, even though I find CCDH’s positions and research to be generally problematic and misleading. But, even if their research methods aren’t great, they still deserve the right to speak out, and they should not face ruinous litigation from a petulant CEO who only pretends to support free speech.
On Thursday, there were oral arguments in the case, and to say they did not go well for Elon would be an understatement. The judge appeared to openly mock the company for its terrible legal arguments. And, most importantly, he (correctly) pointed out how “antithetical” to free speech this lawsuit appeared to be:
“You put that in terms of safety, and I’ve got to tell you, I guess you can use that word, but I can’t think of anything basically more antithetical to the First Amendment than this process of silencing people from publicly disseminated information once it’s been published,” Breyer said.
“You’re trying to shoehorn this theory by using these words into a viable breach of contract claim,” the judge added.
This was exactly the point raised in the amicus brief (brilliantly put together by Harvard’s Cyberlaw Clinic): that the “breach of contract” claims were a nonsense attempt to stifle speech, in the hope that by not including a defamation claim the company could somehow avoid First Amendment scrutiny. The judge, Charles Breyer, seemed to have figured out ExTwitter’s sneaky plan pretty easily.
Near the end of the hearing, the judge noted that if something is proven to be true, a defamation lawsuit falls apart. Why, he said, didn’t Musk’s X bring a defamation suit if the company believes X’s reputation has been harmed?
“You could’ve brought a defamation case, you didn’t bring a defamation case,” Breyer said. “And that’s significant.”
Yeah, because everyone knows that there was no actual defamation.
The judge appeared also to see through the nonsense of the breach of contract claims directly. ExTwitter claims that CCDH should be liable for the loss of ad revenue of advertisers leaving the platform in response to CCDH’s research report. But, the judge pointed out how tenuous this was, to the point of calling the argument “one of the most vapid extensions of law I’ve ever heard.”
But in order to make this case, X had to show the group knew the financial loss was “foreseeable” when it started its account and began abiding by Twitter’s terms of service, in 2019, before Musk acquired the site.
X lawyer Hawk argued that the platform’s terms of service state that the rules for the site could change at any time, including that suspended users whom the group says spread hate speech could be reinstated.
And so, Hawk said, if changes to the rules were foreseeable, then the financial loss from its reports on users spreading hate should have also been foreseeable.
This logic confused and frustrated the judge.
“That, of course, reduces foreseeability to one of the most vapid extensions of law I’ve ever heard,” Breyer said.
There are times, in a courtroom, where you shouldn’t read very much into the things a judge says. And then there are times where it’s pretty clear the judge understands just how wrong one side is. This is one of the latter cases.
According to a friend who attended the hearing (virtually, since it was on Zoom), these quotes don’t even get to how bad the hearing was for Elon. Apparently, at one point the judge asked ExTwitter’s lawyer “are you serious?” which is never a good thing. ExTwitter’s lawyer also had to walk back a few arguments in court, including when the company tried to apply the wrong terms of service to a separate non-profit they had tried to drag into the case. And, finally, towards the end of the hearing, apparently ExTwitter’s lawyer tried to claim that they had pled actual malice (which, you know, is kind of important), only to have CCDH’s lawyer point out that they had not. CCDH is right. You can look at the amended complaint yourself.
None of that is likely to go over well with this judge.
Filed Under: anti-slapp, breach of contract, defamation, elon musk, slapp
Companies: ccdh, twitter, x
Biden Administration Shouts ‘ONE MORE YEAR! ONE MORE YEAR!’ As Section 702 Stalemate Continues
from the only-if-FBI-agents-show-up-with-stuff-scrawled-on-posterboard dept
There are a variety of reasons to alter, if not actually end, the Section 702 collection. Whatever value it may have in terms of national security, the very real fact is that it has been endlessly abused by the FBI since its inception.
It’s a foreign-facing collection, which means it harvests communications and data involving foreign targets of US surveillance. But there’s a massive backdoor built into this collection. Collecting foreign communications often means collecting US persons’ communications with foreign persons or entities.
That’s where the FBI has gone interloping with alarming frequency. US persons’ communications are supposed to be masked, preventing the FBI from engaging in warrantless surveillance of US-based communications. This simply hasn’t happened. And the FBI has not only performed second-hand abuse of this collection regularly, but it has equally regularly refused to be honest with the FISA court about its activities.
The latest rejection of a clean reauthorization of Section 702 has nothing to do with the FBI’s continuous refusal to play by the rules. Instead, it has to do with the few times it decided to engage in some backdoor action that targeted the party in power or people temporarily involved with inflicting four years of Donald Trump on a nation that was definitely greater before someone started promising to make it great again.
However, the FBI — despite having abused its access for years — continues to insist the program should not be ended or altered. It has actually admitted its backdoor searches would otherwise be illegal without this program and its side benefits — something that should have hastened legislators on both sides of the political aisle to shut the whole thing down until these critical flaws were patched.
Instead, the whole thing has devolved into the expected in-fighting. Some legislators proposed meaningful reforms to the program, which were soundly rejected by a lot of Republicans simply because some Democrats were involved. The Republicans heading up the House Intelligence Committee proposed their own reforms, but the only thing they really wanted to change was the FBI’s ability to place Republicans under surveillance.
Meanwhile, the Biden Administration has decided the FBI is right, no matter how often it’s been wrong. Ignoring years of casual abuse, the Biden team has pushed for a clean reauthorization — something it may not have done if it weren’t for all the Republicans demanding (mostly for self-serving reasons) the program be ended or altered.
Unfortunately, Section 702 continues to live on, even if it’s in an unresponsive coma at the moment. Rather than let the surveillance authority expire, a bipartisan effort did the country dirty by extending it until April 2024, when it could be further disagreed about following the return of Congressional reps to Capitol Hill.
April just isn’t good enough, apparently. The Biden Administration wants to buy even more time without any termination or authorization, presumably in hopes that the current furor will die down and this executive power will be granted a clean re-authorization. (Of course, by that point, there may be an actual Fuhrer in play, given Donald Trump’s early sweeps of critical primaries.)
Here’s Charlie Savage with more details for the New York Times:
The Biden administration is moving to extend a disputed warrantless surveillance program into April 2025, according to officials familiar with the matter.
The decision by the administration, which requires asking for court approval, seemed likely to roil an already turbulent debate in Congress over its fate. The program has scrambled the usual partisan lines, with members of both parties on each side, seeing the program as either potentially abusive of civil liberties or as necessary for protecting national security.
This is probably preferable to holding a budget bill hostage in an executive office display of “I’ll hold my breath until I get my way.” And it’s preferable to Republican efforts to alter Section 702 simply to protect themselves from illegal surveillance. But it’s definitely not preferable to actually engaging with the inherent problems of this surveillance program, all of which seem to lead back to the FBI and its insistence on abusing its access.
This throws these problems on the back burner for another year. And it will be yet another year where the FBI abuses its access. We can make this assumption because there’s never been a year where the FBI hasn’t abused this surveillance power. Refusing to address an issue that’s been publicly acknowledged for several years now just to ensure the NSA doesn’t lose this surveillance program is irresponsible. The Biden Administration’s apparent tacit agreement with assertions made by an agency that has proven it can’t be trusted doesn’t bode well for anyone.
And, if this yearlong reprieve results in a clean reauthorization, the Biden Administration will quite possibly be handing this renewed power to Republicans now allowed to engage in their worst excesses, thanks to the re-election of Dumpster Fire Grover Cleveland.
The best thing the current administration could do at this point is allow the authority to die, which would force Republicans who love power (but hate to see it wielded against them) to reconcile their desire for a surveillance state with the inevitable reality that they will sometimes be on the receiving end of this surveillance. The worst thing it can do is what it’s doing now: pressing the pause button because it doesn’t have the desire or willingness to go head-to-head with an agency that claims — without facts in evidence — that the only way it can keep this country secure from foreign threats is by warrantlessly spying on Americans.
Filed Under: biden administration, fbi, fisa court, joe biden, mass surveillance, nsa, section 702, surveillance
Daily Deal: Headway Premium
from the good-deals-on-cool-stuff dept
Headway is the revolutionary app designed to help you turn personal growth into a habit. With a lifetime subscription, you get unlimited access to a huge number of non-fiction bestsellers, summarized into 15-minute reads. Be it personal development, business strategies, or health insights, Headway has you covered. It’s on sale for $49.97.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Filed Under: daily deal
We Can’t Have Serious Discussions About Section 230 If People Keep Misrepresenting It
from the that's-not-how-any-of-this-works dept
At the Supreme Court’s oral arguments about Florida and Texas’ social media content moderation laws, there was a fair bit of talk about Section 230. As we noted at the time, a few of the Justices (namely Clarence Thomas and Neil Gorsuch) seemed confused about Section 230 and also about what role (if any) it had regarding these laws.
The reality is that the only role for 230 is in preempting those laws. Section 230 has a preemption clause that basically says no state laws can go into effect that contradict Section 230 (in other words: no state laws that dictate how moderation must work). But that wasn’t what the discussion was about. The discussion was mostly about Thomas and Gorsuch’s confusion over 230 and thinking that the argument for Section 230 (that you’re not held liable for third party speech) contradicts the arguments laid out by NetChoice/CCIA in these cases, where they talked about the platforms’ own speech.
Gorsuch and Thomas were mixing up two separate things, as both the lawyers for the platforms and the US made clear. There are multiple kinds of speech at issue here. Section 230 says platforms are not liable for third-party speech. But the issue with these laws was whether or not they restricted the platforms’ ability to express themselves in the way in which they moderate. That is, the editorial decisions expressing “this is what type of community we enable” are a form of public expression that the Florida & Texas laws seek to stifle.
That is separate from who is liable for individual speech.
But, as is the way of the world whenever it comes to discussions on Section 230, lots of people are going to get confused.
Today that person is Steven Brill, one of the founders of NewsGuard, a site that seeks to “rate” news organizations, including for their willingness to push misinformation. Brill publishes stories for NewsGuard on a Substack (!?!?) newsletter titled “Reality Check.” Unfortunately, Brill’s piece is chock full of misinformation regarding Section 230. Let’s do some correcting:
February marks the 28th anniversary of the passage of Section 230 of the Telecommunications Act of 1996. Today, Section 230 is notorious for giving social media platforms exemptions from all liability for pretty much anything their platforms post online. But in February of 1996, this three-paragraph section of a massive telecommunications bill aimed at modernizing regulations related to the nascent cable television and cellular phone industries was an afterthought. Not a word was written about it in mainstream news reports covering the passage of the overall bill.
The article originally claimed it was the 48th anniversary, though it was later corrected (without a correction notice — which is something NewsGuard checks for when rating the trustworthiness of publications). That’s not that big a deal, and I don’t think there’s anything wrong with “stealth” corrections for typos and minor errors like that.
But this sentence is just flat out wrong: “Section 230 is notorious for giving social media platforms exemptions from all liability for pretty much anything their platforms post online.” It’s just not true. Section 230 gives limited exemptions from some forms of liability for third-party content that they had no role in creating. That’s quite different than what Brill claims. His formulation suggests they’re not liable for anything they, themselves, put online. That’s false.
Section 230 is all about putting the liability on whichever party created the violation under the law. If a website is just hosting the content, but someone else created the content, the liability should go to the creator of the content, not the host.
Courts have had no problem finding liability on social media platforms for things they themselves post online. We have a string of such cases, covering Roommates, Amazon, HomeAway, InternetBrands, Snap and more. In every one of those cases (contrary to Brill’s claims), the courts have found that Section 230 does not protect things these platforms post online.
Brill gets a lot more wrong. He discusses the Prodigy and CompuServe cases and then says this (though he gives too much credit to the idea that CompuServe’s lack of moderation was the reason the court ruled the way it did):
That’s why those who introduced Section 230 called it the “Protection for Good Samaritans” Act. However, nothing in Section 230 required screening for harmful content, only that those who did screen and, importantly, those who did not screen would be equally immune. And, as we now know, when social media replaced these dial-up services and opened its platforms to billions of people who did not have to pay to post anything, their executives and engineers became anything but good Samaritans. Instead of using the protection of Section 230 to exercise editorial discretion, they used it to be immune from liability when their algorithms deliberately steered people to inflammatory conspiracy theories, misinformation, state-sponsored disinformation, and other harmful content. As then-Federal Communications Commission Chairman Reed Hundt told me 25 years later, “We saw the internet as a way to break up the dominance of the big networks, newspapers, and magazines who we thought had the capacity to manipulate public opinion. We never dreamed that Section 230 would be a protection mechanism for a new group of manipulators — the social media companies with their algorithms. Those companies didn’t exist then.”
This is both wrong and misleading. First of all, nothing in Section 230 could “require” screening for harmful content, because both the First and Fourth Amendments would forbid that. So the complaint that it did not require such screening is not just misplaced, it’s silly.
We’ve gone over this multiple times. Pre-230, the understanding was that, under the First Amendment, liability of a distributor was dependent on whether or not the distributor had clear knowledge of the violative nature of the content. As the court in Smith v. California made clear, it would make no sense to hold someone liable without knowledge:
For if the bookseller is criminally liable without knowledge of the contents, and the ordinance fulfills its purpose, he will tend to restrict the books he sells to those he has inspected; and thus the State will have imposed a restriction upon the distribution of constitutionally protected as well as obscene literature.
That’s the First Amendment problem. But, we can take that a step further as well. If the state now requires scanning, you have a Fourth Amendment problem. Specifically, as soon as the government makes scanning mandatory, none of the content found during such scanning can ever be admissible in court, because no warrant was issued upon probable cause. As we again described a couple years ago:
The Fourth Amendment prohibits unreasonable searches and seizures by the government. Like the rest of the Bill of Rights, the Fourth Amendment doesn’t apply to private entities—except where the private entity gets treated like a government actor in certain circumstances. Here’s how that happens: The government may not make a private actor do a search the government could not lawfully do itself. (Otherwise, the Fourth Amendment wouldn’t mean much, because the government could just do an end-run around it by dragooning private citizens.) When a private entity conducts a search because the government wants it to, not primarily on its own initiative, then the otherwise-private entity becomes an agent of the government with respect to the search. (This is a simplistic summary of “government agent” jurisprudence; for details, see the Kosseff paper.) And government searches typically require a warrant to be reasonable. Without one, whatever evidence the search turns up can be suppressed in court under the so-called exclusionary rule because it was obtained unconstitutionally. If that evidence led to additional evidence, that’ll be excluded too, because it’s “the fruit of the poisonous tree.”
All of that seems kinda important?
Yet Brill rushes headlong on the assumption that 230 could have and should have required mandatory scanning for “harmful” content.
Also, most harmful content remains entirely protected by the First Amendment, making this idea even more ridiculous. There would be no liability for it.
Brill seems especially confused about how 230 and the First Amendment work together, suggesting (incorrectly) that 230 gives them some sort of extra editorial benefit that it does not convey:
With Section 230 in place, the platforms will not only have a First Amendment right to edit, but also have the right to do the kind of slipshod editing — or even the deliberate algorithmic promotion of harmful content — that has done so much to destabilize the world.
Again, this is incorrect on multiple levels. The First Amendment gives them the right to edit. It also gives them the right to slipshod editing. And the right to promote harmful content via algorithms. That has nothing to do with Section 230.
The idea that “algorithmic promotion of harmful content… has done so much to destabilize the world” is a myth that has mostly been debunked. Some early algorithms weren’t great, but most have gotten much better over time. There’s little to no supporting evidence that “algorithms” have been particularly harmful over the long run.
Indeed, what we’ve seen is that while there were some bad algorithms a decade or so ago, pressure from the market has pushed the companies to improve. Users, advertisers, and the media have all pressured the companies to improve their algorithms, and it seems to work.
Either way, those algorithms still have nothing to do with Section 230. The First Amendment lets companies use algorithms to recommend things, because algorithms are, themselves, expressions of opinion (“we think you would like this thing more than the next thing”) and nothing in there would trigger legal liability even if you dropped Section 230 altogether.
It’s a best (or worst) of both worlds, enjoyed by no other media companies.
This is simply false. Outright false. EVERY company that has a website that allows third-party content is protected by Section 230 for that third-party content. No company is protected for first-party content, online or off.
For example, last year, Fox News was held liable to the tune of $787 million for defaming Dominion Voting Systems by putting on guests meant to pander to its audience by claiming voter fraud in the 2020 election. The social media platforms’ algorithms performed the same audience-pleasing editing with the same or worse defamatory claims. But their executives and shareholders were protected by Section 230.
Except… that’s not how any of this works, even without Section 230. Fox News was held liable because the content was produced by Fox News. All of the depositions and transcripts were… Fox News executives and staff. Because they created the defamatory content.
The social media apps didn’t create the content.
This is the right outcome. The blame should always go to the party who violated the law in creating the content.
And Fox News is equally protected by Section 230 if defamatory content created by someone else is posted in a comment on a Fox News story (something that seems likely to happen frequently).
This whole column is misleading in the extreme, and simply wrong at other points. NewsGuard shouldn’t be publishing misinformation itself given that the company claims it’s promoting accuracy in news and pushing back against misinformation.
Filed Under: 1st amendment, 4th amendment, content moderation, section 230, steven brill
Companies: newsguard
Biden EO Restricts Sale Of Consumer Data To ‘Countries Of Concern’ (But We Still Need A Privacy Law And To Regulate Data Brokers)
from the doing-the-bare-minimum dept
So we’ve noted for a long while that the fixation on China and TikTok specifically has often been used by some lazy thinkers (like the FCC’s Brendan Carr) as a giant distraction from the fact the U.S. has proven too corrupt to regulate data brokers, or even to pass a baseline privacy law for the internet era. The cost of this corruption, misdirection, and distraction has been fairly obvious.
Enter the Biden administration, which this week announced that Biden was signing a new executive order that would restrict the sale of sensitive behavioral, location, financial, or other data to “countries of concern,” including Russia and China. At a speech, a senior administration official stated the new restrictions would shore up national security:
“Our current policies and laws leave open access to vast amounts of American sensitive personal data. Buying data through data brokers is currently legal in the United States, and that reflects a gap in our national security toolkit that we are working to fill with this program.”
The EO fact sheet is vague, but states the Biden administration will ask the Departments of Justice, Homeland Security, Health and Human Services, Defense, and Veterans Affairs to work in concert to ensure problematic countries aren’t able to buy “large scale” repositories filled with U.S. consumer data, and to pass new rules and regulations tightening up the flow of data broker information.
We’ve noted for a long, long time that our corrupt failure to pass a privacy law or regulate data brokers was not only a frontal assault on consumer privacy, it was easily exploitable by foreign intelligence agencies looking to build massive surveillance databases on American citizens.
It’s why it was bizarre to see lawmakers myopically fixated on banning TikTok, while ignoring the fact that our corrupt policy failures had made TikTok’s privacy issues possible in the first place.
You could ban TikTok tomorrow with a giant patriotic flourish to “fix privacy,” but if you’re not willing to rein in the hundreds of sleazy international data brokers doing the same thing (or in some cases much worse at even bigger scale), you haven’t actually accomplished much beyond posturing to get on TV.
The EO sounds at least like a first step (depending entirely on the implementation), but is filled with some flowery and revisionist language. This bit, for example:
“These actions not only align with the U.S.’ longstanding support for the trusted free flow of data, but also are consistent with U.S.’ commitment to an open Internet with strong and effective protections for individuals’ privacy and measures to preserve governments’ abilities to enforce laws and advance policies in the public interest.”
Again, we don’t have a privacy law for the internet era in 2024 not because it was too hard to write one, but because Congress is too corrupt to pass one. We have, repeatedly, made the decision to prioritize the profits of an interconnected array of extractive industries over the public welfare, public safety, and even national security.
The result has been a massive, interconnected, hyper-surveillance market that hoovers up data on your every fart down to the millimeter, bundles that data up in vast profiles, and monetizes it across the globe with very little if any real concern for exploitation and abuse. All under the pretense that because much of this data was “anonymized” (a meaningless, gibberish term), there could be no possible harm.
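The weakness of “anonymization” is easy to demonstrate. A toy sketch (all data here is hypothetical) of the well-documented problem: stripping names and swapping in opaque tokens does little when the location trails themselves remain intact, because a couple of publicly knowable facts about a person are often enough to single out their “anonymous” record.

```python
# Hypothetical "anonymized" dataset: user names replaced with opaque
# tokens, but each user's (place, hour) location trail is left intact.
anonymized_traces = {
    "token_a91": [("home_suburb", 7), ("office_park", 9), ("gym", 18)],
    "token_x22": [("downtown", 8), ("airport", 14)],
    "token_c07": [("home_suburb", 7), ("bar", 21)],
}

# An attacker knows just two facts about the target, e.g. from public
# posts: roughly where they live and where they work, and at what hours.
known_points = {("home_suburb", 7), ("office_park", 9)}

# Find every "anonymous" trace consistent with those two known points.
candidates = [
    token for token, trace in anonymized_traces.items()
    if known_points <= set(trace)
]

# Only one token matches, so the "anonymized" record is re-identified.
print(candidates)  # -> ['token_a91']
```

Real location traces are far more unique than this toy version, which is why a few spatiotemporal points are typically enough to re-identify someone in a large “anonymized” dataset.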
The result has been just a rotating crop of ugly scandals that have gotten progressively worse. All while we (mostly) sat on our hands whining about TikTok.
The FTC has been cracking down on some location data brokers, but generally lacks the resources (by design) to tackle the problem at the scale it’s occurring. They lack the resources because the over-arching policy of the U.S. government for the better part of the last generation has been to defund and defang regulators under the simplistic pretense this unleashes untold innovation (with no downside).
This myopic view of how government works is all pervasive in America, and has resulted in most corporate oversight in the U.S. having the structural integrity of damp cardboard. And it’s all about to get significantly worse courtesy of a handful of looming Supreme Court rulings aimed at eroding regulatory independence even further. There’s a very real cost for this approach, and the check has been, and will be, increasingly coming due in a wide variety of very obvious and spectacular ways.
But we also don’t have a privacy law and refuse to regulate data brokers because the U.S. government benefits from the dysfunction, having realized long ago that the barely regulated data broker market is a great way to purchase data you’d otherwise need to get a warrant to obtain. Data broker location data is now tethered tightly to all manner of U.S. government operations, including military targeting.
The press has also played a role by failing to educate the public about the real risks of not regulating data brokers or passing a privacy law. Just 23 percent of the U.S. public even knows the government has failed to pass a privacy law for the internet era. And when the U.S. press does cover privacy, the fact that rank corruption is at the heart of the dysfunction routinely goes unmentioned.
So yes, it’s great that we’re starting to see some growing awareness about the real world costs of our corrupt failures on privacy policy. Senator Ron Wyden, in particular, has been doing an amazing job sounding the alarm on how this failure is being exploited by not just a diverse array of self-serving companies, but a surging authoritarian movement in the post-Roe era.
But it’s going to take a hell of a lot more than an EO to course correct. It’s going to take shaking Congress out of its corrupt apathy. And the only thing I think will accomplish that will be a privacy scandal so massive and unprecedented (potentially including mass fatalities or the leaking of powerful figures’ data at unprecedented scale), that elected officials have absolutely no choice but to do their fucking job.
Filed Under: data brokers, executive order, ftc, joe biden, location data, national security, privacy, russia, security, surveillance