Content Moderation, Good Deals, Everyone A Suspect, This-is-For-Your-Own-Good, Whoopsies Department + more
ExTwitter Users Getting Fed Up With The Crypto Spam And AI Bots Elon Promised To Clean Up
from the turns-out-content-moderation-is-difficult dept
Among the various promises Elon made regarding his takeover of ExTwitter was that he would clean up the spam and bot problem. He seemed to think that the previous regime had fallen down on the job, and that somehow he would have the magical answer to dealing with such things.
About that.
Originally, Elon seemed to think that changing Twitter’s verification system into a subscription service would get rid of the bots. That did not work. More recently, his solution has shifted to making anyone who wants to post anything to Twitter pay a nominal amount.
All of this assumes, incorrectly, that it’s not worth it for scammers and spammers to pay tiny bits to flood Elon’s playground with shit.
And flood it, they are.
A report from Bleeping Computer notes that ExTwitter has become completely overwhelmed with crypto scam ads, and most of them are coming from accounts paying Elon his cut. And even Elon’s biggest supporters are getting sick of it.
And it seems clear that it’s worth it to scammers to pay $8/month for access to the absolute gullible fucks on ExTwitter. As Bleeping Computer highlighted last month, one of these crypto drainer scams that has been regularly advertising on ExTwitter was able to steal $59 million from suckers via purchased ads:
On X, better known as Twitter, advertisements for MS Drainer are so abundant that ScamSniffer reports they account for six out of nine phishing ads on their feed.
Notably, many of the scam ads on X are posted from legitimate “verified” accounts that carried the blue tick badge when the ad was shown.
The account MalwareHunterTeam is out there finding more and more such scam ads on ExTwitter. Here are just a few:
And more and more and more.
Meanwhile, Elon’s other big “innovation” to try to stop the bots was to change the API to charge ridiculous fees for access. Of course, all that’s done is drive away the useful bots, while leaving the scam bots free to roam.
Which brings us to the other story demonstrating Elon’s absolute failure to deal with bots on the platform. Boing Boing details how Parker Molloy has found that there appear to be a shitload of fake bot accounts that are clearly running off of ChatGPT. And you can tell that by simply searching for the phrase “goes against OpenAI’s use case policy.” You find tons and tons of tweets using that phrasing, as it is clearly coming from a bot powered by ChatGPT, but whose operator didn’t anticipate that OpenAI would sometimes reject the query.
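The detection trick here is trivially simple: an unattended ChatGPT-powered bot will occasionally post the model’s boilerplate refusal text verbatim. A minimal sketch of that heuristic, assuming you already have post text in hand (Molloy’s actual search was just run through ExTwitter’s own search box, not an API):

```python
# Sketch of the telltale-phrase heuristic described above: flag posts that
# contain ChatGPT's boilerplate refusal language. The second phrase and the
# sample posts are illustrative assumptions, not from the article.

REFUSAL_PHRASES = [
    "goes against openai's use case policy",
    "as an ai language model",  # another common telltale (assumption)
]

def looks_like_unattended_bot(post_text: str) -> bool:
    """Return True if the post contains a telltale ChatGPT refusal phrase."""
    text = post_text.lower()
    return any(phrase in text for phrase in REFUSAL_PHRASES)

posts = [
    "Great thread! Totally agree with this take.",
    "I'm sorry, but I cannot fulfill this request as it goes against "
    "OpenAI's use case policy.",
]

flagged = [p for p in posts if looks_like_unattended_bot(p)]
print(len(flagged))  # → 1
```

Crude as it is, a substring match works precisely because the operators never bothered to filter their own bot’s output before posting it.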
And, look. Fighting spam and bots is a big challenge. And if Elon had approached this with even the slightest humility, you might feel bad for him. But instead, he insisted, without any real knowledge of the problem, that Twitter’s previous management was failing to take it seriously, was lying about how much spam was on the platform (even though that was only because he couldn’t understand how Twitter was reporting things), and that somehow he would have the singular solution to solve it.
Instead, he fired basically anyone who knew anything about fighting spam, put in place braindead stupid solutions that anyone with any experience in the field would tell you wouldn’t work… and then made the problem way, way, way worse.
No wonder Elon is now moving on to trying to blame “DEI” for anything bad that happens in the world (someone should tell him that his own companies, Tesla and SpaceX, both advertise their DEI efforts, but alas…).
Filed Under: ai, bots, chatgpt, crypto spam, elon musk, scams, spam, verification
Companies: twitter, x
Daily Deal: TexTalky AI Text-to-Speech
from the good-deals-on-cool-stuff dept
Turn any text or script into a lifelike, natural human voice in 3 easy steps using TexTalky, an AI text-to-speech synthesizer. No robotic voices! TexTalky uses the latest cloud-based AI technology powered by Google, IBM, Microsoft, and Amazon. It covers more than 1140 international languages and accents, and over 900 kinds of lifelike human voices that meet most of your needs. Unlimited use cases. From YouTube narration and marketing content to documentaries and more, you can shape your synthesized audio to turn out exactly how you need it. It’s on sale for $37.99.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Filed Under: daily deal
EFF Asks Pennsylvania’s Top Court To Stop Cops From Googling For Suspects
from the can't-just-make-everyone-a-suspect dept
Law enforcement officers learned long ago that if all they have is a crime scene and no likely suspects, there’s no reason to wear out shoe leather beating the streets for alleged criminals. They don’t even need to leave the office. All they have to do is produce a subpoena for certain third-party records and/or convince a judge that the only “probable cause” they need to demonstrate is the probability Google houses the data they seek.
In some cases, investigators used Google to identify devices carried by people in or around crime scenes in hopes of finding probable suspects. This option — the geofence warrant — has proven extremely popular in recent years. The constitutionality of these warrants — which require Google (and it’s almost always Google) to search its entire repository of location data — is still up in the air.
So is the constitutionality of another Google-centric law enforcement option which also requires Google to search all of its data to find the stuff cops are looking for. These are keyword warrants and they work in reverse: instead of showing probable cause to search people’s Google search histories, investigators simply present Google with a warrant demanding information on anyone who used certain search terms.
The law is still unsettled here. The EFF is hoping the law will be a bit more settled, but on the side of the Constitution, in a case currently being considered by the Pennsylvania Supreme Court.
Everyone deserves to search online without police looking over their shoulder, yet millions of innocent Americans’ privacy rights are at risk in Commonwealth v. Kurtz—only the second case of its kind to reach a state’s highest court. The brief filed by EFF, the National Association of Criminal Defense Lawyers (NACDL), and the Pennsylvania Association of Criminal Defense Lawyers (PACDL) challenges the constitutionality of a keyword search warrant issued by the police to Google. The case involves a massive invasion of Google users’ privacy, and unless the lower court’s ruling is overturned, it could be applied to any user using any search engine.
It’s an important case. As the EFF notes, this is only the second case of its kind to reach the highest level of a state court system. It has filed a brief [PDF] that not only points out how these warrants far exceed the bounds of the “general warrants” that prompted the creation of the Fourth Amendment, but how dangerous it would be to allow government entities to, in essence, initiate searches of every Google user’s search terms using nothing more than a single warrant.
As the EFF points out, people are far more willing to share intimate thoughts and desires (as expressed in the form of search queries) with Google than with their fellow human beings. Google makes no judgments. It simply goes looking for what it’s being asked to look for.
Unfortunately, the government generally doesn’t even know what it’s looking for when it demands Google perform searches of hundreds of millions of users’ search terms on its behalf.
Keyword search warrants are unlike typical warrants for electronic information in a crucial way: they are not targeted to specific individuals or accounts. Instead, they require a provider to search its entire reserve of user data and identify any and all users or devices who searched for words or phrases specified by police. As in this case, the police generally have no identified suspects when they seek a keyword search warrant. Instead, the sole basis for the warrant is the officer’s hunch that the perpetrator might have searched for something related to the crime.
On top of this hunch that someone may have searched for something, investigators add several more guesses. Not only do they not know who they’re searching for, they don’t know what search terms were used. This creates a dragnet — one cast by Google in an effort to comply with the warrant’s demands.
Several known keyword warrants have, as in this case, sought to identify everyone who searched for a specific address or variations of the victim’s name. However, in other cases police have investigated other search queries, such as the name of someone else related to the case. In at least two known cases, the search queries have been far broader. In response to a series of bombings in Austin, Texas, police sought everyone who searched for words like “low explosives” and “pipe bomb.” And in Brazil, Google challenged a warrant for everyone who searched for the name of a popular politician who was assassinated and the busy street in Rio de Janeiro where she was killed.
These warrants are problematic and they have little to do with the ideals represented by the Fourth Amendment, like probable cause, specificity, and minimizing intrusions on citizens’ personal lives. And, like any dragnet, they’re capable of implicating innocent people.
Because keyword warrants require Google to search its entire data repository, they have the potential to implicate innocent people who happen to search for something an officer believes is incriminating. Here, Google identified responsive queries from fourteen different IP addresses within the eight days covered by the warrant. Keyword warrants could also allow officers to target people based on political speech and by their association with others. Police used multiple geofence warrants to identify people at political protests in Kenosha, Wisconsin, and Minneapolis after police killings in those cities. Similarly, with keyword warrants, officers could seek to identify everyone who searched for the location or the organizers of a protest.
On top of the Fourth Amendment problem, keyword warrants create a First Amendment problem. Users could start censoring their own searches in hopes of avoiding being swept up in law enforcement dragnets. And because even investigators don’t know who or what they’re searching for, some users may decide to avoid searching for anything that might be considered indicative of criminal activity.
The lower court saw no problem with the warrant, reasoning the data sought by the warrant was covered by the Third Party Doctrine, rather than the Fourth Amendment or the state’s own constitution. As for the “willingly shared with third parties” aspect of the Third Party Doctrine, the lower court said Google users were well aware their search terms could be shared with law enforcement. After all, they’d all agreed to Google’s terms of service, which stated data would be stored by Google and handed over to the government if properly requested.
The EFF points out this broad interpretation of the Third Party Doctrine would, if adopted by the state Supreme Court, allow the government to collect all sorts of content and data from any service provider using nothing more than a non-specific warrant or (in some cases) subpoena.
Because all service providers impose TOS similar to Google’s, the lower court’s analysis, if correct, would apply to digital data maintained with any service provider, not just Google. Further, because providers’ terms apply to all content they store, not just search queries, this analysis would apply to any and all emails, files, photos, attachments, and other electronic “papers and effects” stored with any of those providers. Not only would that conclusion vitiate Fourth Amendment protections for the hundreds of millions of people who use these services, it would mean that a private company’s TOS trump Fourth Amendment protections for all content maintained with the provider. This is inconsistent with public expectations, well-recognized Fourth Amendment case law, and the stated positions of every member of the Supreme Court in Carpenter. If adopted by this Court, it would undermine fundamental privacy protections in communications media used by nearly all Americans.
The warrant at issue here is particularly bad, which should hopefully nudge the state Supreme Court to find it unconstitutional or, at the very least, establish ground rules for similar warrants.
[T]he warrant in this case was based on nothing more than an officer’s speculation that the perpetrator may have used a search engine sometime within the eight days prior to the crime to look for the victim’s house. The only stated connection to Google for this hunch was that its search engine is dominant, suggesting that if the perpetrator had conducted such a query, Google might have a record of it. The affidavit includes no facts to support these speculations. As the affidavit notes, the affiant “believed” that the perpetrator “was very familiar with the victim” and that both the victim and her residence were “not randomly targeted.” Given these beliefs, it is just as likely, if not more so, that the perpetrator knew the victim and would not need to use a search engine to identify her or her house.
Whichever way the state Supreme Court chooses to view this, it will set precedent. It is only the second case involving these warrants to make it this far in state courts and its decision will be used to craft challenges (or defend against them) all across the nation. Until a federal court with significant reach rules on this issue, what’s said in Pennsylvania will set the tone for the nation’s law enforcement agencies. It’s an important case. All we can do is hope this court recognizes this new form of “general warrant” for what it is.
Filed Under: 4th amendment, geofence warrant, location data, privacy
Companies: google
HP Hit With Yet Another Lawsuit Over Bricking Printers That Use Third-Party Ink Cartridges
from the this-is-for-your-own-good dept
Hewlett Packard (HP) has been socked with yet another lawsuit for crippling the printers of consumers who use cheaper third-party ink cartridges. The lawsuit, filed by eleven plaintiffs in US District Court in the Northern District of Illinois, states that HP misleadingly used its “Dynamic Security” firmware updates to “create a monopoly” over replacement printer ink cartridges.
The lawsuit seeks monetary damages of $5 million, demands that HP immediately cease crippling its printers in such a fashion, and is seeking a trial by jury. From the lawsuit:
“In 2022 and 2023, HP distributed updates to many of its registered customers that featured the functionality of “Dynamic Security” previously discontinued: it disabled the printer if the customer replaced the existing cartridge with a non-HP cartridge. There was no notification of any kind at the time of this firmware update that might inform customers that the update would reduce the printer’s functionality. Even if a customer were able to discern that the update would impede the printer’s functionality with other cartridges, there was no means of opting out of the update.”
Despite years of criticism, HP has only doubled down. As Ars Technica notes, CFO Marie Myers has even lauded the obnoxious, predatory behavior as “relationship building”:
“We absolutely see when you move a customer from that pure transactional model … whether it’s [to] Instant Ink, plus adding on that paper, we sort of see a 20 percent uplift on the value of that customer because you’re locking that person, committing to a longer-term relationship.”
When it comes to obnoxious DRM and bizarre, greedy restrictions, nobody does it better than printer manufacturers. The industry has long waged a not-so-subtle war on its own customers, routinely rolling out firmware updates or DRM preventing them from using more affordable, competitor printer cartridges. Usually under the flimsy pretense of consumer safety and security.
A few years ago, printer manufacturers took this tactic one step further, and began preventing users from being able to use a multifunction printer’s scanner if they didn’t have company sanctioned ink installed. Canon was hit with a $5 million lawsuit in 2021 for the practice, but was able to quietly settle it privately without facing much accountability, or having to change much of its behavior.
In 2022 HP was also hit with a lawsuit (pdf) for preventing scanners from working without sanctioned ink cartridges installed, and not being transparent about this with customers. HP has spent a few years trying to wiggle out of the suit, but hasn’t had much luck. Last August, U.S. District Judge Beth Labson Freeman ruled that that case could also proceed.
It’s not clear how many lawsuits and regulatory actions are required before HP gets the message that this kind of behavior is violently unpopular bullshit that harms the company’s overall brand in exchange for a slight goose in quarterly earnings.
Filed Under: consumer rights, drm, hardware, ink cartridges, ink jet, laserjet, printers, right to repair
Companies: hp
WotC Denies Using AI Generative Art In Promo Materials, Later Admits, Yeah, It Did
from the whoopsie dept
D&D and Magic: The Gathering publisher Wizards of the Coast (WotC) has certainly been pissing folks off of late. Between its attempt last year to change its OGL license for D&D both going forward and retroactively, and its sending the literal Pinkerton Agency after someone who received some unreleased Magic cards in error, the company appears to have taken a draconian turn in recent years. Then, over the summer, there was a bunch of backlash when one of WotC’s books was found to include art from one of its artists that had been partially generated using AI. After that whole fiasco, WotC publicly swore off using any art in its products that was not 100% human created.
And it’s important to note that this is a huge thing in the D&D and Magic worlds. The books, cards, and associated items that players and fans buy from these games have always been revered in part for the fantastic art that has come along with them. And the artists contributing to them have been equally celebrated.
So, when sharp-eyed observers pointed out that recent promotional art for Magic sure looked like the images around the cards showed signs of having been generated by AI, well, WotC came out with a very strong denial.
“We understand confusion by fans given the style being different than card art, but we stand by our previous statement,” the company tweeted. “This art was created by humans and not AI.”
And even as many sleuths on social media and elsewhere kept up the pushback, insisting with example after example from within the images themselves that, no, this had all the telltale signs of being AI generated, a PC Gamer article was still referring to all of this as an unfortunate “false positive” resulting from a hyper-sensitivity to the intrusion of AI in art and image generation.
But, no, it turns out that the images around the cards were in fact generated in part using AI, as WotC itself later admitted.
After sharp-eyed Magic: The Gathering fans cried foul over a recent promotional image’s seeming use of generative AI, Wizards of the Coast initially asserted that it was fully human-made. However, just two days on Wizards has deleted the offending marketing post and acknowledged that generative tools were used in the image.
On Twitter, Wizards of the Coast stated that the image background was sourced from a third-party vendor, and claimed that “It looks like some AI components that are now popping up in industry standard tools like Photoshop crept into our marketing creative, even if a human did the work to create the overall image.”
You can go read the company’s additional full statement on its website as well. And, as statements about such things go, it’s a fairly good one. It points out that this wasn’t done intentionally or with the company’s knowledge, that the company will be working with its third-party vendors to make clear that human-made art is a requirement, and it promises transparency moving forward when it comes to this sort of thing.
But the real lesson here is that companies have to be very careful with this sort of thing. The internet has enough well-trained Sherlocks out there holding companies to their word, looking for anywhere AI-generated content is being snuck in to replace human-made content, that, as the technology stands today, there’s a good chance any such uses will be found out. Companies might as well save themselves the trouble and just make sure the humans are doing the work.
Filed Under: ai, marketing, promos
Companies: wizards of the coast, wotc
California Appeals Court Says Police Drone Footage Not Automatically Exempt From Public Records Law
from the seeing-more-from-the-all-seeing-eyes dept
Public records requesters in California recently scored a small victory in one of the state’s appeals courts. The EFF, which filed an amicus brief in this case, summarizes the decision at its website.
Video footage captured by police drones sent in response to 911 calls cannot be kept entirely secret from the public, a California appellate court ruled last week.
The decision by the California Court of Appeal for the Fourth District came after a journalist sought access to videos created by Chula Vista Police Department’s “Drones as First Responders” (DFR) program. The police department is the first law enforcement agency in the country to use drones to respond to emergency calls, and several other agencies across the U.S. have since adopted similar models.
This case began when journalist Arturo Castañares filed a public records request with the Chula Vista PD seeking drone use records, as well as any footage recorded between March 1 and March 31, 2021. The PD dragged its feet a bit before handing him some of the records he’d requested. But it refused to hand over any of the recordings, claiming the footage was exempt from the California Public Records Act (CPRA) because every single recording was an “investigative record.”
If this excuse sounds familiar, it’s because the Los Angeles Police Department made the same claim about its automatic license plate reader (ALPR) data. According to the LAPD, every single record collected by its plate readers (at a rate of nearly 2 million plate captures a week) was exempt from disclosure because they were (all several million of them!) part of ongoing investigations.
This ridiculous claim resulted in a public records lawsuit and a subsequent smackdown from the state’s top court, which ruled the LAPD could not categorically deny requesters these records because it was insane to state (and more insane to expect people to believe) that every single driver in the state was the subject of a criminal investigation.
A bit more credibly, the Chula Vista PD claimed it would be “unreasonably burdensome” to redact footage and/or sort through its recordings to determine which were actually linked to criminal investigations.
Neither of these arguments manages to secure much sympathy from the court. While the court isn’t willing to rule that all drone footage is subject to disclosure under the CPRA, it’s equally unwilling to say that all of it is exempt. The truth lies somewhere in the middle.
We agree with Castañares that the superior court erred in determining, as a matter of law, all video footage from the drone program is exempt under section 7923.600, subdivision (a) as records of investigations. However, it might be the case, after further inquiry, consistent with this opinion, that the majority of the video footage is exempt. That said, we cannot make that determination on the record before us.
The decision [PDF] notes the case of the “everyone’s under investigation” ALPR data, which, while mildly instructive, is a bit different from what’s going on here. While the claim that every bit of footage is subject to this public records law exemption is a bit much, it’s not quite as egregious as the argument raised by the LAPD in its ALPR records lawsuit.
Although Castañares claims the instant matter is analogous to ACLU Foundation, we observe a key difference between the two cases in considering whether the requested drone video footage falls under the records of investigations exemption of the CPRA. In ACLU Foundation, the ALPR scans were random and not aimed at any particular person or in response to any call to service from the public. In contrast, here, the drone video footage is recorded only after an officer determines a drone should be dispatched in response to a 911 call. Thus, unlike the ALPR scans in ACLU Foundation, the drone video footage in the instant matter required an act of discretion by the City’s police. This is a critical difference between the two programs and underscores why we are not persuaded that ACLU Foundation is instructive regarding the application of the records of investigations exemption here.
Since it’s somewhere in the middle, the appeals court says more information is needed to determine what can and can’t be released. Any arguments over what’s been withheld can be taken up with the lower court.
Instead of adopting such an all-encompassing rule, we conclude a more nuanced approach to the drone video footage is apt. The drone video footage should not be treated as a monolith, but rather, it can be divided into separate parts corresponding to each specific call. Then each distinct video can be evaluated under the CPRA in relation to the call triggering the drone dispatch. Further, as an initial determination, the City is well equipped to categorize the drone video footage in this manner. However, we do not propose to instruct the City regarding what process it must use to evaluate the drone video footage or suggest that a City designee must watch the footage to make the necessary determinations. Indeed, it could be more efficient for the City to simply review call logs, AARs, and other related information to ascertain what drone video footage falls into the three categories. After the City categorizes the drone video footage, Castañares then should be permitted the opportunity to challenge or otherwise question any of the determinations the City made. To the extent the parties disagree on the categorization of the drone video footage, we trust the trial court to resolve those issues of disputed fact…
This ruling will immediately affect the Chula Vista PD, which must start reviewing footage and turning it over to the records requester. But, as the EFF points out in its post, it won’t be the only law enforcement agency affected. Chula Vista may have taken the lead with drone deployments in response to 911 calls, but at least three other law enforcement agencies in California have since done the same thing.
This victory sends a message to other agencies in California adopting copycat programs, such as the Beverly Hills Police Department, Irvine Police Department, and Fremont Police Department, that they can’t abuse public records laws to shield every second of drone footage from public scrutiny.
The state public records law instructs agencies to err on the side of disclosure. But government agencies, more often than not, abuse blanket exemptions when not using other methods to limit responses or deter requests. This ruling, while limited, at least alters these contours a bit, which means there’s a bit more transparency in California going forward.
Filed Under: california, chula vista pd, cpra, investigative record, police, police drones, public records
Substack Realizes Maybe It Doesn’t Want To Help Literal Nazis Make Money After All (But Only Literal Nazis)
from the you-don't-have-to-hand-it-to-the-nazis dept
Last year, soon after Elon completed his purchase of (then) Twitter, I wrote up a 20 level “speed run” of the content moderation learning curve. It seems like maybe some of the folks at Substack should be reading it these days?
As you’ll recall, last April, Substack CEO Chris Best basically made it clear that his site would not moderate Nazis. As I noted at the time, any site (in the US) is free to make that decision, but those making it shouldn’t pretend that it’s based on any principles, because the end result is likely to be that you have a site full of Nazis and… that tends not to be good for business because other people you might want to do business with might not want to be on the site welcoming Nazis.
Thus, it should not have been shocking when, by the end of the year, Substack had a site with a bunch of literal Nazis. And, no, we’re not just talking about people with strong political viewpoints that lead people who oppose them to call them Nazis. We’re talking about people who are literally embracing Nazism and Nazi symbols.
And Substack was helping them make money.
Even worse, Substack co-founder Hamish McKenzie put out a ridiculous self-serving statement pretending that their decision to help monetize Nazis was about civil liberties, even as the site regularly deplatformed anything about sex. At that point, you’re admitting that you moderate, and then it’s just a question over which values you moderate for. McKenzie was claiming, directly, that they were cool with Nazis, but sex was bad.
The point of the content moderation learning curve is not to say that there’s a right way or a wrong way to handle moderation. It’s just noting that if you run a platform that allows users to speak, you have to make certain calls on what speech you’re going to allow and what you’re not going to allow — and you should understand that some of those choices have consequences.
In the case of Substack, some of those consequences were that some large Substack sites decided to jump ship. Rusty Foster’s always excellent “Today in Tabs” switched over to Beehiiv. And then, last week, Platformer News, Casey Newton’s widely respected newsletter with over 170,000 subscribers, announced that if Substack refused to remove the Nazi sites, it would leave.
Content moderation often involves difficult trade-offs, but this is not one of those cases. Rolling out a welcome mat for Nazis is, to put it mildly, inconsistent with our values here at Platformer. We have shared this in private discussions with Substack and are scheduled to meet with the company later this week to advocate for change.
Meanwhile, we’re now building a database of extremist Substacks. Katz kindly agreed to share with us a full list of the extremist publications he reviewed prior to publishing his article, most of which were not named in the piece. We’re currently reviewing them to get a sense of how many accounts are active, monetized, display Nazi imagery, or use genocidal rhetoric.
We plan to share our findings both with Substack and, if necessary, its payments processor, Stripe. Stripe’s terms prohibit its service from being used by “any business or organization that a. engages in, encourages, promotes or celebrates unlawful violence or physical harm to persons or property, or b. engages in, encourages, promotes or celebrates unlawful violence toward any group based on race, religion, disability, gender, sexual orientation, national origin, or any other immutable characteristic.”
It is our hope that Substack will reverse course and remove all pro-Nazi material under its existing anti-hate policies. If it chooses not to, we will plan to leave the platform.
As a result of those meetings, Substack has now admitted that some of the outright Nazis actually do violate “existing” rules, and will be removed.
Substack is removing some publications that express support for Nazis, the company said today. The company said this did not represent a reversal of its previous stance, but rather the result of reconsidering how it interprets its existing policies.
As part of the move, the company is also terminating the accounts of several publications that endorse Nazi ideology and that Platformer flagged to the company for review last week.
The company will not change the text of its content policy, it says, and its new policy interpretation will not include proactively removing content related to neo-Nazis and far-right extremism. But Substack will continue to remove any material that includes “credible threats of physical harm,” it said.
As law professor James Grimmelmann writes in response: "As content moderation strategies go, 'We didn't realize until now that the Nazis on our platform were inciting violence' perhaps raises more questions than it answers."
Molly White, who remains one of the best critics of tech-boosterism, also noted that Substack's decisions seemed designed to piss off the most people possible: first by coddling the Nazis (angering most people, who hate Nazis), and then by angering the people who had cheered on the "we don't moderate Nazis" stance.
In the end, Substack is apparently removing five Nazi newsletters. As White notes, this will piss off the most people possible. The people who want Substack to do more won’t be satisfied and will be annoyed it took pointing out the literal support for genocide for Substack to realize that maybe they don’t want literal Nazis. And the people who supported Substack will be annoyed that Substack was “pressured” into removing these accounts.
Again, there are important points in all of this, and it’s why I started this post off by pointing to the speed run post at the beginning. You can create a site and say you’ll host whatever kinds of content you want. You can create a site and say that you won’t do any moderation at all. Those are valid decisions to make.
But they're not decisions that are in support of "free speech," because a site that caters to Nazis is not a site that caters to free speech. As we've seen time and time again, such sites drive away people who don't like being on a site associated with Nazis, and so you're left in a situation where you're really just supporting Nazis and not much else.
Furthermore, for all of McKenzie's pretend high-minded talk about "civil liberties" and "freedom," it's now come out that he had no problem at all putting his fingers on the scale: he assembled a list of (mostly) nonsense peddlers to sign a letter in support of his own views. McKenzie literally organized the "we support Substack supporting Nazis" letter-signing campaign. Which, again, he's totally allowed to do, but it calls into question his claimed neutrality in all of this. He's not running a "neutral" site that simply hosts speech. He's created a site that hosts some speech and doesn't host other speech, that promotes some speech and doesn't promote other speech.
Those are all choices, and they have nothing to do with supporting free speech.
Running a private website is all about tradeoffs. You have to make lots of choices, and those choices are difficult and guaranteed to piss off many, many people (no matter what you do). For what it's worth, this is still why I think a protocol-based solution beats a centralized solution every time: with protocols you can set up a variety of approaches and let people figure out what works best, rather than relying on one centralized system.
Substack is apparently realizing that there were some tradeoffs to openly supporting Nazism, and will finally take some action on that. It won't satisfy most people, and now it's likely to piss off the people who were excited about Nazis on Substack. But, hey, it's one more level up on the content moderation speed run.
Filed Under: content moderation, nazi bar, nazis
Companies: platformer, substack
Wherein The Copia Institute Asks The Second Circuit To Stand Up For Fair Use, The Internet Archive, And Why We Bother To Have Copyright Law At All
from the dear-court-please-fix-this dept
December was not just busy with Supreme Court briefs. The Copia Institute also joined many others, including copyright scholars and public interest organizations, in filing an amicus brief to support the Internet Archive’s appeal at the Second Circuit, seeking to overturn the troubling ruling holding its Open Library to be copyright infringement.
We've written about this case several times before, including about the original decision. At issue is how the Internet Archive has solved the problem of being a library in a way where geography doesn't matter: instead of lending out physical copies of books, it lends out scanned copies, so readers can get to a book no matter how far away they are from it. Just like a physical library, the Internet Archive lends out books one at a time, even in digital form. The one exception was a brief period at the beginning of the pandemic, when the sudden lockdown isolated people from the physical books they were otherwise entitled to access, and unlimited loans appeared justified in order to functionally restore the access readers otherwise would have had.
Publishers whose books were being scanned and lent, however, took issue with this lending and sued, not just over the brief period of unlimited lending but over all of the Internet Archive's digital lending, arguing that by virtue of their copyrights only they were entitled to get digital copies of books into readers' hands. The district court judge agreed and found the Internet Archive to be infringing, even though that finding required a fair use analysis so truncated as to effectively obviate the doctrine and the public interests, as well as constitutional interests, it is designed to serve.
The Internet Archive's own brief does a good job explaining how the district court got the fair use analysis wrong. Our amicus brief discussed the bigger picture of what it would mean if fair use couldn't apply here, including constitutionally: once again we reminded the courts that copyright law is subject to two important constitutional limitations.
First, copyright law must promote the progress of science and the useful arts. Congress is only constitutionally entitled to legislate in this area when the legislation it produces meets that goal; legislation that does not meet this goal, or, worse, undermines it, is beyond the scope of its authority to pass and thus unconstitutional. But we weren't arguing that copyright law is per se unconstitutional on this basis – after all, the statute includes the doctrine of fair use to help ensure that this legislative goal is met. Instead we argued that the courts had to give that part of the statute meaning, or else they would be the ones rendering the statute unconstitutional by interpreting it in a way that stripped it of that knowledge-enhancing effect.
Second, Congress is also limited in its legislative power by the First Amendment: Congress shall make no law that interferes, for instance, with freedom of expression. And, as we've noted a lot lately in our comments to the Copyright Office about AI, freedom of expression inherently includes the right to read. So for copyright law to be constitutional it also can't interfere with that right. Here the district court's decision would interfere with it directly, effectively allowing copyright law to stand between books and the readers entitled to read them by privileging copyright owners with a preclusive power the statute does not actually give them – nor could give them, given the constitutional limitations constraining how Congress may write its statute.
Finally, we argued that these concerns were not just academic. If the district court is upheld, fewer people will get to read books – even books that the Internet Archive lawfully owned, and that readers would otherwise be entitled to read (and often not otherwise get to read). Keeping people from reading seems like the last thing copyright law should be doing, especially when the whole point of it is to make sure the public actually has things to read. Hopefully the Second Circuit will recognize how destructively counterproductive the district court's decision was and reverse it.
Filed Under: 1st amendment, copyright, fair use, free speech, libraries
Companies: hachette, internet archive
Florida Senator Introduces Bill That Would Make Accusations Of Racism, Transphobia De Facto Defamation
from the well,-fuck-the-first-amendment-I-guess dept
Things are still batshit insane in the Florida legislature. Again. Apparently, the state’s government won’t be satisfied until it’s attempted to violate every single constitutional amendment (except the 2nd!) via godawful bills crafted by godawful people.
The latest insanity is a bill [PDF] written by state senator Jason Brodeur. It aims to completely rewrite defamation law (and completely undermine the First Amendment) so that people like Brodeur can sue anyone who calls them racist, bigoted, transphobic, homophobic, or anything along those lines.
This bill has been crafted by an absolute idiot who either doesn’t know or doesn’t care that court after court after court after court has ruled that statements implying someone is bigoted (no matter what form of bigotry it is) are protected speech. It’s all opinion. These statements aren’t actionable under defamation law because they cannot be proven to be true or false. These statements are made by people who base their opinion on what someone has said or done. And while it’s terrible to be on the receiving end of these accusations if they’re false (or even just misguided), this law has been written solely to silence the critics of people who do engage in what appear to be bigoted actions.
The most obvious beneficiaries of this pile of First Amendment violations would be Republican legislators in Florida, who have spent most of the past few years passing legislation that specifically targets LGBTQ+ residents. So, of course they would love a law that allows them to sue people for calling out their bigotry while simultaneously shifting the burden of proof to defendants. You know, the exact reverse of the legal process. It will also benefit the worst members of their voting bloc, so there’s that added benefit.
The only way to demonstrate how fucked up this bill is is to quote from it generously.
The first thing the law does is strip long-held protections from journalists, allowing them to be sued just as easily as anyone else.
[P]roviding that provisions concerning journalist’s privilege do not apply to defamation claims when the defendant is a professional journalist or media entity
This proposed addition to the state’s statutes appears to rewrite the law to ensure that only plaintiffs in these lawsuits are capable of recovering costs and fees, even if they do not prevail.
770.09 Application of costs and attorney fees in defamation cases.—The fee-shifting provisions of s. 768.79 do not apply to defamation or privacy tort claims. Notwithstanding any other provision of law, a prevailing plaintiff on a defamation or privacy tort claim is entitled to an award of reasonable costs and attorney fees.
Here's where the bill really gets going. It basically says implying someone is a bigot is not only legally actionable, but is per se defamation, i.e., presumptively defamatory, which shifts the burden of proof to the person accused of defamation.
(1) A fact finder shall infer actual malice for purposes of a defamation action when:
(a) The defamatory allegation is fabricated by the defendant, is the product of his or her imagination, or is based wholly on an unverified anonymous report;
(b) An allegation is so inherently implausible that only a reckless person would have put it into circulation;
(c) There are obvious reasons to doubt the veracity of the defamatory allegation or the accuracy of an informant’s reports. There are obvious reasons to doubt the veracity of a report when:
1. There is sufficient contrary evidence that was known to or should have been known to the defendant after a reasonable investigation; or
2. The report is inherently improbable or implausible on its face; or
(d) The defendant willfully failed to validate, corroborate, or otherwise verify the defamatory allegation.
Not only would the law make it per se defamation to express your belief that someone is bigoted or has acted in a bigoted way, but the law deprives defendants of affirmative defenses, and, indeed any defenses at all, including that most famous of defenses: the goddamn First Amendment.
(2) An allegation that the plaintiff has discriminated against another person or group because of their race, sex, sexual orientation, or gender identity constitutes defamation per se.
(a) A defendant cannot prove the truth of an allegation of discrimination with respect to sexual orientation or gender identity by citing a plaintiff’s constitutionally protected religious expression or beliefs.
(b) A defendant cannot prove the truth of an allegation of discrimination with respect to sexual orientation or gender identity by citing a plaintiff’s scientific beliefs.
The end result of this stacked deck is libel cases where plaintiffs sue people over protected speech and have an extremely high chance of walking away with “statutory damages of at least $35,000.”
Then the bill moves on to pretending New York Times v. Sullivan never happened:
A public figure does not need to show actual malice to prevail in a defamation cause of action when the allegation does not relate to the reason for his or her public status.
And from there, the chilling effect gets even more amped up by making the “editing” of news articles, reports, and quotes from sources part of the libel process (as it were).
(3) Editing any form of media so that it attributes something false or leads a reasonable viewer to believe something false about a plaintiff may give rise to a defamation claim for false light.
What the actual fuck. I mean, this basically turns every lawsuit against a journalist or op-ed writer into a defamation slam dunk.
Plaintiff: Was this article edited in any way?
Journalist defendant: Of course. Every article goes through an editing process.
Plaintiff: I rest my case.
Unbelievably, the bill doesn’t attempt to erase the state’s anti-SLAPP law. But that’s probably because the bill turns SLAPP suits into wins for people who want to silence their critics by taking them to court for criticizing them. As far as this law is concerned, any action brought under it is a legitimate defamation lawsuit, and not something less legitimate that might be subject to the existing anti-SLAPP law.
This bill pretends the First Amendment does not exist. It operates in a vacuum where decades of Supreme Court precedent don’t immediately invalidate pretty much every word of this insipid bit of legislative garbage. If the legislature is stupid enough to pass this (it might be!) and Governor Ron DeSantis is dumb enough to sign it (ALL SIGNS POINT TO YES), it’s going to get laughed out of court the moment a judge lays eyes on it. If the intention is to make the Florida legislature look even more ridiculous than it already does, mission accomplished.
Filed Under: 1st amendment, actual malice, defamation, defamation per se, false light, florida, free speech, jason brodeur
Daily Deal: StackSkills Unlimited
from the good-deals-on-cool-stuff dept
StackSkills is the premier online learning platform for mastering today’s most in-demand skills. Now, with this exclusive limited-time offer, you’ll gain access to 1000+ StackSkills courses for life! Whether you’re looking to earn a promotion, make a career change, or pick up a side hustle to make some extra cash, StackSkills delivers engaging online courses featuring the skills that matter most today, both personally and professionally. It’s on sale for a short time for only $19.97.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Filed Under: daily deal