Judge Says Montana’s TikTok Ban Is Obviously Unconstitutional
from the the-states-are-out-of-control dept
This wasn’t hard to predict. When Montana passed its TikTok ban in April, we called it “laughably unconstitutional.” Montana’s very silly Attorney General, Austin Knudsen, who claimed to have been the driving force behind the bill, had insisted that the state would be vindicated in court. As we noted when the bill passed, his public defense was to admit that the point of the bill was to shut down online content he didn’t like, damning his own case.
So far, Knudsen has not been vindicated at all.
Federal judge Donald Molloy made the easy call that the bill is obviously unconstitutional as a suppression of 1st Amendment rights.
Plaintiffs argue SB 419’s total ban on TikTok unconstitutionally targets speech and that the law is subject to the highest level of constitutional scrutiny. The State disagrees, arguing that to the extent SB 419 implicates the First Amendment at all, it merely regulates expressive nonspeech conduct, thus it need only pass intermediate scrutiny. Like the curate’s egg, neither argument is entirely persuasive. However, because Plaintiffs have shown that SB 419 is unlikely to pass even intermediate scrutiny, it likely violates the First Amendment.
While he does not fully buy the argument of the plaintiffs (both TikTok and some TikTok users), he does not buy Knudsen’s argument at all. The argument that this is just a standard “consumer protection” bill fails, because consumer protection bills don’t target speech like this bill does:
The State’s defense of SB 419 rests on the proposition that the First Amendment is not implicated at all because the bill does not regulate speech. It argues instead that because the Legislature “may make its own reasoned judgment about what conduct is permitted or proscribed within its borders,” State Farm Mut. Auto Ins. Co. v. Campbell, 538 U.S. 408, 422 (2003), its TikTok ban can sit comfortably alongside its many other generally applicable consumer protection laws. The State and Amicus Virginia, (see Doc. 70), are correct that consumer protection laws “fall in an area that is traditionally within the state’s police powers to protect its own citizens.” Aguayo v. U.S. Bank, 653 F.3d 912, 917 (9th Cir. 2011). However, SB 419 is not merely a generally applicable consumer protection statute without any First Amendment implications
Montana relied heavily on Arcara v. Cloud Books, a case involving an adult bookstore where prostitution was also taking place. The government shut the store down as a “public nuisance” based on the prostitution. The bookstore argued that shutting it down was an attack on protected speech, but the Supreme Court noted that the store was shut down over the prostitution, not over speech.
Judge Molloy points out just how different this case is:
First, SB 419 is not a generally applicable law like the one in Arcara, which authorized the closure of any building found to be a public health nuisance. Unlike that law, SB 419 targets one entity, which on its face makes it not generally applicable. Second, the Court in Arcara determined that the conduct there was “nonspeech,” subject to New York’s general regulation, and that it had “absolutely no connection to any expressive activity.” 478 U.S. at 707 n.3. For both groups of Plaintiffs, SB 419 implicates traditional First Amendment speech. It does so for User Plaintiffs by banning a “means of expression” used by over 300,000 Montanans. See Minneapolis Star & Trib. Co. v. Minn. Com’r of Revenue, 460 U.S. 575, 582–83 (1983) (holding a statute singling out expressive activity violates the First Amendment even when it is apparently based on a nonexpressive activity). Without TikTok, User Plaintiffs are deprived of communicating by their preferred means of speech, and thus First Amendment scrutiny is appropriate.
Likewise, SB 419 implicates TikTok’s speech because the application’s decisions related to how it selects, curates, and arranges content are also protected by the First Amendment. SB 419 prevents the company from “the presentation of an edited compilation of speech generated by other persons . . . which, of course, fall squarely within the core of First Amendment security.” Hurley v. Irish-Am. Gay, Lesbian & Bisexual Grp. of Bos., 515 U.S. 557, 570 (1995); see also Miami Herald Publ’g Co. v. Tornillo, 418 U.S. 241, 258 (1974) (holding that a newspaper’s moderation of third-party content is generally protected by the First Amendment). These speech concerns place SB 419 and the activity it bans squarely within the First Amendment’s protections.
Of course, just because it implicates speech does not automatically mean it violates the 1st Amendment. That’s where the question of what scrutiny level to apply comes into play. Similar to the California ruling tossing out that state’s Age Appropriate Design Code law, the court here finds that it doesn’t even need to explore the nuances of strict scrutiny vs. intermediate scrutiny, because the law doesn’t even pass intermediate scrutiny (with its much lower bar).
Under intermediate scrutiny, the state needs to show that there is an “important government interest” behind the law and that it does “not burden substantially more speech than necessary to further those interests.” The law fails that.
Conceding for the sake of this argument that the State may have at least an important state interest in SB 419, the law is not narrowly tailored, nor does it leave open any alternative channels for targeted communication of information. SB 419 does not pass intermediate scrutiny review
Of course, getting rid of the “for the sake of this argument” part, the court also finds no real important state interest here:
The State attempts to persuade that its actual interest in passing this bill is consumer protection. However, it has yet to provide any evidence to support that argument. See contra Jacobs, 526 F.3d at 435 (noting that sworn affidavits from government officials are useful in demonstrating a government’s purpose in passing a bill). Because Montana does not have an important government interest in regulating foreign affairs, and because the State has not demonstrated the Legislature’s consumer protection interest in passing the bill, it is likely that Plaintiffs will succeed in showing SB 419 does not advance an important government interest as stated in the Act’s preamble and text.
So, it fails the first part of intermediate scrutiny. It fails the second part too:
The State claims that SB 419 is narrowly tailored and meets this standard. SB 419 only bans TikTok, not the other major social media applications, because of its grave risk to Montanans, e.g., Chinese spying on Montanans. In doing so, it argues, SB 419 “eliminate[d] the exact source of evil it sought to remedy.” City of L.A. v. Taxpayers for Vincent, 466 U.S. 789, 808 (1984). Plaintiffs argue that SB 419 burdens substantially more speech than is necessary to fulfill even its purported interests. Because the Legislature used an axe to solve its professed concerns when it should have used a constitutional scalpel, Plaintiffs are correct.
First, SB 419 “burden[s] substantially more speech than is necessary.” Ward, 491 U.S. at 799. This is apparent on the law’s face. SB 419 completely bans TikTok in Montana. It does not limit the application in a targeted way with the purpose of attacking the perceived Chinese problem. At the October 12 hearing, the State argued that the law is narrowly tailored because it is the only way the Legislature could have stopped the purportedly improper behavior it wanted to prevent. In its brief, the State cites a March 2023 article from Reuters reporting on a group of 45 United States attorneys general who moved to file in a Tennessee state court as amici curiae to argue that TikTok has deceptively and improperly ignored requests to produce internal company documents in response to state investigations. (See Doc. 51 at 27 n.14 (citing David Shepardson, State AGs demand TikTok comply with US consumer protection investigations, Reuters, https://perma.cc/4DR9-LQ3M (Mar. 6, 2023)).) The State suggests that any legislation less stringent than an all-out ban would not be properly tailored when the company has already displayed a public willingness to disobey state regulatory requests. However, it is unclear how this single investigation into TikTok warrants a complete ban on the application.
Even worse, the state presented no evidence that banning TikTok will actually protect kids in Montana. The court notes that the same data collection practices the state claims TikTok uses to harm kids happen on other social media platforms, and if the concern is “China” getting data on kids, China can still buy that data via data brokers.
Second, it is likely that SB 419 is not narrowly tailored because the State has not provided any evidence that the ban “will in fact alleviate these harms in a direct and material way.” Turner Broadcast Sys., 512 U.S. at 664. In the first instance, it is well-established that other social media companies, such as Meta, collect similar data as TikTok, and sell that data to undisclosed third parties, which harms consumers. See, e.g., In Re Facebook, Inc. Internet Tracking Litig., 956 F.3d 589, 596 (9th Cir. 2020); In Re Facebook, Inc. Consumer Priv. User Profile Litig., 2021 WL 10282172, at *4 (9th Cir. Oct. 11, 2021). Additionally, there are many ways in which a foreign adversary, like China, could gather data from Montanans. For example, it could do so by “purchasing information from data brokers (a practice in which U.S. intelligence agencies also engage), conducting open-source intelligence gathering, and hacking operations like China’s reported hack of the U.S. Office of Personnel Management.” (Doc. 15 at ¶ 13.) Thus, it is not clear how SB 419 will alleviate the potential harm of protecting Montanans from China’s purported evils.
And, although the State does not explore this argument in any detail in its briefing, SB 419 does not reasonably prevent minors from accessing dangerous content on the Internet. It is not hard to imagine how a minor may access dangerous content on the Internet, or on other social media platforms, even if TikTok is banned. This “raises serious doubts about whether the government is in fact pursuing” consumer protection interests, Brown, 564 U.S. at 802 (analyzing a law under a strict scrutiny analysis), or targeting the application simply because of its connection to China.
The court also finds that the Montana law is almost certainly preempted by federal law. Once again, Knudsen’s own grandstanding helps to sink the bill. He went on and on about how he was fighting evil Communist China with it, and the judge notes that that’s kind of a federal issue, not a state issue, and federal law preempts the states from getting involved in foreign affairs like this.
The bill’s legislative history further supports this conclusion. For example, in the first Montana House of Representatives hearing on the bill, Defendant Attorney General Knudsen explained: “TikTok is spying on Americans, period. TikTok is a tool of the Chinese Communist Party. It is owned by a Chinese company, and under China law, if you are based in China, you will cooperate with the Chinese Communist Party, period.” (Doc. 13-2 at 5.) He further explained his belief that China sees “a war with the United States as inevitable, and [China is] using TikTok as an initial salvo in that war.” (Id. at 6.) This, he explains, is a reason the bill is necessary.
[….]
The Legislature may have set out to protect Montanans from an allegedly grave threat. But “however laudable it may be, [it] is not an area of traditional state responsibility.”
The court also finds that the law likely violates the dormant commerce clause (which limits the ability of states to regulate out-of-state commerce) as well:
While the State argues that the law’s local benefits are significant, and they may be, it has not provided any evidentiary support for those benefits. Thus, Plaintiffs have demonstrated a likelihood that SB 419 puts a burden on interstate commerce that exceeds its local benefits.
And thus the law is enjoined. It is likely that Knudsen will appeal, because this whole thing is just a grandstanding ploy to get his name and face in the news more often anyway, so why stop now?
But, once again, we see a state trying to pass unconstitutional suppressions of free speech rights. As we’ve noted, this is neither a red state nor a blue state issue, as basically all states seem to be engaging in such censorial behavior. Thankfully, many (though not all) of the courts seem to be recognizing this nonsense for what it is and rejecting these laws.
Filed Under: 1st amendment, austin knudsen, donald molloy, free speech, intermediate scrutiny, montana, tiktok ban
Companies: tiktok
Yes, The First Amendment Protects Displaying The ‘Thin Blue Line’ Flag Even In Publicly Owned Buildings
from the but-especially-unpopular-speech dept
Most people seem to understand the First Amendment protects their right to say stupid or offensive things, especially when they’re the ones saying them. These same people often forget the First Amendment does not protect them from counter-speech, during which they may be publicly decried as stupid or offensive.
The same goes for most government employees in most situations. First Amendment protections aren’t quite as free-ranging when public servants are involved, but they certainly don’t just disappear because the expression springs from government sources.
That’s the upshot of this decision, which features some hot government-on-government action. On one side, we have police officers and their union litigating on behalf of their First Amendment rights after being “silenced” by a township resolution that targeted just one particular form of expression: the display of the cop-bastardized version of the American flag known as the “Thin Blue Line” flag.
The decision [PDF], handed down by a federal court in Pennsylvania (and brought to us by Courthouse News Service), contains some handy depictions just in case no one’s familiar with law enforcement’s preferred “us vs. them” shorthand.
There’s this version, which is described by the law enforcement plaintiffs in their lawsuit as representing a “show of support for [and] a solidarity with member[s] of law enforcement, which includes, police officers.” Not only that but it allegedly “represents the preservation of the rule of law” and the “sacrifice of fallen law enforcement officers.”
Whew. That’s a lot to ask from one slightly altered American flag. But that’s what the suing officers claim, as well as the union reps from the Springfield Township Police Benevolent Association (PBA), which has not-so-boldly decided to incorporate this separatist version of the American flag into its logo:
Kind of gross, to be honest. The “blue line” flag doesn’t do the things the plaintiffs claim it does. Instead, when displayed by cops or their unions, it represents the “us vs. them” mentality that permeates US law enforcement. The “thin blue line” doesn’t represent cops saving us non-cops from criminal anarchy as much as it represents law enforcement’s insular culture. But I suppose a “circle the wagons” flag wouldn’t be nearly as popular.
In this mockup, the stars represent police supervisors, union reps, and other officers on the scene surrounding an officer involved in an egregious violation of rights — a show of solidarity that includes crafting narratives, synchronizing statements, and applying generous amounts of white-out to in-progress reports to ensure everyone is telling the same story the PD’s PR reps are currently delivering to nearby reporters. More accurate but certainly far less inspiring.
Anyway… back to the lawsuit.
Springfield tried to break up the insular cop culture by enacting a resolution prohibiting the display of the “thin blue line” flag “on all township property” earlier this year. That resolution became the immediate target of this lawsuit, filed by a handful of PD employees, as well as their police union.
Why? Because these cops and their reps have plastered the “blue line” flag all over the police station, which resides inside the main township government building. While not displayed (at least not prominently) in areas accessible by the public, the fake-ass flag is displayed nearly everywhere else in the station.
Although there are no depictions of the Flag in the lobby area of the station (Doc. No. 47-1 at ¶ 80), it does appear in other areas of the station, to which the public has limited access (id. at ¶ 75), including:
- A bulletin board displaying patches from other police departments depicting the Thin Blue Line American Flag,
- A wooden Thin Blue Line American Flag hanging on a wall,
- A Thin Blue Line American Flag hanging on a wall,
- Thin Blue Line American Flags displayed in the safety office,
- A wooden “ballot type” box,
- On a recycling bin, and
- On challenge coins displayed on officers’ desks.
The court notes the township perhaps had a legitimate reason to forbid the posting of this flag on town property.
While Plaintiffs revere the Thin Blue Line American Flag, many members of the public, including residents of Springfield Township, view it as a symbol of police brutality and racial animosity.
The town passed its resolution, prompted in part by the police union’s decision to revamp its logo to incorporate the “blue line” flag. The police union and these police officers wanted nothing to do with dispelling the perception that those displaying this particular flag were supportive of law enforcement actions/officers who engaged in police brutality and/or racial animosity. Instead, they sued.
They’re right, even if their insistence on displaying this flag is not nearly as correct. Whatever injury they might suffer (which appears to be, at most, a reprimand from city officials and/or removal of the offending flag) is still an injury. The speech is protected and this specific targeting of only one certain form of expression by town employees is exactly the sort of thing the First Amendment strictly forbids.
The Township has not, and indeed, cannot, contest that the Resolution is a viewpoint regulation—it prohibits employees, agents, and consultants from displaying only the Thin Blue Line American Flag, not from displaying flags or political speech generally.
Not only that, but if the town wished to eliminate expression that might undermine trust in local law enforcement or discourage racially divisive behavior, it needed to go even further than it did here. On one hand, the law goes too far already. On the other hand, it doesn’t go far enough, at least according to the town’s own assertions in defense of its hastily erected (and poorly thought out) resolution against this one particular form of speech.
In addition to being overbroad, the Resolution is underinclusive in that Township employees, including police officers, are allowed to engage in other forms of discourse that could exacerbate racial tensions and undermine public confidence in the Police Department. For example, nothing in the Resolution precludes an officer, while on duty and in uniform, from voicing opposition to the Black Lives Matter movement or for example, carrying a coffee cup that says, “Blue Lives Matter.” Both forms of speech would seem to trigger the same concerns that the Township is trying to address through the Resolution, perhaps in an even more direct way.
Sure, it’s a little bit more complex than that because it involves the government regulating the government, but at the end of it all, it’s still impermissible. The town’s cops are free to be divisive and the town’s residents are free to think the cops are being divisive. But the town itself can’t really do much about this other than encourage its officers to be less divisive. The First Amendment protects this expression, though, as divisive as it may be. And that’s how it should be. Even divisive expression is still protected expression.
Filed Under: 1st amendment, springfield township, thin blue line
Daily Deal: Skill Success
from the good-deals-on-cool-stuff dept
Skill Success gives you access to over 4,000 online video courses from hundreds of the top experts around the world. Learn new skills from our expansive course library with topics such as Languages, Business, Technology, Meditation, Cooking, Music, and everything in between. It’s on sale for $120.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Filed Under: daily deal
Elon Musk Says Only Those Who Pay Him Deserve Free Speech
from the price-check-on-freedom-on-aisle-7-please dept
Okay, okay, I think this is the last of my posts about Elon Musk’s unhinged appearance at the DealBook Summit with ill-prepared interviewer Andrew Ross Sorkin. We already covered his cursing out advertisers, while predicting that “earth will judge” them, as well as his statement that AI copyright lawsuits don’t matter because “Digital God” will be here before it’s over, but I also wanted to cover one more exchange, in which Musk effectively says that only those who give him money deserve “free speech” (his definition of free speech).
Again, Sorkin does a terrible job of setting up the question, so I’ll do what he should have done and explain the context. Sorkin is asking about a few times recently where ExTwitter has fiddled with the knobs to punish sites Elon appears not to like. In September, it was reported that one of those sites was the NY Times: starting in July, something changed at ExTwitter so that NY Times tweets were suppressed in some manner (what some might call “shadow banned,” even though that term’s meaning has changed over time).
Since late July, engagement on X posts linking to the New York Times has dropped dramatically. The drop in shares and other engagement on tweets with Times links is abrupt, and is not reflected in links to similar news organizations including CNN, the Washington Post, and the BBC….
Now, remember, nonsense conspiracy theories about “shadow banning” were one of the reasons why Elon insisted he had to take over the company “to protect free speech.” But as soon as he was at the controls, he immediately started using the same tools to “shadow ban” some of those he disliked.
Anyway, Sorkin asks Musk about this, and Musk’s response is somewhat incredible to see. He more or less says that if you don’t give him money, you don’t deserve his version of “free speech” (which is the ability to post on ExTwitter).
The discussion starts out weird enough:
ARS: The New York Times newspaper it appeared over the summer, to be throttled.
Elon: What? What did?
ARS: The NY Times.
Elon: Well, we do require that that everyone has to buy a subscription and we don’t make exceptions for anyone and and I think if I want the New York Times I have to pay for a subscription and they don’t give me a free subscription, so I’m not going to give them a free subscription
First of all, what? What do you mean “we do require that everyone has to buy a subscription”? That’s literally not true. Over 99% of users on ExTwitter use the platform for free. Only a tiny, tiny percentage pay for a subscription.
Sorkin tries to bring it back around to throttling, but Musk continues to talk nonsensically about subscriptions, which have fuck all to do with what Sorkin is asking him about.
ARS: But were you throttling the New York Times relative to other news organizations? Relative to everybody else? Was it specific to the to the Times?
Musk: They didn’t buy a subscription. By the way, it only costs like a thousand dollars a month, so if they just do that then they’re back in the saddle.
ARS: But you are saying it was throttled.
Musk: No, I’m saying…
ARS: I’m saying I mean was there a conversation that you had with somebody you said, ‘look, you know, I’m unhappy with the Times, they should either be buying the subscription or I don’t like their content or whatever.’ Whatever.
Musk: Any organization that refuses to buy a subscription is is not going to be recommended.
So, Sorkin and Musk are obviously talking at cross purposes here. Sorkin’s asking about deliberate throttling. Musk is trying to say that news orgs that don’t pay $1,000/month (which is not, as Musk implies, cheap, nor is it worth it, given how little traffic Twitter actually sends to news sites) aren’t recommended.
The correct question for Sorkin to ask here is what’s the difference between “not recommended” and “throttled,” if any, because the evidence suggests that the Times was deliberately punished beyond just not being “recommended.” And, yes, this exact kind of thing was part of what Musk said he had to buy Twitter to stop. So Sorkin jumps ahead (awkwardly) to try to sorta make that point:
ARS: But then what does that say about free speech? And what does it say about amplifying certain voices…
Musk: Well, it says free speech is not exactly free, it costs a little bit.
Which, um, is kinda a big claim. Especially given what he’s said about free speech in the past. Sorkin seems stumped for a moment and so Elon starts laughing and then comes up with some non sequitur from South Park.
Musk: You know, it’s like… uh… South Park like they say: you know freedom isn’t free, it costs a buck o’ five or whatever. So but it’s pretty cheap. Okay? It’s low cost low cost freedom.
So, again, he doesn’t actually answer the question or address the underlying issue: for all his claims that he purchased Twitter to stop the kind of knob fiddling he (incorrectly) believed was being done for ideological reasons, he’s now much more actively fiddling with the knobs himself, including suppressing the speech of those who won’t give him money.
The sense of entitlement, again, is astounding. It’s basically, “you don’t get free speech unless you pay me.”
Filed Under: andrew ross sorkin, elon musk, free speech, recommendations, shadowbanning, shadowbans, subscriptions, throttling
Companies: ny times, twitter, x
FCC To Vote On New Rules Cracking Down On Shitty Cable TV Fees
For decades, cable TV giants have nickel-and-dimed customers with a rotating assortment of bullshit cable TV fees, whether it’s a “regulatory recovery” fee (a misleadingly named fee designed to have you blaming government for industry greed), regional sports fees (charged whether or not you watch sports), or the completely meaningless “broadcast TV fee” (which has ballooned at several times the rate of inflation).
All of the fees are designed to let the company falsely advertise one price, then sock you with a higher rate when the bill comes due. And now that it has a functional voting majority for the first time in several years, the FCC says it’s looking to vote on new rules in December that could put a damper on the industry’s abuse of at least one type of fee. Maybe.
The agency’s breakdown of its proposed plan suggests the proposal will primarily focus on “early termination fees,” charged when users prematurely cancel service while under contract, and “billing cycle fees,” which require customers to pay for a complete billing cycle even if they cancel service before the end of the cycle.
“No one wants to pay junk fees for something they don’t want or can’t use. When companies charge customers early termination fees, it limits their freedom to choose the service they want,” FCC Chairwoman Rosenworcel said in a statement. “In an increasingly competitive media market, we should make it easier for Americans to use their purchasing power to promote innovation and expand competition within the industry.”
Details will matter. As will consistent enforcement (not really the FCC’s strong suit).
Early cancellation fees are also only one small part of a much larger ecosystem of bullshit fees, including the mandatory rental of a cable box. A 2019 Consumer Reports study found that about 24% of consumer bills are made up of bullshit fees, generating cable giants $28 billion in additional revenue annually. Efforts to protect consumers from these fees have been inconsistent and selective at best.
So while it’s nice to hear the FCC say all the right things about obnoxious fees, and tackle a genuine issue of annoyance (early cancellation fees), the actual scope of the rules — and whether they’re consistently enforced — will matter.
Filed Under: cable tv, competition, early termination fees, fcc, video
Ubisoft Tries To DMCA Leak Of BG&E Remaster Footage Before Finally Getting It Right
from the got-there-eventually dept
One of the more famous, and my favorite, quotations attributed to Winston Churchill is: “Americans will always do the right thing, only after they have tried everything else.” My second favorite Churchill quote, by the way, is: “Dammit, I can’t decide between these three fingers of whiskey and this tankard of champagne!”
Anyway, the point of the quotation is that sometimes it takes an organization failing by doing the wrong thing before it eventually gets to doing things the right way. Which brings me to Ubisoft. The gaming company unfortunately did an oopsie and leaked its own remastered version of Beyond Good & Evil to subscribers.
In an email to Kotaku, an Ubisoft spokesperson confirmed that the leak was all its fault.
“Due to a technical error, an early development version of Beyond Good & Evil – 20th Anniversary Edition was recently released to some Ubisoft+ subscribers,” the Ubisoft spokesperson said. “As we celebrate the 20th anniversary of Beyond Good & Evil, we’re excited to share that the official launch is set for early 2024, and we look forward to sharing more with you in the new year.”
That response to the leak is obviously fine. In fact, while we have seen gaming companies absolutely mess themselves over and over again over leaks of game or game footage, we’ve actually promoted this sort of response in the past. You acknowledge the leak, thank your fans for being so interested in the game that they are gobbling up leaked information and footage, and then you remind everyone it’s not a finished product and to stay tuned for the actual eventual awesome release of the game. Simple!
And that’s where Ubisoft eventually landed publicly on ExTwitter as well.
I say eventually because that wasn’t Ubisoft’s first reaction. As the quotation goes, Ubisoft had to try everything else first, which in this case meant attempting to put the leak genie back in the bottle via DMCA notices. DMCA notices, of course, are not designed to bury material the company itself leaked.
Ubisoft making the game available was a total accident; however, that didn’t stop folks from sharing clips and screens of the BG&E remaster on YouTube. While the company tried erasing any evidence of this ever happening, copyright striking anyone who published footage of the leaked game, Ubisoft eventually gave up and announced this morning on X/Twitter that the remaster is real.
Now, while this is Ubisoft we’re talking about, perhaps such a clear-cut lesson will finally be learned. Trying to copyright strike leaked information about the game to hell was never going to work and, in fact, probably would have had the opposite effect. Senorita Streisand, after all, is a resilient mistress.
And before someone eventually shows up in the comments suggesting that this was all a planned thing designed to get the game more attention… nah. Just nah. That’s not how Ubisoft operates and there is zero evidence that this was all some guerilla marketing approach.
Instead, this was almost certainly what it appeared to be: Ubisoft handling this the right way, only after it tried everything else.
Filed Under: beyond good and evil, copyright, dmca, leaks, takedown notice, videogames
Companies: ubisoft
Groq Sends Elon’s ‘Grok’ A Cease & Desist, Though A Funny One
from the the-likelihood-of-confusion dept
One of the things we enjoy here at Techdirt is when even those with legitimate gripes about trademark law take a bemused view of the whole thing, rather than immediately jumping to angry and overly aggressive threats. No one likes a trademark bully, even when the trademark holder might have a legitimate claim.
A few weeks ago, we mentioned that Elon Musk probably should have checked with trademark lawyers (or just done a basic internet search) before naming his xAI large language model “Grok” because there was already a well-known AI chip company in the space named Groq. Groq seems mildly annoyed at the confusion this is causing (and literally yesterday when I mentioned testing something on Groq’s AI system someone asked me why I was using Elon’s AI…), but is mostly taking it in stride.
Initially, Groq’s CEO, Jonathan Ross (who was the guest on this week’s Techdirt podcast, which was entertaining) used his own AI tools to suggest a new name for Elon’s LLM. The solution they came up with was Slartibartfast, which (unlike “Grok”) is actually from Hitchhiker’s Guide to the Galaxy, which Elon keeps insisting is what his AI is trained to be like.
Then Ross used the situation to highlight just how much faster AI running on Groq’s chips is, compared to the old, tired, slow way that Elon’s appears to be running:
Now, Ross has taken it up a notch with a clearer cease and desist, though still keeping it amusing.
Did you know that when you announced the new xAI chatbot, you used our name? Your chatbot is called Grok and our company is called Groq, so you can see how this might create confusion among people. Groq (us) sounds a lot like (identical) to Grok (you), and the difference of one consonant (q, k) only matters to scrabblers and spell checkers. Plus, we own the trademark.
We can see why you might want to adopt our name. You like fast things (rockets, hyperloops, one-letter company names) and our product, the Groq LPU™ Inference Engine, is the fastest way to run large language models (LLMs) and other generative AI applications. However, we must ask you to please choose another name, and fast.
I stand by my recommendation to name it Slartibartfast. It’s both on message with your idea of a sarcastic bot inspired by Hitchhiker’s Guide to the Galaxy and sounds sufficiently distinct from any other AI company or product (which is why I wouldn’t recommend calling it Giggle or OpenXi). Win-win! But, your call.
In making this request, we’re thinking about you as much as us. It must be annoying having all those people hitting you up on X, asking how the Groq LPU Inference Engine is able to deliver 10X better performance and precision at scale? Or how it is 10X more energy efficient and 10X more cost-effective? That’s plenty of Xs, even for you, especially when it’s not your Groq.
I get it. It was annoying when my great Aunt asked me about my new snarky chatbot over Thanksgiving dinner, but I passed her the mashed potatoes anyway.
#GroqOn
Of course, Elon has (so far) ignored all of this, and there’s a decent chance he’ll continue to do so. But I’d like to no longer need to explain every time I talk about Groq (which is doing some pretty cool stuff) that I’m not using Elon’s glitchy tech, so hopefully Elon gets around to actually changing the name.
As we’ve explained for years, of the three customary fields often linked together under the terrible and misleading term “intellectual property,” trademark is the most defensible, though only for its original intended purpose: to avoid consumer confusion. As a consumer protection tool that accurately designates the origin of a product, trademark serves a useful purpose (it’s only when it’s being used for bullying/censorship that it gets us upset). But here, Groq appears to have a very clear legitimate claim. The likelihood of confusion is extremely clear (I keep experiencing it personally!).
But, I guess it’s up to Elon if he’s actually going to change the name. He could ask Grok, but I’m guessing the responses might come a little too slowly.
Filed Under: ai, cease and desist, elon musk, grok, jonathan ross, likelihood of confusion, trademark
Companies: groq, twitter, x
An Appeals Court Broke Media Advertising, So The Copia Institute Asked The California Supreme Court To Fix It
from the huge-implications dept
A few months ago a California court of appeals issued a really terrible decision in Liapes v. Facebook. Liapes, a Facebook user, was unhappy that the ads delivered to her correlated with some of her characteristics, like her age. As a result there were certain ads, like one provided by an insurer offering a particular policy for men of a different age, that didn’t get delivered to her.
Of course, it didn’t get delivered to her because the advertiser likely had little interest in spending money to place an ad to reach a customer who would not and could not turn into a sale, since she would not have been eligible for the promotion. And historically advertisers in all forms of media – newspapers, television, radio, etc. – have preferred to spend their marketing budgets on media likely to reach the same sorts of people as would purchase their products and services. Which is why, as we explained to the California Supreme Court, one tends to see different ads in Seventeen Magazine than, say, AARP’s.
Because we also tend to see different expression in each one, as the publishing company chooses what content to deliver to which people. There’s no law that says media companies have to deliver content that would appeal to all people in all media channels, nor could there be constitutionally, because those choices of what expression to deliver to whom are protected by the First Amendment.
Or at least they were up until the court of appeals got its hands on the lawsuit Liapes brought against Facebook, arguing that letting advertisers choose which users would get which ads based on characteristics like age violated the state’s Unruh Act. The Unruh Act basically prevents a company from unlawfully discriminating against people for protected characteristics – if it offers a product or service to one customer it can’t refuse to offer it to another because of things like their age.
But Facebook isn’t a business that sells tangible products or non-expressive services; it is a media business, just like TV stations are, newspapers are, magazine publishers are, etc. Like these other businesses, it is in the business of delivering expression to audiences. True, it is primarily in the business of delivering other users’ expression rather than its own, and it is more likely to have the ability to deliver editorially-tailored expression on an individual level, but then again, increasingly so can traditional media. In any case, there is nothing about the First Amendment that keys it only to the characteristics of traditional media businesses producing media for the masses. After all, they themselves often choose which demographic to target with their own media. Conde Nast, for instance, publishes both GQ and Vogue, as well as TeenVogue, and it is surely using demographics of the targeted audience to decide what expression to provide them in each publication.
But the appeals court found Unruh Act liability when a media business uses demographic information to target an audience with certain content (including advertising content). The upshot is one of two bad outcomes. Either no media business will be able to make any sort of editorial decision based on the demographic characteristics of its intended audience – and there goes the advertising model that has sustained American media businesses for generations – or, even if those businesses are somehow left beyond the Unruh Act’s reach, the decision will introduce an artificial exception to the First Amendment to carve out a business like Facebook because… well, just because. There really is no sound rationale for treating a company like Meta differently than any other media business, but if it could be uniquely targeted by the Unruh Act, unlike its more traditional media brethren, the ruling would still gravely impact every Internet business, especially those that monetize the expression they provide with ads.
Which would be particularly troubling because businesses like Facebook are not only supposed to be protected by the First Amendment, they are supposed to be EVEN MORE PROTECTED by Section 230, which insulates them from liability arising from the expression others provide, as well as from the moderation decisions platforms like Facebook make in choosing what expression to serve audiences. The court of appeals decision impinges upon both these forms of protection, in contravention of Section 230’s pre-emption provision, which prevents states from messing with this basic statutory scheme via their own laws, of which the Unruh Act is one. After all, if there was anything actually wrong with the ad, it was the advertiser who produced it who imbued it with its wrongful quality, not Facebook. And the decision to serve it or not is an editorially-protected moderation decision, which Facebook also should have been entitled to make without liability, per Section 230.
In sum, this California appeals court decision stands to make an enormous mess for online businesses, if not every media business, and not just those that take advertising, because weakening Section 230 and the First Amendment will lead to its own dire consequences. And so the Copia Institute filed this amicus letter supporting Facebook’s petition for further review by the California Supreme Court in order to clean up this looming mess.
Filed Under: ad targeting, advertising, discrimination, liability, samantha liapes, section 230, unruh act
Companies: facebook
California Court: Passwords Are Communications, Protected By The Stored Communications Act
from the only-so-far-you-can-take-a-subpoena dept
The Stored Communications Act — enacted in 1986 — is not only outdated, it’s also pretty weird. An amendment to the ECPA (Electronic Communications Privacy Act), the SCA added privacy protections for some communications and subtracted them from others.
It’s the subtractions that are bothersome. Law enforcement wasn’t too happy that a lot of electronic communications were now subject to warrant requirements. They much preferred the abundant use/misuse of subpoenas to force third parties into handing over stuff they didn’t have the probable cause to demand directly from criminal suspects.
Private parties — especially those engaged in civil litigation — also preferred to see fewer communications protected by the ECPA. So, this law — which declared every unopened email more than 180 days old fair game — was welcomed by plenty of people who didn’t have the general public’s best interests in mind.
The government tends to make the most use of the ECPA and SCA’s privacy protection limitations, using the law and legal interpretations to access communications most people logically assumed the government would need warrants to obtain.
But the SCA also factors into civil litigation. In some cases, the arguments revolve around who exactly is protected by the law when it comes to unexpected intrusion by private parties. This case — one highlighted by FourthAmendment.com (even as the site owner notes it’s not really a Fourth Amendment case) — involves international litigation entangling US service providers, and it deals directly with the Stored Communications Act and what it does or does not protect.
This lawsuit was brought by Path, an Arizona corporation, and its subsidiary, Tempest. Central to the litigation is Canadian citizen Curtis Gervais, who apparently was hired as an independent contractor by Tempest, which promoted him to the position of CEO in February 2022. A few months later, Gervais allegedly hacked into a competitor’s (Game Server Kings [“GSK”]) computers, leading to Tempest demoting (lol) Gervais to COO (Chief Operating Officer).
This demotion apparently didn’t sit well with Gervais, who allegedly began sharing confidential Tempest information with GSK, utilizing communications platform Discord to hand over this information to GSK employees.
So, it’s three American companies and one Canadian individual wrapped up in a dispute over ex parte demands to disclose information to the plaintiffs (Path/Tempest). Discord challenged the subpoenas, which asked for — among other things — any passwords used by Gervais to log into its services.
That’s where it gets interesting. Very few courts have considered what’s explicitly covered by the SCA and/or what can be obtained with subpoenas issued under this authority.
As is implied by both laws in play here (Electronic Communications Privacy Act, Stored Communications Act), the protections (or lack thereof) apply to communications. Path argued that its subpoenas did not exceed the grasp of these laws, despite demanding Discord hand over Gervais’ passwords. According to the plaintiffs, passwords aren’t communications.
But that’s a very reductive view of passwords, something Discord pointed out in its challenge of the subpoenas:
Applicants argue passwords are not afforded protection under the SCA because passwords should not be considered “content.” Discord argues passwords are implicitly included within the SCA’s prohibitions because passwords implicate communications. In other words, Discord argues that passwords are “content” under the SCA because they are “information concerning the substance, purport, or meaning” of a communication.
The court [PDF] says Discord is correct. But only after a lot of discussion because, as the court notes, this is an issue of “first impression.” It has never been asked to make this determination prior to this unique set of circumstances. But, despite the lack of precedent, the court still delivers a ruling that sets a baseline for future cases involving SCA subpoenas.
It begins by saying that even if the language of the SCA doesn’t specifically include passwords in its definition of “content,” it’s clear Congress meant to add protections to electronic communications with this amendment, rather than lower barriers for access.
The legislative history agrees with a broad interpretation of “content.” Congress explained that the purpose of enacting the SCA was to protect individuals from the shortcomings of the Fourth Amendment. Specifically, Congress enacted the SCA due to the “tremendous advances in telecommunications and computer technologies” with the “comparable technological advances in surveillance devices and techniques.” The SCA was further meant to help “Americans [who] have lost the ability to lock away a great deal of personal and business information.”
With this analysis of the scope of the term “content” under the SCA in mind, the Court now turns to determine if passwords are afforded protection under the SCA under that understanding of the definition of the term “content.” Passwords are undoubtedly a form of “information.” And passwords broadly “relate to” (or are “concerning”) the “substance, purport, or meaning of [a] communication” even if passwords are not themselves the content of a communication. Passwords further relate to a person’s intended message to another; while a password is not the content of the intended message, a password controls a user’s access to the content or services that require the user to prove their identity. As a matter of technological access to an electronic message, a password thus “relates to” the intended message because without a password, the author cannot access their account to draft and send the message (and the user cannot access their account to receive and read the message). When a person uses a password to access their account to draft and send a message, that author inherently communicates to the recipient at least one piece of information that is essential to complete the communication process: namely, that the author has completed the process of authentication. The password is information or knowledge which is intended to convey a person’s claim of identity not just to the messaging system but also implicitly to the recipient. As such, within the context of electronic communication systems, passwords are a critical element because they convey an “essential part” of the communication with respect to access and security protocols.
The dispute at issue here demonstrates the inherency of communicating about passwords when using a messaging platform such as Discord: when the user of the “Archetype” sent messages demanding ransom for the stolen source code, those messages conveyed to the recipients that the author is or was an authentic or authorized user of the “Archetype” account who used and had access to the password for that account. That password for that account thus is information concerning that communication, even if the password is not itself written out in the content directly.
In addition to all of that, there’s the undeniable fact that if you’re able to obtain login info (including passwords) with a subpoena, it doesn’t matter if courts limit the reach of demands for communications. If you have the keys to the accounts, you have full access to any stored communications, whether or not this access has been explicitly approved by a court.
With this password in hand, a litigant (or their ediscovery consultants) would have unfettered access to all communications within the account holder’s electronic storage, without regard to relevance, privilege, or other appropriate bounds of permissible discovery. In other words, litigants could circumvent the very purpose of the SCA by simply requesting that a service provider disclose the password for a user account, ultimately vitiating the protections of the SCA.
No court would allow the government to claim this is acceptable under the SCA and/or the Constitution. And no court should allow it just because it’s litigation involving only private parties. This particular demand cannot be honored without violating the law. And the companies behind the subpoenas know this, because they obviously want far more than just Gervais’ login info.
The only conceivable use for the passwords here is for Applicants to access the requested accounts (such as “Archetype”) and view the contents of all electronically stored communications in those requested accounts.
That’s clearly the litigants’ intent. And it doesn’t mesh with the legislative intent, which was to create a few new protections for then-newfangled electronic communications. This particular demand is rejected. The subpoenas are still alive, but they’re no longer intact. If the suing entities want access to the defendant’s communications, they’ll have to do it the old-fashioned way: by making discovery requests that remain on the right side of the law.
Filed Under: california, communications, curtis gervais, ecpa, passwords, sca, stored communications act
Companies: discord, path, tempest
Elon Says Copyright/AI Lawsuits Don’t Matter Because ‘Digital God’ Will Arrive Before They’re Decided
from the that's-not-how-any-of-this-works dept
So, we already wrote about the biggest headline-grabbing moment from Elon Musk’s DealBook interview with Andrew Ross Sorkin yesterday, but there was another crazy, Techdirt-relevant one involving copyright and AI. As we’ve explained over and over again, copyright is the wrong tool to use to regulate AI, and using it will lead to bad outcomes.
But, absolutely nothing in this bit of the interview made any sense at all (from either side):
It starts out with a drop dead ignorantly wrong question from Andrew Ross Sorkin, who seems wholly unprepared for this:
ARS: So, one of the things about training on data, has been this idea that you’re not going to train on… or these things are not being trained on people’s copyrighted information. Historically. That’s been the concept.
Elon: Yeah that’s a huge lie.
ARS: Say that again.
Elon: These AIs are all trained on copyrighted data. Obviously.
ARS: So you think it’s a lie when OpenAI says that… none of these guys say that they’re training on copyrighted data.
Elon: Yeah, that’s a lie.
ARS: It’s a lie. Straight up?
Elon: Straight up lie.
So… there’s a lie in there, but it’s Andrew Ross Sorkin saying that any AI company claims that it doesn’t train on copyright-covered data. Everyone admits that. They say that doing so is fair use (because it is). So the entire premise of this discussion is wrong. Here’s OpenAI admitting in court that, of course, it trains on copyright-covered material. It’s just that it believes fair use allows that (because it does).
So, in one sense here, Elon is right to push back on Sorkin’s claim. But Musk is misleading, because he appears to buy into the false premise of Sorkin’s question that AI companies say they’re not training on copyright-protected data. If Musk had any idea what he was doing, he would have told Sorkin his premise was wrong, and that no companies deny training on such material.
From there, Sorkin wanders into an even more confused discussion, claiming that while snippets of articles on ExTwitter are fair use, it might not be fair use any more once people collectively post a full article… but that’s… not how any of this works anyway. Someone give him a Copyright 101 lesson, because this is embarrassing.
Either way, Musk then made the whole thing… um… fucking weird. Because as Sorkin kept trying to press Musk on the copyright lawsuits, Musk did this:
Musk: I don’t know, except to say that by the time these lawsuits are decided we’ll have Digital God. So, you can ask Digital God at that point. Um. These lawsuits won’t be decided on a timeframe that’s relevant.
If someone you knew started saying stuff like that, you’d have them checked out.
Whether or not you believe that AGI (Artificial General Intelligence) is on the way, or that it might create “Digital God,” the idea that this is coming before these lawsuits are decided is… um… not realistic. But, even if we do somehow reach AGI within the next few years as these lawsuits play out, the idea that such an AGI would obsolete the courts and/or copyright law is similarly wishful thinking.
Hell, we’ve argued for years that the internet itself has already obsoleted copyright laws, but they’re still sticking around and getting dumber all the time. I’d love for it to be true that technology further obsoletes copyright law and moves things to a better overall system… but it’s not going to happen because “Digital God.”
Of course, perhaps if Elon truly thinks Digital God is coming in the next few years, it explains why he doesn’t care about advertisers on ExTwitter any more.
Filed Under: andrew ross sorkin, copyright, digital god, elon musk, fair use
Companies: openai, twitter, x, xai