About Techdirt.
Started in 1997 by Floor64 founder Mike Masnick and then growing into a group blogging effort, the Techdirt blog relies on a proven economic framework to analyze and offer insight into news stories about changes in government policy, technology and legal issues that affect companies’ ability to innovate and grow. As the impact of technological innovation on society, civil liberties and consumer rights has grown, Techdirt’s coverage has expanded to include these critical topics.
The dynamic and interactive community of Techdirt readers often comment on the addictive quality of the content on the site, a feeling supported by the blog’s average of ~1 million visitors per month and more than 1.7 million comments on 73,000+ posts. Both Business Week and Forbes have awarded Techdirt Best of the Web thought leader awards.
_____________________________________________________________________
Why Jonathan Haidt’s ‘Protect The Kids’ Proposals Could Make Things Worse For Kids
from the first-do-no-harm dept
How much harm is done to children in the name of “protecting” them? Entirely too much. What if we drive them further into dangerous corners of the internet by cutting them off from their support networks?
Since the release of Jonathan Haidt’s book, “The Anxious Generation,” a few months back, there has been plenty of discussion and debate about his claims and his proposed solutions. In my own review of the book, I discussed how the data supporting Haidt’s claims were extraordinarily weak, but spent more time talking about how flimsy the support for his “solutions” was.
Haidt has, at other times, suggested that even if there is no evidence to actually support his policy solutions, we should support them anyway, because they couldn’t do any harm, and the mere chance that they might benefit kids is worth it. As I wrote:
Though it doesn’t make it directly into his latest book, while he was working on it Haidt responded to critics of his thesis by citing Pascal’s Wager—that it makes more sense to believe in God than not, because the cost of believing and being wrong is nothing. But the cost of not believing and being wrong could be eternal damnation.
Similarly, Haidt argues that we should keep kids away from social media for the same reason: even if he’s wrong, the “cost” is minimal.
The scariest part is that the cost of being wrong is not minimal. Indeed, it appears to be extremely high.
If he’s wrong, it means parents, politicians, teachers, and more do not tackle the real root causes of teenage mental health issues.
The research has shown repeatedly that social media is valuable for many young people, especially those struggling in their local communities and families (multiple studies highlight how LGBTQ youth rely heavily on social media in very helpful ways). Taking that lifeline away can be damaging. There are numerous stories of kids who relied on social media to help them out of tricky situations, such as diagnosing a disease where doctors failed to help.
I then went on to detail how little Haidt seemed to understand about his own policy proposals. At least he provided some studies to support his position on the problem. But when it came to his policy proposals, they were based entirely on “feels” rather than facts (or data).
Similarly, Haidt is no policy expert, and it shows. In the book, he supports policies like the “Kids Online Safety Act,” which has been condemned by LGBTQ groups, given that the co-sponsor of the bill has admitted she supports it to remove LGBTQ content from the internet. That’s real harm.
Now, Candice Odgers, a researcher who has done the kind of research work Haidt has never done, and who published a fantastic takedown in Nature of the misleading claims Haidt made about that research, has a new piece in the Atlantic. The piece details the very real harms that might occur if everyone focuses on smartphones as some sort of horrible depression-making boxes.
Again, Odgers reminds everyone of the lack of any real evidence behind these claims of harm:
I am a developmental psychologist, and for the past 20 years, I have worked to identify how children develop mental illnesses. Since 2008, I have studied 10-to-15-year-olds using their mobile phones, with the goal of testing how a wide range of their daily experiences, including their digital-technology use, influences their mental health. My colleagues and I have repeatedly failed to find compelling support for the claim that digital-technology use is a major contributor to adolescent depression and other mental-health symptoms.
Many other researchers have found the same. In fact, a recent study and a review of research on social media and depression concluded that social media is one of the least influential factors in predicting adolescents’ mental health. The most influential factors include a family history of mental disorder; early exposure to adversity, such as violence and discrimination; and school- and family-related stressors, among others. At the end of last year, the National Academies of Sciences, Engineering, and Medicine released a report concluding, “Available research that links social media to health shows small effects and weak associations, which may be influenced by a combination of good and bad experiences. Contrary to the current cultural narrative that social media is universally harmful to adolescents, the reality is more complicated.”
In the piece, she notes that these claims from Haidt and others have “an intuitive appeal” because social media and mobile phones make for “an easy scapegoat.” But the fact that the data doesn’t support these claims should concern us, because it means we risk making very wrong decisions in trying to figure out how to deal with these challenges.
Indeed, if the cause and effect runs in the opposite direction, as Odgers and others have found, then these “solutions” could do more harm:
The reality is that correlational studies to date have generated a mix of small, conflicting, and often confounded associations between social-media use and adolescents’ mental health. The overwhelming majority of them offer no way to sort out cause and effect. When associations are found, things seem to work in the opposite direction from what we’ve been told: Recent research among adolescents—including among young-adolescent girls, along with a large review of 24 studies that followed people over time—suggests that early mental-health symptoms may predict later social-media use, but not the other way around.
Odgers then highlights how experimental studies that might tease out actual cause and effect tend to have real problems, such as studying the wrong age group or platforms that kids don’t really use these days (hello, Facebook!).
But, as I noted in my review of Haidt’s book, and as Odgers also highlights here, falsely jumping to the conclusion that removing social media and phones from kids will somehow solve these problems risks making them worse:
But the problem with the extreme position presented in Haidt’s book and in recent headlines—that digital technology use is directly causing a large-scale mental-health crisis in teenagers—is that it can stoke panic and leave us without the tools we need to actually navigate these complex issues. Two things can be true: first, that the online spaces where young people spend so much time require massive reform, and second, that social media is not rewiring our children’s brains or causing an epidemic of mental illness. Focusing solely on social media may mean that the real causes of mental disorder and distress among our children go unaddressed.
Offline risk—at the community, family, and child levels—continues to be the best predictor of whether children are exposed to negative content and experiences online. Children growing up in families with the fewest resources offline are also less likely to be actively supported by adults as they learn to navigate the online world. If we react to these problems based on fear alone, rather than considering what adolescents actually need, we may only widen this opportunity gap.
We should not send the message to families—and to teens—that social-media use, which is common among adolescents and helpful in many cases, is inherently damaging, shameful, and harmful. It’s not. What my fellow researchers and I see when we connect with adolescents is young people going online to do regular adolescent stuff. They connect with peers from their offline life, consume music and media, and play games with friends. Spending time on YouTube remains the most frequent online activity for U.S. adolescents. Adolescents also go online to seek information about health, and this is especially true if they also report experiencing psychological distress themselves or encounter barriers to finding help offline. Many adolescents report finding spaces of refuge online, especially when they have marginalized identities or lack support in their family and school. Adolescents also report wanting, but often not being able to access, online mental-health services and supports.
All adolescents will eventually need to know how to safely navigate online spaces, so shutting off or restricting access to smartphones and social media is unlikely to work in the long term. In many instances, doing so could backfire: Teens will find creative ways to access these or even more unregulated spaces, and we should not give them additional reasons to feel alienated from the adults in their lives.
This is why I find Haidt’s idea of “well, we should do these ideas anyway, even if I have no proof to support them, because they can’t do any harm” so problematic. They can do real, and lasting, harm. They take attention away from dealing with the very complex realities facing teens about mental health today. They especially give parents and teachers an easy excuse to avoid tackling those real issues.
On top of that, if it is true that mental health issues (and a lack of proper resources to deal with them) are driving kids to social media as an alternative, taking that away can have real negative consequences. As Odgers notes, it can also make things worse by driving kids into darker corners of the internet, seeking answers.
We’ve already seen this come true with eating disorder content online. Attempts by social media companies to block such content and shut down groups discussing eating disorders did not diminish the existence of eating disorders among teens. Because it was a “demand side” problem (kids looking for such communities) rather than a “supply side” one (kids deciding to explore eating disorders because they were encouraged on social media), when those communities were shut down, the kids still sought them out. And they found them, but in darker corners of the internet, where there was less oversight and fewer people within those same communities helping to guide members toward useful recovery resources.
For all the talk of “protecting the children” online, and so much focus on Haidt’s utter nonsense, shouldn’t we be at least somewhat concerned that Haidt’s solutions have a very real chance of doing real harm to kids?
Filed Under: candice odgers, jonathan haidt, mental health, protect the children, social media
Supreme Court Says It’s Fine For Cops To Dick Around For Months Or Years After Seizing People’s Cars
from the just-sit-on-it-until-people-give-up dept
The Supreme Court has recognized there’s something definitely wrong with asset forfeiture. But, so far, it has yet to attempt to put a full stop to it.
A recent case dealt with criminal asset forfeiture. In that case, the nation’s top court ruled it was unconstitutional for the government to seize assets worth far more than the maximum fine it could levy for the criminal charges accompanying the seizure. In that case, cops took a $42,000 Range Rover over a sale of $260 worth of heroin to an undercover officer. Given that this crime had a max fine of $10,000, the Supreme Court said taking the Range Rover was an “excessive fine” — something that violates the Eighth and Fourteenth Amendments.
But the justices said this also applied to civil asset forfeiture. And in civil cases, criminal charges usually aren’t filed, which means any forfeiture would be an “excessive fine” because the applicable fine in cases with no criminal charges is always going to be… $0.
Unfortunately, the 2019 ruling changed little about forfeiture programs. Most still operate the way they always have and will likely continue to do so until another legal challenge reaches the upper levels of the court system.
This case did manage to make it to the top court in the land. But there’s no win to be had here for people whose property is taken by opportunistic cops who operate in locales with permissive forfeiture laws. Here are the facts of the case, as reported by Adam Liptak for the New York Times.
The court ruled in two cases. One of them started after Halima Culley bought a 2015 Nissan Altima for her son to use at college. He was pulled over by the police in 2019 and arrested when they found marijuana. They also seized Ms. Culley’s car.
That same year, Lena Sutton lent her 2012 Chevrolet Sonic to a friend. He was stopped for speeding and arrested after the police found methamphetamine. Ms. Sutton’s car was also seized.
Alabama law in effect at the time let so-called innocent owners reclaim seized property, and both women ultimately persuaded judges to return their cars. It took more than a year in each case, though there was some dispute about whether the women could have done more to hasten the process.
Ms. Culley and Ms. Sutton filed class actions in federal court saying that they should have been afforded prompt interim hearings to argue for the return of the vehicles while their cases moved forward. Lower courts ruled against them.
And so has the Supreme Court. As the decision [PDF] notes (with some apparent regret), due process rights do not include forcing law enforcement to engage in timely adjudication of forfeiture cases, which means agencies and officers can continue to hang back and hope attrition (for lack of a better word) will allow them to retain control of property seized for seriously specious reasons.
The Court’s decisions in $8,850 and Von Neumann make crystal clear that due process does not require a separate preliminary hearing to determine whether seized personal property may be retained pending the ultimate forfeiture hearing.
In other words, the government is under no obligation to provide forfeiture victims with a preliminary hearing to determine whether it can retain control of the property until the forfeiture is adjudicated. People whose property has been seized will just have to wait until the government makes its move and then respond.
This might seem fair, but it really isn’t. In cases like these — where people’s cars have been seized because of crimes committed by people who don’t own the cars — the owners are still obligated to make payments on these cars or round up the funds to secure other transportation while the government goes through the civil forfeiture motions. This could take weeks, months, or years. At no point is the government required to accelerate the process or allow property owners an opportunity to make things move faster.
So, the ruling was no help to these women or to anyone else subjected to the same tactics. Some justices did have some good stuff to say about the general shittiness of civil asset forfeiture programs, but those were relegated to the dissent, where they similarly won’t do much for people victimized by legalized theft.
But even the concurrence (written by Justice Gorsuch and joined by Justice Thomas) has things to say about civil asset forfeiture, most of it critical of the practice.
To secure a criminal penalty like a fine, disgorgement of illegal profits, or restitution, the government must comply with strict procedural rules and prove the defendant’s guilt beyond a reasonable doubt. In civil forfeiture, however, the government can simply take the property and later proceed to court to earn the right to keep it under a far more forgiving burden of proof. In part thanks to this asymmetry, civil forfeiture has become a booming business. In 2018, federal forfeitures alone brought in $2.5 billion. Meanwhile, according to some reports, these days “up to 80% of civil forfeitures are not accompanied by a criminal conviction.”
[…]
Not only do law enforcement agencies have strong financial incentives to pursue forfeitures, those incentives also appear to influence how they conduct them. Some agencies, for example, reportedly place special emphasis on seizing low-value items and relatively small amounts of cash, hopeful their actions won’t be contested because the cost of litigating to retrieve the property may cost more than the value of the property itself. Other agencies seem to prioritize seizures they can monetize rather than those they cannot, posing for example as drug dealers rather than buyers so they can seize the buyer’s cash rather than illicit drugs that hold no value for law enforcement.
Delay can work to these agencies’ advantage as well. See Brief for Institute for Justice et al. as Amici Curiae 16. Faced with the prospect of waiting months or years to secure the return of a car or some other valuable piece of property they need to work and live, even innocent owners sometimes “settle” by “paying a fee to get it back.”
That’s from the concurrence. That’s from two justices who agree the Constitution provides no remedy but still spend most of their concurrence criticizing civil forfeiture.
The dissent, written by Justice Sotomayor and joined by Justices Kagan and Jackson, is even harsher in its assessment of civil forfeiture. And they say this decision — while ultimately critical of the practice — gives opportunistic law enforcement agencies all the permission they need to keep doing things the way they’ve always done them, while making it clear members of the public are welcome to go fuck themselves if they have a problem with this.
Petitioners claim that the Due Process Clause requires a prompt, post-seizure opportunity for innocent car owners to argue to a judge why they should retain their cars pending that final forfeiture determination. When an officer has a financial incentive to hold onto a car and an owner pleads innocence, they argue, a retention hearing at least ensures that the officer has probable cause to connect the owner and the car to a crime.
Today, the Court holds that the Due Process Clause never requires that minimal safeguard. In doing so, it sweeps far more broadly than the narrow question presented and hamstrings lower courts from addressing myriad abuses of the civil forfeiture system.
Not a great result. Hamstringing lower courts is the least favorable outcome, especially now that lower courts seem to finally be waking up to the harms created by civil forfeiture programs — nearly all of which contain multiple layers of perverted incentives. With this decision, the Supreme Court has taken a pass on establishing a right to a speedy trial (of sorts) for those who’ve just seen their possessions taken by law enforcement because of the actions of others. This decision says things are fine the way they are, even when five justices (even those concurring!) agree the system is completely fucked.
Filed Under: asset forfeiture, civil asset forfeiture, due process, supreme court
Emory University Suspends Students Over AI Study Tool The School Gave Them $10k To Build And Promoted
from the um-guys? dept
I’ll admit, I had to read this story a couple of times, since it’s so unbelievable. With the explosion of AI tools that have come out over the past couple of years, coming along for the ride are all kinds of concerns over how that AI gets used. In the realm of higher education, this means a great deal of consternation over students using these tools to cheat. While those concerns sure seem wildly overblown, the flipside of this issue has been all kinds of software billed as “anti-cheating” programs for schools to make sure students are student-ing legitimately.
But, man, whatever the hell is going on with the folks at Emory University is simply bizarre. A group of students are suing the school after being suspended for a year over an AI program they built called “Eightball,” which is designed to automagically review course material that professors post in the school’s learning software and generate flashcards, practice questions, and other study aids. The only problem is that the school not only knew all about Eightball, it paid these same students $10,000 to make it.
Last spring, the students presented Eightball at the university’s “Entrepreneurship Summit” and were given a $10,000 grand prize to build and launch their software, which allowed students to upload PDFs of course readings, syllabuses, and other material and turn those into practice tests and flash cards. They also explained that they were eventually going to allow users to connect to Canvas, which is a software platform used by the university where professors upload course readings, documentation, assignments, etc, the lawsuit alleges. “By connecting Eightball to Canvas, students would be able to import their course materials to Eightball all at once rather than uploading the same documents individually.”
“Eightball is a platform kind of like ChatGPT but trained directly on your Canvas courses. The way Eightball works is it connects to your Canvas and goes through each of your courses. And for each course it studies the modules, the lectures, the slides, the readings, everything. From there, it becomes a ChatGPT-like experience, but the AI is customized for your course,” one of the creators explains in a demo video. The student then shows that Eightball surfaces directly relevant passages and serves as, more or less, a search-engine for class material.
The school actually did much more than just fund Eightball’s creation. It promoted the tool on its website. It announced how awesome the tool is on LinkedIn. Emails from faculty at Emory showered the creators of Eightball with all kinds of praise, including from the Associate Dean of the school. Everything was great, all of this was above-board, and it seemed that these Emory students were well on their way to doing something special, with the backing of the university.
Then the school’s IT and Honor Council got involved.
It is not clear, exactly, what changed at Emory that made the university take action against a startup that it went out of its way to promote, but both the lawsuit and the Honor Council writeup assert that the university’s IT department was angry that the company allowed students to connect their own Canvas API tokens to the app. In the lawsuit, the students’ lawyers write that the university changed the settings within Canvas and “hid the button that generates Canvas [API] tokens, but it did not inform [the students] that the change was in response to Eightball’s newly available method for uploading course materials.” Soon after this, “Emory informed [one of the students] that he may have violated Emory’s Undergraduate Code of Conduct by Connecting Eightball to Canvas.” The students shut Eightball down at this point.
After all of this promotion, the university’s Honor Council launched an investigation into the students and Eightball. This investigation, which can be read here, found that Eightball had not been used for cheating, and that the students had not lied about the capabilities of the software. It also did not dispute that the school both funded and championed the software. The council recommended that the students be suspended for a year, anyway. Jason Ciejka, the director of the school’s honor council, wrote “this case is unprecedented in terms of its scale and potential to harm the Emory community.”
Read that second paragraph again. The school funded the creation of this tool made by its own students, praised those students and promoted the use of the tool, validated that it had not been used for any cheating (because it can’t be used to cheat, more on that later), and then suspended the students for a year anyway. That’s insane.
And all of this consternation over students using an API token is equally silly. The school suggested it was some kind of IT security risk for students to use tokens to connect Canvas to Eightball. What the school appears to be missing is that, you know, that’s precisely what APIs are for.
The school “figured out that the Eightball program accesses the Canvas data through the Canvas user generated token, which is essentially users’ Emory credentials that give full access to everything users can access on Canvas. This user generated token is considered a highly restricted user credential tool and sharing it to any outside party is a violation of Canvas terms and IT policies.” API tokens are sensitive, but API tokens exist exclusively for users to connect accounts to outside services—what the Honor Council is describing is essentially the only use for an API token, and is a feature of Canvas which the Honor Council wrote “is not something that they can turn off.” Canvas’s own documentation explains to students how they can use API tokens to connect their accounts to other apps: “Access tokens provide access to canvas resources through the Canvas API. Access tokens can be generated automatically for third-party applications or created manually.”
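To see why this is the ordinary, intended mechanism rather than some exotic security breach: a user-generated Canvas token is simply sent as a standard Bearer credential on requests to the Canvas REST API. Here’s a minimal sketch of that pattern using only Python’s standard library; the URL and token are placeholders (not real credentials), and `build_canvas_request` is a hypothetical helper for illustration, not anything from Eightball’s actual code:

```python
import urllib.request

def build_canvas_request(base_url: str, token: str, endpoint: str) -> urllib.request.Request:
    """Build an authenticated request against a Canvas REST API endpoint.

    Canvas access tokens are passed as a standard HTTP Bearer credential,
    the same way virtually every third-party integration authenticates.
    """
    return urllib.request.Request(
        f"{base_url}/api/v1/{endpoint}",
        headers={"Authorization": f"Bearer {token}"},
    )

# Placeholder host and token -- in practice a student would paste the token
# they generated from their own Canvas account settings.
req = build_canvas_request("https://example.instructure.com", "<user-token>", "courses")
print(req.full_url)                      # the endpoint being requested
print(req.get_header("Authorization"))   # the token travels as a Bearer header
```

In other words, an app holding a user’s token can only see what that user can already see, which is exactly the scoping the token system was designed to provide.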
The Honor Council, however, still seemed hyper-focused on cheating. While it confirmed no cheating had taken place, it recommended suspension anyway, based on how Eightball could be used for cheating. Except no, it cannot. That isn’t what the platform does at all. The only information Eightball can supply a student with is the information in the course material supplied by the professors themselves.
According to Eightball’s marketing, the lawsuit, and Emory University’s own writeups, Eightball was not actually a cheating tool. As far as AI-tools go, it seems innocuous, and the university did not provide any examples of the tool ever being used for cheating. “Unless answers are directly in the course materials, Eightball cannot make up anything for non-existing answers.”
You can read the lawsuit from the students embedded below, but I can’t for the life of me imagine a scenario in which the court doesn’t laugh Emory University out of the courtroom and order it to return these students to their classes.
Filed Under: ai, cheating, eightball, students
Companies: emory university
The Plan To Sunset Section 230 Is About A Rogue Congress Taking The Internet Hostage If It Doesn’t Get Its Way
from the the-beatings-will-continue-until-the-internet-improves dept
If Congress doesn’t get Google and Meta to agree to Section 230 reforms, it’s going to destroy the rest of the open internet, while Google and Meta will be just fine. If that sounds stupidly counterproductive, well, welcome to today’s Congress.
As we were just discussing, the House Energy and Commerce committee is holding a hearing on the possibility of sunsetting Section 230 at the end of next year. This follows an earlier hearing from last month where representatives heard so much confused nonsense about Section 230 that it actively misrepresented reality.
But, based on that one terribly misleading hearing, the top Republican (Cathy McMorris Rodgers) and Democrat (Frank Pallone) on the committee created this bill to sunset the law, along with a nearly facts-free op-ed in the Wall Street Journal making a bunch of blatantly false claims about Section 230. In writing about that bill, I complained that it was ridiculous that neither representative could bother to walk down the hall to talk to Senator Wyden, who coauthored Section 230 and could explain to Rodgers and Pallone their many factual errors.
As I said in last week’s Ctrl-Alt-Speech podcast, they were basically holding a gun to the head of the internet and saying that if Google and Facebook didn’t come up with a deal to appease Congress, Congress would shoot the internet dead.
Now, Wyden and his Section 230 co-author, former Rep. Chris Cox, have penned their own WSJ op-ed that basically makes the same point, with the brilliant title: Buy This Legislation or We’ll Kill the Internet. Because that’s exactly what this “sunset” bill is about. It’s demanding that “big tech” (Meta and Google) come up with a plan to appease Congress, or Congress will effectively kill the internet, by making it nearly impossible for smaller sites to exist.
Just one of the many nonsensical aspects of this plan is the assumption by Rodgers and Pallone that Meta’s and Google’s interests are aligned with those of the wider internet, its users, and smaller sites. There are tons of other sites on the internet that would be far more damaged by removing Section 230.
But Cox and Wyden are pretty clear in pointing out just how wrong all this is. They highlight this trope of threatening to kill something if someone doesn’t get their way:
A 1973 National Lampoon cover featured a dog with a gun to its head. The headline: “If You Don’t Buy This Magazine, We’ll Kill This Dog.” The image is reminiscent of how Congress approaches its most serious responsibilities.
This is tragically true. It’s how Congress has handled the debt ceiling for many years now. It’s how Congress has dealt with reform (or, really, lack thereof) of our deeply flawed surveillance system. But, it’s extra ridiculous to have it happen here.
The latest such exercise will be on display at a House hearing on Wednesday, where members of both parties will threaten to repeal the clear-cut legal rules that for decades have governed millions of websites. The dog with a gun to its head is every American who uses the internet.
The law in question is Section 230 of the 1996 Communications Decency Act. The statute provides that the person who creates content online is legally responsible for it and that websites aren’t liable for efforts to moderate their platforms to make them more welcoming, useful or interesting.
Or, as Prof. Eric Goldman put it in meme form (taking inspiration from the opening paragraph of the Wyden/Cox op-ed):
As Cox and Wyden make clear, the framework of Section 230 is entirely sensible, but only if you actually bother to read it and understand it:
When we introduced this legislation in 1995, when both of us served in the House, two things convinced our colleagues to endorse it almost unanimously.
The first was that the internet was different from traditional publishing. The equation had been flipped. We weren’t dealing with millions of people watching a television network’s production, or subscribers reading a newspaper. Publishing and broadcasting tools were suddenly free or nearly so, offering a microphone to millions of Americans who wouldn’t have the power, clout or fame to be featured on NBC’s “Meet the Press” or in Time magazine.
The second was that without new legislation, the law perversely penalized content moderation. Under the old rules of publisher liability, only an “anything goes” approach would protect a website from legal responsibility for user-created content. Prohibiting bullying, swearing, harassment, and threats of violence could be legally disastrous for any site. It was clear, then as now, that if the law were to encourage such a hands-off approach, the internet would turn into a cesspool.
It’s important to remember this history when evaluating the merits of sunsetting Section 230, as the House proposal intends. According to the bill’s text, if Congress can’t agree on a successor to Section 230 by Dec. 31, 2025, websites from Yahoo and Etsy to the local restaurant hosting customer reviews will become liable for every syllable posted on the site by a user or troll. A single post can generate claims that run into the millions of dollars.
I might challenge the wording in that last paragraph a little bit (though I understand why it was written that way within the confines of a short op-ed). Without Section 230, sites don’t automatically become fully liable for content posted by users (some people assume this, incorrectly). Rather, their liability becomes an open question, subject to the results of litigation that is extremely costly whether or not it is later determined that the underlying post can reasonably generate a claim.
This is the part that often gets lost in this discussion. Removing Section 230 flings open the court doors for all sorts of vexatious litigation that is extraordinarily costly just to determine whether a site is liable in the first place. And when that happens, there is tremendous pressure on websites to do a few things. The first is to simply remove any content that is at risk of a lawsuit (or the subject of a lawsuit threat) just to avoid the costly legal fight that might ensue. So the removal of 230 gives people a kind of litigator’s veto: threaten a lawsuit and there’s a good chance the content gets removed.
The other thing is, if a site does get sued, the cost of defending the lawsuit becomes so high that many companies (and law firms and insurance companies) will push them to just settle. The cost of settling for a nuisance fee will often be significantly cheaper than fighting the full litigation, even if the website would have a high likelihood of winning in the end.
The problem without Section 230 is not just the fear of actual liability. Much of it is the cost of proving you shouldn’t be liable, which is orders of magnitude higher without Section 230. But this is also a big part of what critics of Section 230 fail to understand (or, if they’re plaintiffs’ lawyers, it’s precisely the lever they want to use against websites).
As Wyden and Cox make clear:
Reverting to this pre-Section 230 status quo would dramatically alter, and imperil, the online world. Most platforms don’t charge users for access to their sites. In the brave new world of unlimited liability, will a website decide that carrying user-created content free of charge isn’t worth the risk? If so, the era of consumer freedom to both publish and view web content will come to a screeching halt.
It is very much a question of “if you don’t alter 230 in a way Congress likes, Congress will shoot the internet.”
It’s ridiculous that we’ve gotten to this point, and that support for this destruction is effectively bipartisan. Equally ridiculous is the framing of this effort around the false belief that the biggest of the big tech companies, Google and Meta, are the only real stakeholders here.
As I’ve said over and over again, that’s not the case. Both of those companies have buildings full of lawyers. They, above anyone else, can shoulder the costs of these lawsuits. It’s all the other sites that cannot and will not.
At a time when it’s clear that Google and Meta are effectively fine with putting the open web into managed decline and building up their own walled gardens, removing Section 230 will accelerate that process. It will give the biggest internet companies that much more power, while harming everyone else, with it being felt most keenly by the end users who rely on other sites and services beyond Google and Meta.
So the metaphor here really seems to be Congress pointing a gun at the open internet and threatening to shoot it if Google and Meta (which do not represent the open internet) don’t dance to Congress’ tune. The whole situation is truly messed up.
Filed Under: cathy mcmorris rodgers, chris cox, frank pallone, ron wyden, section 230, section 230 repeal
Decentralized Systems Will Be Necessary To Stop Google From Putting The Web Into Managed Decline
from the it's-up-to-us dept
Is Google signaling the end of the open web? That’s some of the concern raised by its new embrace of AI. While most of the fears about AI may be overblown, this one could be legit. But it doesn’t mean that we need to accept it.
These days, there is certainly a lot of hype and nonsense about artificial intelligence and the ways it can impact all kinds of industries and businesses. Last week at Google I/O, Google made clear that it’s moving forward with what it calls “AI overviews,” in which Google’s own Gemini AI tech will try to generate answers at the top of search pages.
All week I’ve been hearing people fretting about this, echoing Kevin Roose at the NY Times in asking whether the open web can survive such a thing.
In the early days, Google’s entire mission was to get you off its site as quickly as possible. In a 2004 interview with Playboy magazine that was later immortalized in a regulatory filing with the SEC (due to concerns that the interview violated quiet-period restrictions), Larry Page famously made clear that the goal was to quickly help you find what you want and send you on your way:
PLAYBOY: With the addition of e-mail, Froogle—your new shopping site—and Google news, plus your search engine, will Google become a portal similar to Yahoo, AOL or MSN? Many Internet companies were founded as portals. It was assumed that the more services you provided, the longer people would stay on your website and the more revenue you could generate from advertising and pay services.
PAGE: We built a business on the opposite message. We want you to come to Google and quickly find what you want. Then we’re happy to send you to the other sites. In fact, that’s the point. The portal strategy tries to own all of the information.
PLAYBOY: Portals attempt to create what they call sticky content to keep a user as long as possible.
PAGE: That’s the problem. Most portals show their own content above content elsewhere on the web. We feel that’s a conflict of interest, analogous to taking money for search results. Their search engine doesn’t necessarily provide the best results; it provides the portal’s results. Google conscientiously tries to stay away from that. We want to get you out of Google and to the right place as fast as possible. It’s a very different model.
PLAYBOY: Until you launched news, Gmail, Froogle and similar services.
PAGE: These are just other technologies to help you use the web. They’re an alternative, hopefully a good one. But we continue to point users to the best websites and try to do whatever is in their best interest. With news, we’re not buying information and then pointing users to information we own. We collect many news sources, list them and point the user to other websites. Gmail is just a good mail program with lots of storage.
Ah, how times have changed. And, of course, there is an argument that if you’re just looking for an answer to a question, giving you that answer directly can and should be more efficient, rather than pointing you to a list of places that might (or might not) have that answer.
But, not everything that people are searching for is just “an answer.” And not everything that is an answer takes into account the details, nuances, and complexities of whatever topic someone might be searching on.
There’s nothing inherent to the internet that makes the “search to get linked somewhere else” model have to make sense. Historically, that’s how things have been done. But if you could have an automated system simply give you directly what you needed at the right time, that would probably be a better solution for some subset of issues. And, if Google doesn’t do it, someone else will, and that would undermine Google’s market.
But still, it sucks.
Google’s search has increasingly become terrible. And it appears that much of that enshittification is due to (what else?) an effort to squeeze more money out of everyone, rather than providing a better service.
In Casey Newton’s writeup of the new “AI Overviews” feature, he notes that it may be a sign that “the web as we know it is entering a kind of managed decline.”
Still, as the first day of I/O wound down, it was hard to escape the feeling that the web as we know it is entering a kind of managed decline. Over the past two and a half decades, Google extended itself into so many different parts of the web that it became synonymous with it. And now that LLMs promise to let users understand all that the web contains in real time, Google at last has what it needs to finish the job: replacing the web, in so many of the ways that matter, with itself.
I had actually read this article the day it came out, but I didn’t think too much of that paragraph until a couple of days later, at a dinner full of folks working on decentralization. Someone brought up that quote, though they paraphrased it slightly, claiming Casey was saying that Google was actively putting the web into managed decline.
Whether or not those framings are meaningfully different (and maybe they’re not), both should spark people to realize that this is a problem.
And it’s one of the reasons I am still hoping that people will spend more time thinking about solutions that involve decentralization. Not necessarily because of “search” (which tends to be more of a centralized tool by necessity), but because the world of decentralized social media could offer an alternative to the world in which all the information we consume is intermediated by a single centralized player, whether it’s a search engine like Google, or a social media service like Meta.
For the last few years, there have been stories trying to remind people that Facebook is not the internet. But that’s because, for some people, it kinda has been. And the same is true of Google. For some people, their online worlds exist either in social media or in search as the mediating forces in their lives. And, obviously, there are all sorts of reasons why that happens, but it should be seen as a much less fulfilling kind of internet.
The situation discussed here, where Google is trying to give people full answers via AI, rather than sending them elsewhere on the web, may well be “putting the web into managed decline,” but there’s no reason we have to accept that future.
The various decentralized social media systems that have been growing over the past few years offer a very different potential approach: one in which you get to build the experience you want, rather than the one a giant company wants. If you need information, others on the decentralized social network can help you find it or respond to your questions.
It’s a much more social experience, mediated by other people, perhaps on different systems, rather than a single giant company determining what you get to see.
The promise of the internet, and the World Wide Web in particular, was that anyone could build their own world there, connected with others. It was a world that wasn’t supposed to be in any kind of walled garden. But, many people have ended up in just a few of those walled gardens.
It’s no secret why: they do what they do pretty damn well, and certainly better than what was around before. People became reliant on Google search because it was much better. They became reliant on Facebook because it was an easy way to keep up with your family and friends. But in giving those companies so much control, we’ve lost some of that promise of the open web.
And now we can take it back. Whether it’s using ActivityPub/Mastodon, or Bluesky/ATProtocol (or others like nostr or Farcaster), we’re starting to see users building out an alternative vision that isn’t just mediated by single companies with Wall Street demands pushing them to enshittify.
No one’s saying to give up using Google, because it’s necessary for many. But start to think about where you spend your time online, and who is looking to lock you in vs. who is giving you more freedom to have the world that works best for you.
Filed Under: ai, decentralization, managed decline, open web, search
Companies: google
British Comic Artist Petitions USPTO To Cancel ‘Super Hero’ Trademark Held By DC, Marvel
from the a-real-superhero dept
It should come as no shock to anyone when I say that DC Comics and Marvel both behave in a very aggressive manner when it comes to all things intellectual property. These two companies have engaged in all kinds of draconian behavior when it comes to everything from copyright to trademark. But one thing that somehow escaped my attention all the years I’ve been writing for Techdirt is that those two companies also jointly hold a trademark, granted by the USPTO, for the term “Super Hero,” as well as several variants. You can visit that Wikipedia link to get some of the backstory as to how this all came to be, but suffice it to say that the term “super hero,” at this point in history, is obviously generic. Hell, it refers to an entire genre of movies, if nothing else.
Well, one comic artist in London is attempting to challenge that trademark with the USPTO, seeking to have it and its variants canceled entirely.
Scott Richold’s Superbabies Ltd told a USPTO tribunal that “Super Hero” is a generic term that is not entitled to trademark protection, according to a copy of the petition provided by Superbabies’ law firm Reichman Jorgensen Lehman & Feldberg.
Representatives for DC and Marvel did not immediately respond to requests for comment.
“By challenging these trademarks, we seek to ensure that superheroes remain a source of inspiration for all, rather than a trademarked commodity controlled by two corporate giants,” Superbabies’ attorney Adam Adler said in a statement.
Now, this is all coming about because DC Comics accused Superbabies Ltd. of trademark infringement when it caught whiff of the company’s own attempt to trademark its comic book name. But the idea that the term “Superbabies” or “Super Hero” could be monopolized for any market at all via trademark law is, at this point, absurd. And yet both DC and Marvel have wielded their trademarks many times in the past.
“DC and Marvel claim that no one can use the term Super Hero (or superhero, super-hero, or any other version of the term) without their permission,” the petition said. “DC and Marvel are wrong. Trademark law does not permit companies to claim ownership over an entire genre.”
I would argue that the term wasn’t particularly unique as an identifier back when the trademark was first granted decades ago and certainly isn’t now. When you hear the term, you might think of certain super heroes from either Marvel or DC. Or you might think about the many, many super hero characters out there that are not owned by those companies. The point is that the term is ubiquitous at this point.
Will the USPTO give serious consideration to canceling DC and Marvel’s joint trademark? I’m not sure, but it certainly should.
Filed Under: super hero, super heroes, trademark, uspto
Companies: dc comics, marvel
Public Records Show Cops Are Still Very Interested In Surveilling People Who Protest Against Cops
from the First-Amendment-casually-brushed-aside-again dept
There are few groups of people cops like less than people who don’t like cops. But it’s not that these people don’t like cops, per se. It’s that they’re tired of cops doing whatever they want whenever they want with near-zero accountability.
Cops continue to do bad things, like murdering unarmed people while “effecting arrests.” These actions prompt protests. Police are asked to oversee these protests to prevent things like further violence and/or property damage. Because they’re asked to police anti-police protests, they far too often choose to treat protected First Amendment expression as a criminal act.
It’s very much human nature to respond to antipathy with some of your own. The problem is cops are expected to rise above this human response to carry out their directives: keeping the peace and ensuring laws are respected by protesters.
But they don’t respect protesters, much less their rights. So, even if protesters are respecting the laws, law enforcement officers are going to do what they want to do, no matter how violent or unconstitutional it is. That’s the upshot of a slew of documents obtained by the Brennan Center following several public records requests, as well as the inevitable litigation it took to force the Washington DC Metro Police to comply with the district’s public records laws.
The documents show DC Metro cops deliberately targeting protected expression by tracking and surveilling anti-police violence protests and the groups behind these demonstrations. Not only were officers targeting people apparently for the sole purpose of gathering information on critics of their local law enforcement, they were sharing this information with federal law enforcement agencies.
We obtained documents from February 2020 to January 2023 showing that the MPD compiled information largely collected from social media platforms about upcoming demonstrations and other public events, including the date, time, location, organizer, and estimated crowd size for each assembly. During racial justice protests in the summer of 2020, the MPD provided these lists to federal agencies, including the Secret Service, National Park Service, and the Department of Defense. Likewise, federal agencies disseminated similar lists to email threads that included over a dozen local, municipal, state, and federal government officials.
[…]
In response to one email from an MPD lieutenant asking for accounts and posts from people on social media who were “urging people to riot,” one U.S. Capitol Police representative stated that they were tracking groups such as the DC chapters of Black Lives Matter, Refuse Fascism, and Showing Up for Racial Justice, along with terms including “Protest DC,” “Justice for George Floyd DC,” “BLM DC,” and “No Justice No Peace DC.” The Capitol Police official did not provide any evidence that these groups had “urg[ed] people to riot,” nor did the official demonstrate that the phrases they were tracking corresponded to credible threats to public safety.
It’s not that the Metro PD shouldn’t attempt to stay abreast of current developments to ensure proper staffing for upcoming protests. It’s that Metro PD officers and officials drilled down into the collected data to find almost any reason to track down and arrest protesters for activities entirely unrelated to the protests the PD claimed it was monitoring.
In one case, a DC police officer flagged a post from a woman about an upcoming protest and then dug deeper into her social media profile to find a photo supposedly depicting a child she had left sleeping in her car while she participated in an earlier protest.
On top of that, the Metro PD made liberal use of the term “Antifa” to depict people as potential threats to officers and the general public. The woman with the sleeping child photo was claimed (without supporting evidence) to be “one of the main Antifa organizers in DC.”
This happened so often even the federal agencies the Metro PD shared its “intelligence” with pushed back. The Secret Service rejected one such “Antifa” report by noting the targeted protest group (All Out DC) had never posted anything “inciting violence” on its multiple social media accounts. Metro officers handling protest surveillance also claimed (without evidence) that people participating in vigils for people killed by police officers were far more likely to commit acts of violence.
As if this isn’t disturbing enough, here’s the background on the Metro PD officer who was in charge of the intelligence gathering depicted in the documents obtained by the Brennan Center:
Notably, the MPD officer who oversaw the department’s intelligence branch at the time and was heavily involved in these activities was suspended and subsequently indicted in part for allegedly providing Enrique Tarrio, the leader of the Proud Boys, with information about police investigations into the group.
It was more than a casual disregard for the First Amendment. From all appearances, this “intelligence gathering” seems to have been infected by the head officer’s desire to shield pro-cop bigots from scrutiny while demonizing their opposition as “Antifa” or worse. And that means the intelligence-gathering operation had far less to do with ensuring the safety of the general public than generating the confirmation bias needed to take action against activists seeking social justice.
Worse, the supposedly protest-targeted surveillance was expanded to cover entire neighborhoods in DC — neighborhoods mainly populated by black residents. What began as a public safety oriented effort, supposedly aimed at ensuring proper staffing and response to upcoming demonstrations, became just another tool of oppression to be wielded by a Metro PD officer with ties to known (and acknowledged!) bigots.
We also obtained heavily redacted SCI Area Enforcement reports from May to July 2014 containing information about four designated areas or groups — Benning Corridor, Choppa City, Barry Farm, and Washington Highlands, which are in overwhelmingly Black wards of DC — sourced almost entirely from Twitter. Though each report states that it contains information found on social media “pertaining to ongoing criminal activities, beefs, and retaliations,” the little information that the MPD left unredacted demonstrates that these reports also include events and gatherings that appear far more innocuous, such as a birthday party, a graduation celebration, a cookout, a trip to Six Flags, a mixtape release party, and a concert.
Beyond that, there’s the usual cop disregard for laws, rules, and policies erected by others. Fake social media accounts were used to collect information — something police agencies have been repeatedly told by Meta and others violates their terms of service. The documents obtained here include communications and presentations where officers and officials openly acknowledge that their actions violate user agreements with the sites they monitor. Not included in these documents are any suggestions that cops shouldn’t do this sort of thing, much less any policy forbidding investigators and officers from breaking the rules at targeted websites.
There’s not a lot that’s truly astounding in this document dump. But that’s not to say it’s not worth looking at or pointing out. Instead, it’s more confirmation that American law enforcement considers constitutional rights, internal policies, and third-party user agreements subservient to its surveillance goals. And, somehow, cops are still wandering around apparently just astounded the general public doesn’t like them, much less trust them.
Filed Under: 1st amendment, dc metro police, police, protests, surveillance
The Horribly Stupid Saga Of Craig Wright, The Fake Satoshi, Should Now Be Over
from the don't-mince-words-now dept
It’s been a while since we last mentioned Craig Wright here on Techdirt. We’ve been pretty clear all along, like pretty much everyone else, that Wright was so obviously full of shit in claiming to be Satoshi Nakamoto, and then trying to claim patents and copyrights over all kinds of Bitcoin/cryptocurrency related things.
Over the last few years, Wright has been continuing to make a pest of himself with lawsuits. In 2021 he sued a bunch of core Bitcoin developers, claiming that he lost encrypted keys to billions of dollars’ worth of Bitcoin when he was hacked, and demanding that the Bitcoin developers patch Bitcoin code to give him back the money.
All of this seemed utterly ridiculous for all sorts of reasons, but it was awful in particular for those developers who were just trying to develop Bitcoin. Suddenly, they faced the prospect of a full trial after a judge allowed the case to move forward last year. With some funding help from Jack Dorsey, the developers were able to fight back somewhat, allowing the Crypto Open Patent Alliance (COPA) to go even further in challenging Wright’s claims.
The last few months have not gone well for Wright in court (to put it mildly).
The legal challenge from COPA put Wright in a position of having to prove that he was Satoshi or basically to fuck off. That trial did not go well. In March, the judge declared that Wright clearly was not Satoshi and did not write the original Bitcoin whitepaper. This was hardly a surprise to anyone, but given how the case against the Bitcoin developers had progressed, there was real nervousness about how the court would rule here. But the judge was pretty explicit on this point:
“Dr Wright is not the author of the Bitcoin White Paper. Second, Dr Wright is not the person who adopted or operated under the pseudonym Satoshi Nakamoto in the period 2008 to 2011. Third, Dr Wright is not the person who created the Bitcoin System. And, fourth, he is not the author of the initial versions of the Bitcoin software.”
A month later, in April, Wright just out and out dropped the case against those Bitcoin developers.
Now, a month after that, we have a follow-up judgment in that original case that goes beyond just what Wright is not. Now the judge is calling out Wright for apparently lying and forging documents to make his case. The ruling and its related appendix are brutal. The ruling itself is well over 200 pages of fascinating detail. But the summary and the conclusions are all you really need to know.
From the summary:
Dr Craig Steven Wright (‘Dr Wright’) claims to be Satoshi Nakamoto i.e. he claims to be the person who adopted that pseudonym, who wrote and published the first version of the Bitcoin White Paper on 31 October 2008, who wrote and released the first version of the Bitcoin Source Code and who created the Bitcoin system. Dr Wright also claims to be a person with a unique intellect, with numerous degrees and PhDs in a wide range of subjects, the unique combination of which led him (so it is said) to devise the Bitcoin system.
Thus, Dr Wright presents himself as an extremely clever person. However, in my judgment, he is not nearly as clever as he thinks he is. In both his written evidence and in days of oral evidence under cross-examination, I am entirely satisfied that Dr Wright lied to the Court extensively and repeatedly. Most of his lies related to the documents he had forged which purported to support his claim. All his lies and forged documents were in support of his biggest lie: his claim to be Satoshi Nakamoto.
Many of Dr Wright’s lies contained a grain of truth (which is sometimes said to be the mark of an accomplished liar), but there were many which did not and were outright lies. As soon as one lie was exposed, Dr Wright resorted to further lies and evasions. The final destination frequently turned out to be either Dr Wright blaming some other (often unidentified) person for his predicament or what can only be described as technobabble delivered by him in the witness box. Although as a person with expertise in IT security, Dr Wright must have thought his forgeries would provide convincing evidence to support his claim to be Satoshi or some other point of detail and would go undetected, the evidence shows, as I explain below and in the Appendix, that most of his forgeries turned out to be clumsy. Indeed, certain of Dr Wright’s responses in cross-examination effectively acknowledged that point: from my recollection at least twice he indicated if he had wanted to forge a document, he would have done a much better job.
If Dr Wright’s evidence was true, he would be a uniquely unfortunate individual, the victim of a very large number of unfortunate coincidences, all of which went against him, and/or the victim of a number of conspiracies against him.
The true position is far simpler. It is, however, far from simple because Dr Wright has lied so much over so many years that, on certain points, it can be difficult to pinpoint what actually happened. Those difficulties do not detract from the fact that there is a very considerable body of evidence against Dr Wright being Satoshi. To the extent that it is said there is evidence supporting his claim, it is at best questionable or of very dubious relevance or entirely circumstantial and at worst, it is fabricated and/or based on documents I am satisfied have been forged on a grand scale by Dr Wright. These fabrications and forgeries were exposed in the evidence which I received during the Trial. For that reason, this Judgment contains considerable technical and other detail which is required to expose the true scale of his mendacious campaign to prove he was/is Satoshi Nakamoto. This detail was set out in the extensive Written Closing Submissions prepared by COPA and the Developers and further points drawn out in their oral closing arguments.
And from the conclusion:
Overall, in my judgment, (and whether that distinction is maintained or not), Dr Wright’s attempts to prove he was/is Satoshi Nakamoto represent a most serious abuse of this Court’s process. The same point applies to other jurisdictions as well: Norway in particular. Although whether Dr Wright was Satoshi was not actually in issue in Kleiman, that litigation would not have occurred but for his claim to be Satoshi. In all three jurisdictions, it is clear that Dr Wright engaged in the deliberate production of false documents to support false claims and use the Courts as a vehicle for fraud. Despite acknowledging in this Trial that a few documents were inauthentic (generally blamed on others), he steadfastly refused to acknowledge any of the forged documents. Instead, he lied repeatedly and extensively in his attempts to deflect the allegations of forgery.
Also, there’s this:
I tried to identify whether there was any reliable evidence to support Dr Wright’s claim and concluded there was none. That was why I concluded the evidence was overwhelming
That one had the emphasis in the original.
Wright has already said he’ll appeal, and we’ve seen over the years that this guy thrives off of the media coverage (I’d been debating over the past year whether to cover any part of this case, but finally figured now was the time to highlight just this brutal decision).
No one has seriously believed that Wright had any connection to Satoshi. His years-long campaign of bullying and nonsense should fade into the ugly dustbin of history. It should be seen as an example of brazen mendacity in pursuit of great wealth, without a care in the world for who he would run over and destroy in the process.
I hope we can retire the Craig Wright tag with this story, though I fear he’ll still be causing a nuisance somewhere.
Filed Under: bitcoin, craig wright, fake satoshi, forgery, fraud, lies, satoshi nakamoto, uk
Companies: copa, tulip trading
Vindictive Nonsense: Tesla Threatens To Fire Law Firm Over Expert’s Amicus Brief
from the thus-proving-the-point dept
It’s no secret that Elon Musk can be petty and vindictive over the dumbest shit. You may have heard that he fired the entire Supercharger team a few weeks ago entirely due to him getting upset at what the woman who led that team told him (he’s now scrambling to try to rehire the team he fired — another thing that’s happened before).
Sometimes it gets even sillier. You may recall a couple of years ago when Tesla demanded that law firm Cooley LLP fire a lawyer who happened to have worked at the SEC back when Elon was fined for tweeting about his supposed plans to take the company private.
Pressuring law firms is apparently becoming a pattern.
Charles Elson, a retired finance professor at the University of Delaware, is a well-recognized authority on corporate governance issues. And it seems that Elon is terrified he might give his opinions to the Delaware Court of Chancery that is handling his compensation lawsuit.
In the past, I’ve explained how this whole lawsuit doesn’t make that much sense to me. It’s one case where I think Elon’s argument is actually entirely plausible. I wouldn’t vote in favor of his $55 billion pay package, but I can see why some people might not find it problematic. But, it seems that Elon is really, really scared about losing that payday. Hell, Tesla, which is famous for not advertising anything, is advertising to shareholders to tell them to vote to reinstate Elon’s pay package.
Still, even if I find the lawsuit a bit perplexing, it seems that Musk wants to handicap the opposition.
Elson filed one hell of a motion, asking for leave to file his expected amicus brief, noting that the Musk Team started playing hardball to try to force him not to file.
Professor Elson, a leading authority on corporate law, moves for leave to submit a second proposed amicus curiae brief in this action. Professor Elson previously submitted an amicus brief concerning the development and goals of equity-linked executive compensation during the post-trial briefing stage of this action, which the Court found “persuasive.” Professor Elson now writes to provide the Court with additional context and analysis in connection with the Tesla Board’s unprecedented attempt to seek a post-trial stockholder vote to ratify the Award.
Additional context, you say? What sort of context? Perhaps some of it has to do with how badly Elon doesn’t want Elson to say anything.
It’s pretty typical for parties to consent to amicus briefs being filed as a matter of course. Even if they know the briefs will challenge or disagree with their arguments. It’s just professional courtesy, and courts expect it. Opposing efforts to file an amicus brief can raise eyebrows. And Tesla went all in trying to block Elson:
Plaintiff consents to this motion. Defendants do not and Musk was willing to go to extraordinary—and appalling—lengths to prevent this Court from reading the Brief.
Early Friday morning, Professor Elson’s counsel emailed a copy of the Brief to counsel for the parties, asking whether they would consent to a motion for leave to file it. Plaintiff’s counsel responded that they did not oppose its submission. Tesla’s counsel from DLA Piper telephoned Professor Elson’s counsel to assert, without further explanation, that Professor Elson “may have a conflict” and asked counsel to hold off on filing the brief.
Soon after, Professor Elson received an email from Holland & Knight LLP, a law firm with which Professor Elson had a consulting relationship. Holland & Knight informed Professor Elson that the firm represents Tesla in certain unrelated matters and that Tesla had threatened to fire Holland & Knight if Professor Elson submitted this amicus brief.
The assertion that Professor Elson was conflicted is risible—which is presumably why Tesla’s then-counsel raised no objection when Professor Elson submitted his prior amicus brief in this matter. The rules of professional conduct prevent a lawyer from representing a client if the representation of one client will be directly adverse to another client. None of those elements was present here:
- Professor Elson is neither acting as a lawyer nor representing a client in this action; he is represented by counsel and seeks leave to file a brief as an amicus.
- Nor was Professor Elson acting as a lawyer at Holland & Knight; the rules of professional conduct do not impute conflicts from a consultant to a law firm or from a law firm to a consultant.
- Nor is Professor Elson acting adversely to Tesla; his brief is defending a multi-billion-dollar judgment in Tesla’s favor.
I mean, all of this is incredible. The threat. The weak-ass claims of a “conflict.” But, most of all, the very fact (as Elson points out) that his argument is actually in support of Tesla, which benefits by not having to pay out this massive pay package if Elon loses.
To avoid having his professional associates suffer because of Elon’s petty vindictiveness, Elson chose to resign from Holland & Knight, “ending a relationship of nearly thirty years.”
This is doubly ridiculous given all of the conflicts that Elon has between his various companies, and the fact that he’s been claiming that he “deserves” this $55 billion pay package for all his hard work. Does Elson not then deserve to continue his relationship with H&K for all of his work? Of course not. The primary motive behind everything Elon does is “what benefits Elon?”
And, of course, the whole thing acts as a kind of Streisand Effect highlighting the key point that Elson was trying to raise. Tesla and Elon’s interests are adverse here, yet the company is acting as if they’re aligned, which lends pretty strong credence to the idea (at the heart of the lawsuit) that the board is focused on helping Musk, rather than looking out for Tesla’s best interests.
The Court should have no illusions about what happened here. The frivolous assertion of a conflict was a fig leaf for Musk, acting through Tesla, to try to bully a law professor by making a serious economic threat to a law firm with which the professor had a consulting relationship. This is not the first time that Tesla has threatened to fire a law firm for employing someone who annoyed Elon Musk by doing his job. That it did so again here only emphasizes the correctness of the Court’s conclusion that Musk controls Tesla.
And, of course, it’s giving everyone yet another glimpse into the ways in which Musk will let any slight turn him into a vindictive asshole.
Meanwhile, at the very end of the week, Tesla filed a response with the court rejecting the amicus’s characterization of its conduct as “appalling” or “bullying,” while still admitting that it did, in fact, do everything Elson said, though it claims it was just raising “a potential conflict issue.”
Um. No. Again, Elson’s brief was on Tesla’s behalf, arguing that the company shouldn’t have to pay Musk his huge compensation. If there’s any “potential conflict issue” here, it seems to lie with the lawyers ostensibly representing “Tesla” while actually advocating for something that would harm Tesla and benefit Elon Musk.
Filed Under: amicus brief, charles elson, compensation, delaware court of chancery, elon musk, threats
Companies: holland & knight, tesla
Social Media’s Electoral Power: More Hype Than Reality?
from the destroying-all-your-priors dept
It’s been almost an article of faith among many (especially since 2016) that social media has been a leading cause of our collective dumbening and of the resulting situation in which a bunch of fascist-adjacent wannabe dictators are getting elected all over the place.
But we’ve always found that argument to be massively, if not totally, overblown. And the data we’ve seen has highlighted how little impact social media has actually had on elections (cable news might be a different story).
Now there’s a new study out of NYU’s Center for Social Media & Politics, which has been working through a ton of fascinating social media data over the past few years. This latest study suggests that the impact of social media on the 2020 election appears to have been minimal.
This is based on looking at the behavior of people who deactivated their Facebook and Instagram accounts in the runup to the election, and how that changed (or didn’t change) their behavior.
We use a randomized experiment to measure the effects of access to Facebook and Instagram on individual-level political outcomes during the 2020 election. We recruited 19,857 Facebook users and 15,585 Instagram users who used the platform for more than 15 min per day at baseline. We randomly assigned 27% to a treatment group that was paid to deactivate their Facebook or Instagram accounts for the 6 wk before election day, and the remainder to a control group that was paid to deactivate for just 1 wk. We estimate effects of deactivation on consumption of other apps and news sources, factual knowledge, political polarization, perceived legitimacy of the election, political participation, and candidate preferences.
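The estimation strategy the authors describe boils down to a standardized difference in means from a randomized assignment. Here is a minimal sketch of how an effect “in SD units” with a 95% CI is computed, using simulated data and hypothetical names (the paper’s actual estimation is more involved, with covariate adjustment):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated version of the design: ~27% of users assigned to deactivate,
# an outcome (e.g. a polarization index) measured afterward. True effect = 0.
n = 20_000
treated = rng.random(n) < 0.27
outcome = rng.normal(0.0, 1.0, n)

def standardized_effect(outcome, treated):
    """Difference in means, scaled by the control-group SD, with a 95% CI."""
    t, c = outcome[treated], outcome[~treated]
    sd = c.std(ddof=1)
    effect = (t.mean() - c.mean()) / sd
    se = np.sqrt(t.var(ddof=1) / len(t) + c.var(ddof=1) / len(c)) / sd
    return effect, (effect - 1.96 * se, effect + 1.96 * se)

effect, (lo, hi) = standardized_effect(outcome, treated)
# With a true effect of zero and samples this large, the CI straddles zero
# and is narrow -- the sense in which a study "rules out" effects beyond
# some small number of SDs.
```

With roughly 35,000 participants, confidence intervals this tight are what let the authors bound any effect at ±0.04 SD.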
There were a few interesting findings, though I’m not sure any are particularly surprising. The study found that users who deactivated had less knowledge of news events, but were better able to recognize disinformation.
The study also found that the deactivation had effectively no impact on “issue polarization.” This result differs from that of a similar study done in 2018, a difference the authors chalk up, potentially, to the differences between a midterm election and a general election.
The issue polarization variable is an index of eight political opinions (on immigration, repeal of Obamacare, unemployment benefits, mask requirements, foreign policy, policing, racial justice, and gender relations), with the signs of the variables adjusted so that the difference between the own-party and other-party averages is positive. These questions were chosen to focus on issues that were prominent during the study period. Neither Facebook nor Instagram deactivation significantly affected issue polarization, and the 95% CI bounds rule out effects of ±0.04 SD.
As a point of comparison for these magnitudes, ref. 5 find that Facebook deactivation reduced an overall index of political polarization prior to the 2018 midterm elections. This includes a statistically insignificant reduction of 0.06 SD in a measure of affective polarization, and a significant reduction of 0.10 SD in a measure of issue polarization. One possible explanation for the difference in effects on issue polarization is that our study took place during a presidential election, where the environment was saturated with political information and opinion from many sources outside of social media. Another possible explanation is that the set of specific issues on which we focus here may have produced different responses. As another comparison point, ref. 26 estimate that affective polarization has grown by an average of 0.021 SD per year since 1978.
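To make the index construction described above concrete, here is a hypothetical sketch (not the paper’s code; all names and data are illustrative) of sign-adjusting eight opinion items so that the own-party minus other-party gap is positive on every item, then averaging them into one index:

```python
import numpy as np

def polarization_index(items, own_party):
    """items: (n, 8) opinion responses; own_party: boolean mask of length n.

    Z-score each item, flip an item's sign when its own-party minus
    other-party gap is negative (so all items point the same direction),
    then average across items to get one index value per respondent.
    """
    z = (items - items.mean(axis=0)) / items.std(axis=0, ddof=1)
    gap = z[own_party].mean(axis=0) - z[~own_party].mean(axis=0)
    signs = np.where(gap >= 0, 1.0, -1.0)
    return (z * signs).mean(axis=1)

# Demo: simulate respondents whose party predicts their answers, leaning
# one way on four items and the other way on the rest.
rng = np.random.default_rng(1)
n = 1_000
own = rng.random(n) < 0.5
items = rng.normal(0.0, 1.0, (n, 8))
items[own, :4] += 1.0
items[own, 4:] -= 1.0
index = polarization_index(items, own)
# After sign adjustment, own-party respondents score higher on average,
# regardless of which direction each raw item originally pointed.
```

The sign adjustment is what lets eight heterogeneous opinion questions be collapsed into a single polarization measure whose scale (in SDs) matches the effect sizes quoted above.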
They also found no change in the “perceived legitimacy of the election” which is interesting given how prevalent that issue has been (especially among the Trumpist contingent). If you thought people only falsely believed the election was stolen because of Facebook, the data just doesn’t support that:
The perceived legitimacy variable is an index of agreement with six statements: i) Elections are free from foreign influence, ii) all adult citizens have equal opportunity to vote, iii) elections are conducted without fraud, iv) government does not interfere with journalists, v) government protects individuals’ right to engage in unpopular speech, and vi) voters are knowledgeable about candidates and issues. Neither Facebook nor Instagram deactivation had a significant effect, and the 95% CI bounds rule out effects of ±0.04 SD.
There’s more in the study as well, but it’s good to see more actual data and research along these lines. As a first pass, it again looks like the rush to blame social media for all the ills in the world might just be a bit overblown.
Filed Under: disinformation, elections, politics, studies
Companies: facebook, instagram, meta, nyu
To The Surprise Of Absolutely No One, Cops Under Facial Recognition Bans Are Asking Other Agencies To Run Searches For Them
from the law-fought-the-law-and-the-law-lost dept
God forbid any of you peons break a law. It doesn’t matter if you only do it once. If you get caught, it’s all on you.
But if you’re a cop, laws are, at best, suggestions. Break them if you can. Ignore them when they’re inconvenient. And treat any law or court ruling that reins in officers (and/or protects constitutional rights) as optional unless there’s no way through it but to respect it.
Cops dodge warrant requirements for cell phone location data by buying data directly from third-party data brokers. Cops avoid local laws limiting civil asset forfeiture by asking the feds to “adopt” their latest stash of ill-gotten booty, allowing themselves to benefit directly from seizures otherwise restricted in their locales.
And, now that they’re subject to facial recognition tech bans in several places around the nation, they’re ignoring those laws too. Douglas MacMillan has the details for the Washington Post.
Officers in Austin and San Francisco — two of the largest cities where police are banned from using the technology — have repeatedly asked police in neighboring towns to run photos of criminal suspects through their facial recognition programs, according to a Washington Post review of police documents.
In San Francisco, the workaround didn’t appear to help. Since the city’s ban took effect in 2019, the San Francisco Police Department has asked outside agencies to conduct at least five facial recognition searches, but no matches were returned, according to a summary of those incidents submitted by the department to the county’s board of supervisors last year.
[…]
Austin police officers have received the results of at least 13 face searches from a neighboring police department since the city’s 2020 ban — and have appeared to get hits on some of them, according to documents obtained by The Post through public records requests and sources who shared them on the condition of anonymity.
By definition, these are isolated incidents. Roughly 99.9% of the nation is free of any facial recognition tech bans. And the number of violations reported here appears (that’s a very key word) to be extremely low given the number of theoretical opportunities law enforcement officers have to break the law.
But let’s not pretend that means it’s ok. Any violation of the law is a violation of the law. No one’s letting you out of a speeding ticket because you generally follow the speed limit. And no court is just going to dismiss charges because it’s the only time you’ve ever murdered anyone.
True, violating facial recognition bans isn’t on par with murder. But it is on par with, at the very least, traffic violations. If we don’t get a free pass when we’ve been caught speeding, cops shouldn’t be given a free pass on facial recognition ban violations just because they haven’t violated the bans thousands of times.
An SFPD spokesperson confirmed that no investigations were opened and no officers were disciplined for violating the ban by asking outside agencies to run searches. The same goes for the Austin PD, which only admitted the unlawful searches had been requested after being contacted by the Washington Post. It said vague things about an investigation, but the spokesperson said nothing to suggest people would be punished or steps would be taken to prevent further lawbreaking by the PD’s law enforcers.
But here’s the real heart of the issue: what’s reported here is most likely an undercount. These violations are likely occurring far more frequently. As the article points out, law enforcement agencies rarely like to discuss use of this tech, even when presenting evidence in court. What’s leaked out into the public domain via public records requests is most likely just the tip of the iceberg.
[E]nforcing these bans is difficult, experts said, because authorities often conceal their use of facial recognition. Even in places with no restrictions on the technology, investigators rarely mention its use in police reports. And, because facial recognition searches are not presented as evidence in court — legal authorities claim this information is treated as an investigative lead, not as proof of guilt — prosecutors in most places are not required to tell criminal defendants they were identified using an algorithm, according to interviews with defense lawyers, prosecutors and judges.
“Police are using it but not saying they are using it,” said Chesa Boudin, San Francisco’s former district attorney, who said he was wary of prosecuting cases that may have relied on information the SFPD obtained in violation of the city’s ban.
Even if we take the numbers at face value, it’s still a problem. And it’s one that has existed as long as law enforcement agencies have existed. To law enforcers, laws are for other people. When they break them, it’s because they’re pursuing loftier goals, like public safety. When normal people do it, they’re just criminals. And because they’re criminals, every violation should be handled harshly. When cops do it, everyone is just expected to shrug it off as the cost of doing public safety business.
But we shouldn’t accept this, not even in limited quantities. And the cities and states that have passed these bans need to be right on top of this, demanding accountability and transparency from the law enforcement agencies they oversee. If they’re assuming cops won’t break laws they don’t like, they’re stupider than the cops they’re overseeing and twice as stupid as cops think their overseers are. If you can’t keep this from happening, why even bother passing laws?
Filed Under: austin pd, facial recognition, facial recognition ban, police, sfpd
NFL Inks Deal With Netflix To Stream A Handful Of Xmas Day Games Only
from the not-found-league dept
Are you an NFL fan? If you are, are there particular teams or games you want to watch? The obvious answer to that second question would be “yes,” though the answer to whether you’ll actually be able to watch those games is much less obvious and much more convoluted. It depends on which team, which game, which day, and which device. That’s because the NFL product has become so impossibly fractured across all kinds of broadcast and streaming partners that you have to wade through a labyrinth just to figure out if you’re subscribed to the right product for a particular game.
We talked about this already earlier this year when the NFL put a single game exclusively on Peacock. That brand new experiment didn’t go terribly, but it also didn’t go great. For a streaming event, the numbers it drew were huge. It was also surprising to learn that more people than I would have expected kept their brand new Peacock subscriptions for weeks and weeks after the game. On the other hand, the game didn’t do anything like the numbers NFL games typically do on broadcast television or on streaming services that closely resemble it, such as YouTube TV.
Meanwhile, you can currently watch NFL games on cable (if you’re in the right geographic area), on YouTube TV, on Amazon for Thursday night games, on Peacock, on ESPN Plus, or on the NFL’s own NFL+ (but only if you’re okay watching on a phone or tablet). And now, on Christmas Day for the next several years, you’ll have to be a Netflix subscriber to watch those games as well.
The first Netflix Christmas games will be this season, on December 25, 2024, (that’s a Wednesday, by the way). Netflix will get two Christmas games this year, Chiefs at Steelers and Ravens at Texans, with exact times to be announced later tonight at the NFL’s live schedule unveiling extravaganza (even the schedule release is an event now). The NFL says 2025 and 2026 will see “at least one” game on the service each Christmas. The exact terms of the deal were not disclosed.
When it comes to televised live events, the NFL is obviously king. If you look at the top 30 televised events for 2023, you will notice that 29 of them are NFL games. So it’s obvious why any streaming partner would want a piece of the game to attract those eyeballs to their platforms.
But at some point the product has to become so fractured that it has a negative effect on viewership, right? This cannot go on forever. And for all the fanfare around the Peacock exclusive game last year, there was also a lot of anger and frustration from folks who thought it was quite shitty that they had to sign up for yet another streaming service just to watch a single game, all due to the NFL’s insatiable appetite for cash.
But if the NFL is hoping to eventually unify its streaming rights, it won’t be able to do so until 2029.
Netflix has been dipping its toe into the NFL content stream with special reality-style documentaries like Quarterback and the upcoming Receiver, which star current NFL players, but this will be the first time the streamer will air live football. With NFL Sunday Ticket on YouTube TV and Thursday Night Football games on Amazon Prime, the NFL is moving online more than ever. In a few years, things will get even wilder: In 2029, the NFL can cancel all the TV deals at the same time if it wants. That would lead to an unprecedented bidding war among all the TV and streaming providers and would upend the entire NFL content world.
While true, at some point NFL fans are just going to want to know where they can watch the damned games. Having to subscribe to and/or navigate seven to ten different services just to find a game is an opportunity cost that will eventually have an effect.
Pennsylvania Once Again Shows What Broadband Corruption Looks Like: Doles Out Millions In Dodgy, Non-Transparent Grants To Comcast, Verizon In Favored Political Districts
from the do-not-pass-go,-do-not-collect-$200 dept
By now we’ve laid out the case that U.S. broadband is spotty, expensive, and slow due to regional monopolies and the corruption that protects them. Despite this, every time the U.S. decides to spend taxpayer money on broadband, said corruption usually ensures that we throw most of that money into the laps of the same giant companies responsible for our broadband woes to begin with.
America loves dumping billions of dollars into the accounts of AT&T, Comcast, Verizon, or other giants in exchange for layoffs and half-deployed networks. Companies that have lobbied for decades to crush all competition and defang regulators to ensure U.S. broadband remains as expensive and spotty as possible get billions of dollars to do very little (or nothing). It’s utterly pathological and it never changes.
Case in point: Pennsylvania Governor Josh Shapiro recently announced that the state would be doling out $204 million to deliver broadband to 100,000 Pennsylvanians in 42 counties. Officials insist that project applications were evaluated based on “experience and ability of the applicant to successfully deploy high-speed broadband service,” and “affordability standards that include a low-cost option.”
The problem: nearly all the money was simply dropped into the laps of Comcast and Verizon, the latter of which has an extremely long history of ripping off Pennsylvania telecom subsidy programs in exchange for networks that are routinely not fully delivered. Verizon was at the heart of a major scandal on this front in the 90s, and again in the 2000s when accused of neglecting its aging DSL networks.
Another problem: smaller ISPs, cooperatives, and community broadband networks (which have a solid track record of deploying more affordable access) were ignored entirely. Penn State Telecommunications Professor Sascha Meinrath tells me that community broadband ISPs and nonprofits that promised to deploy faster broadband at much lower costs were completely snubbed:
“I’ve now talked to multiple ISPs that offered a faster, cheaper service but got turned down,” he said. “And I’m like, so wait a second…what criteria were they using to decide on these two companies? There’s just a real lack of clarity as to what’s transpired here, frankly.”
Worse, Meinrath notes that the state politicians on the board of the Pennsylvania Broadband Development Authority (PBDA), the organization charged with determining who won awards, conveniently wound up driving most of the awards to their own districts:
“All four of the board members — like Republican Senator Gene Yaw — have projects in their own very small districts, which statistically speaking is an incredible occurrence, because only one tenth to one twentieth of the state is covered by these grants.”
There are questions about how any of this is even legal. In several of the grant applications the state appears to have twisted itself in knots to approve funding for Comcast and Verizon — in key political districts — without bothering to offer the slightest transparency into how the state made determinations.
Pennsylvania is also one of 17 states where telecoms like Comcast lobbied for what’s effectively a state ban on community-owned broadband networks, despite the fact such networks routinely help drive affordable access — and competition — to broken broadband markets.
In PA policy conversations, Meinrath notes that the PA government likes to pretend that this law doesn’t exist, despite the section in question being literally titled “prohibition against political subdivision broadband services deployment.” The entire state policy apparatus is custom built to make it difficult to challenge monopolies — while simultaneously denying that this is happening:
“They [giant private providers] have a right of first refusal for muni networks, but also for public private partnerships, which again, this is not even acknowledged by the state,” Meinrath said. “You can imagine if there is a law that is on the books – Title 66, paragraph 3014, subpart H – but declared to not exist, officially – it makes it very awkward.”
This is all par for the course. Politicians from both parties will wax poetic endlessly about the need to “bridge the digital divide.” But even the best intentioned are too politically timid to acknowledge that monopolization and competition problems exist, much less propose solutions.
So when pressured to “fix the problem,” their solution is almost always to throw a bunch of money into the laps of politically powerful telecom monopolies responsible for much of the problem in the first place. Companies that are, as extensions of our domestic surveillance systems, now well beyond the reach of coherent reason, accountability, or the law.
There are billions more headed to the states via the $42 billion in broadband subsidies included in the infrastructure bill. But unlike Pennsylvania’s recent grant awards, the federal process will actually involve something genuinely resembling transparency, hopefully giving small businesses, cooperatives, and community-owned broadband networks a better shot.
Still, telecom monopolies like Comcast and Verizon are there too, working overtime to ensure they not only hoover up the lion’s share of the funding, but don’t face any sort of “onerous” requirements, like having to actually deliver uniform, affordable broadband access to poor people.
Filed Under: broadband, corruption, fiber, grants, high speed internet, josh shapiro, pennsylvania, subsidies, telecom
FTC Hints At Regulatory Action Against Automakers For Terrible Privacy Practices
from the I-can't-drive-55 dept
In 2023, Mozilla released a report noting that modern cars had the worst security and privacy standards of any major technology industry the organization tracks. That was followed by an NYT report earlier this year showing how automakers routinely hoover up oodles of consumer driving and phone info, then sell access to that data to auto insurance companies looking to justify rate hikes.
The very least the auto industry can do is make these transactions clear to car owners, but most of the time they can’t even do that.
Now it looks like the FTC might be considering legal action against the auto industry for lax privacy standards. An FTC blog post indicates that the “connected car” industry has been on the agency’s “radar for years,” and hinted at potential future actions:
“Car manufacturers—and all businesses—should take note that the FTC will take action to protect consumers against the illegal collection, use, and disclosure of their personal data.”
The FTC is being prodded into action by the concerns of Senator Ron Wyden, whose office launched an investigation finding that automakers routinely collect not only driver behavior data but data from connected phones, sell access to a myriad of often dodgy third parties and data brokers, and routinely fail to make any of those transactions meaningfully clear to car owners.
Usually customer acceptance of such data monetization isn’t buried in your car paperwork; it’s buried in the user agreement connected to automakers’ car apps or roadside assistance apps. This is, it should be noted, the same industry that’s fighting tooth and nail against “right to repair” reforms under the pretense that it just cares a whole lot about consumer privacy and security.
Of course the FTC lacks the resources, staff, and authority (quite by lobbying design) to meaningfully police U.S. tech privacy violations at the scale they’re happening. And even should the FTC take action, any fines would likely comprise a tiny fraction of the money made from non-transparently and haphazardly monetizing drivers’ every fart for the better part of the last two decades.
And whatever fines that do get levied are often reduced further (or eliminated entirely) thanks to multi-year legal fights within an increasingly corrupt court system.
Still, it’s important to try to have standards. It’s what separates us from potatoes.
As Wyden’s office has made clear, the stakes of our corrupt failure to pass baseline privacy laws or regulate data brokers continue to rise. That was demonstrated pretty clearly by his office’s recent discovery that a data broker had been selling abortion clinic location data to right-wing activists, who then took to targeting vulnerable women with health care disinformation.
But between regulators that have been steadily boxed in by thirty years of lobbying and corrupt court rulings, and a Congress that’s too corrupt to function, it seems like we’ll be waiting a long time to see meaningful reform on this front. And that reform is only likely to come courtesy of a privacy scandal whose scope and impact we probably can’t imagine.
Filed Under: automakers, cars, data brokers, ftc, location data, privacy, privacy law, ron wyden, security
Techdirt Podcast Episode 392: Platform Moderation Or Individual Control?
As decentralized social media experiments continue, we’re getting more and more opportunities to really understand the impact of decentralized systems and how they are received by users. Amy Zhang, Assistant Professor of Computer Science at the University of Washington, has been studying and thinking about these issues a lot, and this week she joins us on the podcast to discuss a recent paper and, in general, how users are faring in the world of decentralized social media and content moderation.
Follow the Techdirt Podcast on Soundcloud, subscribe via Apple Podcasts or Spotify, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
Filed Under: amy zhang, content moderation, decentralization, podcast, social media