10 (Not So) Hidden Dangers of Age Verification
from the it-causes-real-problems dept
It’s nearly the end of 2025, and roughly half of U.S. states, along with the UK, now require you to upload your ID or scan your face to watch “sexual content.” A handful of states and Australia now impose various requirements to verify your age before you can create a social media account.
Age-verification laws may sound straightforward to some: protect young people online by making everyone prove their age. But in reality, these mandates force users into one of two flawed systems—mandatory ID checks or biometric scans—and both are deeply discriminatory. These proposals burden everyone’s right to speak and access information online, and structurally exclude the very people who rely on the internet most. In short, although these laws are often passed with the intention of protecting children from harm, the reality is that they harm both adults and children.
Here’s who gets hurt, and how:
1. Adults Without IDs Get Locked Out
Document-based verification assumes everyone has the right ID, in the right name, at the right address. About 15 million adult U.S. citizens don’t have a driver’s license, and 2.6 million lack any government-issued photo ID at all. Another 34.5 million adults don’t have a driver’s license or state ID with their current name and address.
- 18% of Black adults don’t have a driver’s license at all.
- Black and Hispanic Americans are disproportionately less likely to have current licenses.
- Undocumented immigrants often cannot obtain state IDs or driver’s licenses.
- People with disabilities are less likely to have current identification.
- Lower-income Americans face greater barriers to maintaining valid IDs.
Some laws allow platforms to ask for financial documents like credit cards or mortgage records instead. But these alternatives overlook the fact that nearly 35% of U.S. adults don’t own homes, and close to 20% of households don’t have credit cards. Immigrants, regardless of legal status, may also be unable to obtain credit cards or other financial documentation.
2. Communities of Color Face Higher Error Rates
Platforms that rely on AI-based age-estimation systems often use a webcam selfie to guess users’ ages. But these algorithms don’t work equally well for everyone. Research has consistently shown that they are less accurate for people with Black, Asian, Indigenous, and Southeast Asian backgrounds; that they often misclassify those adults as being under 18; and that they sometimes take longer to process, creating unequal access to online spaces. This mirrors the well-documented racial bias in facial recognition technologies. The result is that the technology’s inherent biases can block people from speaking online or accessing others’ speech.
3. People with Disabilities Face More Barriers
Age-verification mandates also hit people with disabilities especially hard. Facial recognition systems routinely fail to recognize faces with physical differences, affecting an estimated 100 million people worldwide who live with facial differences, and “liveness detection” can exclude folks with limited mobility. As these technologies become gatekeepers to online spaces, people with disabilities find themselves increasingly blocked from essential services and platforms, with no specified appeals processes that account for disability.
Document-based systems also don’t solve this problem—as mentioned earlier, people with disabilities are also less likely to possess current driver’s licenses, so document-based age-gating technologies are equally exclusionary.
4. Transgender and Non-Binary People Are Put At Risk
Age-estimation technologies perform worse on transgender individuals and cannot classify non-binary genders at all. For the 43% of transgender Americans who lack identity documents that correctly reflect their name or gender, age verification creates an impossible choice: provide documents with dead names and incorrect gender markers, potentially outing themselves in the process, or lose access to online platforms entirely—a risk that no one should be forced to take just to use social media or access legal content.
5. Anonymity Becomes a Casualty
Age-verification systems are, at their core, surveillance systems. By requiring identity verification to access basic online services, we risk creating an internet where anonymity is a thing of the past. For people who rely on anonymity for safety, this is a serious issue. Domestic abuse survivors need to stay anonymous to hide from abusers who could track them through their online activities. Journalists, activists, and whistleblowers regularly use anonymity to protect sources and organize without facing retaliation or government surveillance. And in countries under authoritarian rule, anonymity is often the only way to access banned resources or share information without being silenced. Age-verification systems that demand government IDs or biometric data would strip away these protections, leaving the most vulnerable exposed.
6. Young People Lose Access to Essential Information
Because state-imposed age-verification rules either block young people from social media or require them to get parental permission before logging on, they can deprive minors of access to important information about their health, sexuality, and gender. Many U.S. states mandate “abstinence only” sexual health education, making the internet a key resource for education and self-discovery. But age-verification laws can end up blocking young people from accessing that critical information. And this isn’t just about porn: it’s about sex education, mental health resources, and even important literature. Some states and countries may start going after content they deem “harmful to minors,” which could include anything from books on sexual health to art, history, and even award-winning novels. And let’s be clear: these laws often get used to target anything that challenges certain political or cultural narratives, from diverse educational materials to media that simply includes themes of sexuality or gender diversity. What begins as a “protection” for kids could easily turn into a full-on censorship movement, blocking content that’s actually vital for minors’ development, education, and well-being.
Age-verification laws are also especially harmful to homeschoolers, who rely on the internet for research, online courses, and exams. For many, it is central to their education and social lives, and it is crucial to their mental health, as many homeschoolers already struggle with isolation. Restricting access would cut them off from resources essential to their education and well-being.
7. LGBTQ+ Youth Are Denied Vital Lifelines
For many LGBTQ+ young people, especially those with unsupportive or abusive families, the internet can be a lifeline. For young people facing family rejection or violence due to their sexuality or gender identity, social media platforms often provide crucial access to support networks, mental health resources, and communities that affirm their identities. Age-verification systems that require parental consent threaten to cut them off from these supports.
When parents must consent to or monitor their children’s social media accounts, LGBTQ+ youth who lack family support lose these vital connections. LGBTQ+ youth are also disproportionately likely to be unhoused and lack access to identification or parental consent, further marginalizing them.
8. Youth in Foster Care Systems Are Completely Left Out
Age verification bills that require parental consent fail to account for young people in foster care, particularly those in group homes without legal guardians who can provide consent, or with temporary foster parents who cannot prove guardianship. These systems effectively exclude some of the most vulnerable young people from accessing online platforms and resources they may desperately need.
9. All of Our Personal Data is Put at Risk
An age-verification system also creates acute privacy risks for adults and young people. Requiring users to upload sensitive personal information (like government-issued IDs or biometric data) to verify their age creates serious privacy and security risks. Under these laws, users would not just momentarily display their ID, the way one does when buying alcohol at a store. Instead, they’d submit their ID to third-party companies, raising major concerns over who receives, stores, and controls that data. Once uploaded, this personal information could be exposed, mishandled, or breached, as we’ve seen with past data hacks. Age-verification systems are no strangers to being compromised—companies like AU10TIX and platforms like Discord have faced high-profile data breaches, exposing users’ most sensitive information for months or even years.
The more places personal data passes through, the higher the chances of it being misused or stolen. Users are left with little control over their own privacy once they hand over these immutable details, making this approach to age verification a serious risk for identity theft, blackmail, and other privacy violations. Children are already a major target for identity theft, and these mandates perversely increase the risk that they will be harmed.
10. All of Our Free Speech Rights Are Trampled
The internet is today’s public square—the main place where people come together to share ideas, organize, learn, and build community. Even the Supreme Court has recognized that social media platforms are among the most powerful tools ordinary people have to be heard.
Age-verification systems inevitably block some adults from accessing lawful speech while letting some users under 18 slip through anyway. Because the systems are both over-inclusive (blocking adults) and under-inclusive (failing to block minors), they restrict lawful speech in ways that violate the First Amendment.
The Bottom Line
Age-verification mandates create barriers along lines of race, disability, gender identity, sexual orientation, immigration status, and socioeconomic class. While these requirements threaten everyone’s privacy and free-speech rights, they fall heaviest on communities already facing systemic obstacles.
The internet is essential to how people speak, learn, and participate in public life. When access depends on flawed technology or hard-to-obtain documents, we don’t just inconvenience users; we deepen existing inequalities and silence the people who most need these platforms. As outlined above, every available method—facial age estimation, document checks, financial records, or parental consent—systematically excludes or harms marginalized people. The real question isn’t whether these systems discriminate, but how extensively.
Republished from the EFF’s Deeplinks blog.
Filed Under: access to information, age verification, anonymity, free speech, privacy
UK Law Enforcement Pushed Hard To Maintain Access To Deeply Flawed Facial Recognition Tech
from the but-the-good-stuff-works-too-well! dept
While each iteration presents a chance to improve, there’s a very real reason why facial recognition tech will do a bit of stagnating. And that reason is the biggest market for this tech: law enforcement agencies.
In 2019, the US National Institute of Standards and Technology (NIST) studied 189 different facial recognition algorithms. The results were conclusive: every single one of them performed worse when asked to “recognize” anything other than white male faces. Asian and African American faces were up to 100 times more likely to be misidentified by the tech. While some algorithms were a little bit better, the average across the board was bad news for people who’ve already been subjected to decades of biased policing.
Adding tech to existing biases only allows them to compound the inequities faster. That’s something that was pointed out less than a year later to the EU Parliament. Allowing cops to control both the input and the output just means the systems will generate plausible deniability for racist policing, rather than create a playing field that’s a bit more level.
Not only does facial recognition tech have a built-in bias problem, it also seems to have a problem with recognizing faces, no matter what color those faces are. Police forces in the UK have seen this happen repeatedly, racking up alarming false positive rates during tech rollouts. Despite these failures (and the unacknowledged flip side of false positives: false negatives), the UK government has continued to expand facial recognition programs.
The UK’s version of NIST, the National Physical Laboratory (NPL), performed its own examination of the tech currently being used by UK law enforcement. Its conclusions were just as unsurprising:
UK forces use the police national database (PND) to conduct retrospective facial recognition searches, whereby a “probe image” of a suspect is compared to a database of more than 19 million custody photos for potential matches.
The Home Office admitted last week that the technology was biased, after a review by the National Physical Laboratory (NPL) found it misidentified Black and Asian people and women at significantly higher rates than white men, and said it “had acted on the findings”.
These findings were passed on to law enforcement by the Home Office last September. The National Police Chiefs’ Council (NPCC) responded about as well as it could: it ordered any users of the tech examined by the NPL to adjust sensitivity settings to raise the “confidence threshold” for matches. This order was meant to counteract (to a point) the false positives generated by the tech’s inability to accurately match images involving women, Black people, and pretty much anyone of any race under the age of 40. (Whew. That’s a lot of failure.)
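To make that tradeoff concrete, here’s a minimal sketch of how a confidence threshold filters candidate matches. The names and similarity scores are hypothetical, not output from any vendor’s actual system:

```python
# A minimal, hypothetical sketch of a match "confidence threshold."
# Scores are made-up similarity values from a face-matching model,
# not data from any real system.

def candidate_matches(scores, threshold):
    """Return only the candidates whose similarity clears the threshold."""
    return [(person, score) for person, score in scores if score >= threshold]

# Hypothetical probe results against a custody-image database.
results = [("A", 0.62), ("B", 0.71), ("C", 0.88), ("D", 0.93)]

# A low threshold yields more "investigative leads," including weak,
# likely-false matches.
print(candidate_matches(results, 0.60))  # all four candidates

# The raised threshold the NPCC ordered suppresses weak (often false)
# matches, at the cost of also discarding some genuine ones.
print(candidate_matches(results, 0.85))  # only C and D
```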
Well, that apparently angered a whole lot of UK officers and supervisors. With the threshold raised, fewer matches (and, presumably, fewer incorrect matches) were being generated. Rather than recognize this as a necessary compromise to offset faulty tech, they decided to get bitchy about not being given enough false positives to act on.
That decision was reversed the following month after forces complained the system was producing fewer “investigative leads”. NPCC documents show that the higher threshold reduced the number of searches resulting in potential matches from 56% to 14%.
Yep, the NPCC rolled this decision back because officers weren’t getting as many matches as they were used to getting. Sure, the matches they were generating were likely much better than the ones they had generated in the past, but accuracy doesn’t seem to matter to UK law enforcement. It collectively pushed back hard enough to get this order reversed, allowing UK agencies to once again exploit the known, scientifically studied limitations of the facial recognition tech they were using. They valued quantity over quality — the sort of thing that naturally lends itself to the biased policing efforts these officers prefer to engage in.
Chief Constable Amanda Blakeman, an NPCC lead, claims there’s a tradeoff being made here that will ultimately benefit the public, even if it means more of them will be falsely arrested, and even as the tech’s false negatives let more actual criminals escape justice.
“The decision to revert to the original algorithm threshold was not taken lightly and was made to best protect the public from those who could cause harm, illustrating the balance that must be struck in policing’s use of facial recognition.”
Blakeman insists additional training is all that’s needed to overcome the known limitations of the tech. Anyone who has ever attended mandatory training knows this simply isn’t true. All that means is that a bunch of people will doze or daydream through these sessions and pencil whip whatever form is given to them that will supposedly “verify” that all the training they never paid attention to has been put to use. Blakeman even said some of this training will be “reissued,” which makes it clear no one was paying any attention to it the first time around.
It’s fucking amazing. When confronted with the fact that their tech is flawed, UK law enforcement agencies demanded everything be reverted back to the fully-broken “normal” they’d been allowed to abuse since the tech’s inception. And now that this is all out in the open, police spokespeople are back to pretending law enforcement has anything to do with competently and carefully enforcing laws.
Filed Under: biased policing, facial recognition, false positives, home office, national physical laboratory, national police chiefs council, uk
LG Forces TV Owners To Use Microsoft ‘AI’ Copilot App You Can’t Uninstall And Nobody Asked For
from the our-sad-desperation-means-the-product-is-good dept
If your product is even a third as innovative and useful as you claim it is, you shouldn’t have to go around trying a little too hard to convince people. The product’s usefulness should speak for itself. And you definitely shouldn’t be forcing people to use products they’ve repeatedly told you they don’t actually appreciate or want.
LG and Microsoft learned that lesson recently when LG began installing Microsoft’s Copilot “AI” assistant on people’s televisions, without any way to disable it:
“According to affected users, Copilot appears automatically after installing the latest webOS update on certain LG TV models. The feature shows up on the home screen alongside streaming apps, but unlike Netflix or YouTube, it cannot be uninstalled.”
To be clear, this isn’t the end of the world. Users can apparently “hide” the app, but people are still generally annoyed at the lack of control, especially coming from two companies with a history of this sort of behavior.
Many people just generally don’t like Copilot, much like they didn’t really like a lot of the nosier features integrated into Windows 11. Or they don’t like being forced to use Copilot when they’d prefer to use ChatGPT or Gemini.
You only have to peruse this Reddit thread to get a sense of the annoyance. You can also head over to the Microsoft forums to get a sense of how Microsoft customers are very very tired of all the forced Copilot integration across Microsoft’s other products, even though you can (sometimes) disable the integration.
But “smart” TVs are already a sector where user choice and privacy take a backseat to the primary goal of collecting and monetizing viewer behavior. And LG has been at the forefront of disabling features if you try to disconnect from the internet. So there are justifiable privacy concerns raised over this tight integration (especially in America, which is too corrupt to pass even a baseline internet privacy law).
This is also coming on the heels of widespread backlash over another Microsoft “AI” feature, Recall. Recall takes screenshots of your PC’s activity every five seconds, giving you an “explorable timeline of your PC’s past” that Microsoft’s AI-powered assistant, Copilot, can then help you peruse.
Here, again, there was widespread condemnation over the privacy implications of such tight integration. Microsoft’s response was to initially pretend to care, only to double down. It’s worth noting that Microsoft’s forced AI integration into its half-assed journalism efforts, like MSN, has also been a hot, irresponsible mess. So this is not a company likely to actually listen to its users.
It’s not like Microsoft hasn’t had some very intimate experiences surrounding the backlash of forcing products down customers’ throats. But like most companies, Microsoft knows U.S. consumer protection and antitrust reform has been beaten to a bloody pulp, and despite the Trump administration’s hollow and performative whining about the power of “big tech,” big tech giants generally have carte blanche to behave like assholes for the foreseeable future, provided they’re polite to the dim autocrats in charge.
Filed Under: ai, antitrust, consumers, copilot, pc, privacy, smart tvs, software
Companies: lg, microsoft
Google Built Its Empire Scraping The Web. Now It’s Suing To Stop Others From Scraping Google
from the the-open-web-is-closing dept
Last week, Google filed suit against SerpApi, a scraping company that helps businesses pull data from Google search results. The lawsuit claims SerpApi violated DMCA Section 1201 by circumventing Google’s “technological protection measures” to access search results—and the copyrighted content within them—without permission.
There’s just one problem with this theory: Google built its entire business on scraping the web without asking permission first. And now it wants to use one of the most abused provisions in copyright law to stop others from doing something functionally similar to what made Google a tech giant in the first place.
The lawsuit comes on the heels of Reddit’s equally problematic anti-scraping suit from October—which we called an attack on the open internet. Reddit sued Perplexity and various scraping firms (including SerpApi), claiming they violated 1201 by circumventing… Google’s technological protections. Reddit was mad it had cut a multi-million dollar licensing deal with Google for access to Reddit content, and these firms were routing around both that deal and Google itself to provide similar results to users. The legal theory was bizarre: Reddit didn’t own the copyright on user posts, and the scrapers weren’t even touching Reddit directly—yet Reddit claimed standing to sue based on circumventing someone else’s TPMs.
So now, Google has filed its own, similar lawsuit, going after SerpApi directly, focused on how SerpApi gets around its attempts to block such scraping. Google released a blog post defending this lawsuit:
We filed a suit today against the scraping company SerpApi for circumventing security measures protecting others’ copyrighted content that appears in Google search results. We did this to ask a court to stop SerpApi’s bots and their malicious scraping, which violates the choices of websites and rightsholders about who should have access to their content. This lawsuit follows legal action that other websites have taken against SerpApi and similar scraping companies, and is part of our long track record of affirmative litigation to fight scammers and bad actors on the web.
Google follows industry-standard crawling protocols, and honors websites’ directives over crawling of their content. Stealthy scrapers like SerpApi override those directives and give sites no choice at all. SerpApi uses shady back doors — like cloaking themselves, bombarding websites with massive networks of bots and giving their crawlers fake and constantly changing names — circumventing our security measures to take websites’ content wholesale. This unlawful activity has increased dramatically over the past year.
SerpApi deceptively takes content that Google licenses from others (like images that appear in Knowledge Panels, real-time data in Search features and much more), and then resells it for a fee. In doing so, it willfully disregards the rights and directives of websites and providers whose content appears in Search.
Look, SerpApi’s behavior is sketchy. Spoofing user agents, rotating IPs to look like legitimate users, solving CAPTCHAs programmatically—Google’s complaint paints a picture of a company actively working to evade detection. But the legal theory Google is deploying to stop them threatens something far bigger than one shady scraper.
Google’s entire business is built on scraping as much of the web as possible without first asking permission. The fact that they now want to invoke DMCA 1201—one of the most consistently abused provisions in copyright law—to stop others from scraping them exposes the underlying problem with these licensing-era arguments: they’re attempts to pull up the ladder after you’ve climbed it.
Just from a straight up perception standpoint, it looks bad.
To be clear: this isn’t about defending SerpApi. They appear to be bad actors who built a business on evading detection systems. The problem is that Google chose to go after them using a legal weapon with a long history of collateral damage. When you invoke Section 1201 against web scraping, you’re not just targeting one sketchy company—you’re potentially rewriting the rules for how the entire open web functions. The choice of weapon matters, especially when that weapon has been repeatedly abused to stifle legitimate competition and could now be turned against the very openness that made the modern internet possible.
For many years, we’ve discussed the many, many problems of DMCA Section 1201. It’s the “anti-circumvention” part of the law, which says that any attempt to get around a “technological protection measure” (or even just telling someone else how to get around one) can violate the law, even if the TPM in question is wholly ineffective, and even if the circumvention has nothing to do with copyright infringement.
That has led to years of abusive practices by companies that put silly, pointless “TPMs” in place just to be able to use the law to limit competition. There were lawsuits over printer ink cartridges and garage door openers, among other things.
Here, Google is saying that it put in place a TPM in January of 2025 called “SearchGuard” (which sounds like an advanced CAPTCHA of some sort) to prevent SerpApi from scraping its search results, but SerpApi figured out a way around it:
When SearchGuard launched in January 2025, it effectively blocked SerpApi from accessing Google’s Search results and the copyrighted content of Google’s partners. But SerpApi immediately began working on a means to circumvent Google’s technological protection measure. SerpApi quickly discovered means to do so and deployed them.
SerpApi’s answer to SearchGuard is to mask the hundreds of millions of automated queries it is sending to Google each day to make them appear as if they are coming from human users. SerpApi’s founder recently described the process as “creating fake browsers using a multitude of IP addresses that Google sees as normal users.”
SerpApi’s fakery takes many forms. For example, when SerpApi submits an automated query to Google and SearchGuard responds with a challenge, SerpApi may misrepresent the device, software, or location from which the query is sent in order to solve the challenge and obtain authorization to submit queries. Additionally or alternatively, SerpApi may solve SearchGuard’s challenge with a “legitimate” request and then syndicate the resulting authorization, that is, share it with unauthorized machines around the world, to enable their “fake browsers” to generate automated queries that appear to Google as authorized. It also uses automated means to bypass CAPTCHAs, another aspect of SearchGuard that tests users to ensure they are humans rather than machines.
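At its most basic, the kind of disguise described in the complaint is just forged HTTP metadata. Here’s a minimal, hypothetical illustration (this is not SerpApi’s code; evasion at its scale reportedly involves vast proxy networks and automated challenge-solving):

```python
# A hypothetical illustration of the simplest form of scraper disguise:
# an automated client presenting a forged browser identity to a server.
import requests

headers = {
    # Claim to be an ordinary desktop Chrome browser rather than a bot.
    "User-Agent": (
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
        "AppleWebKit/537.36 (KHTML, like Gecko) "
        "Chrome/120.0.0.0 Safari/537.36"
    ),
}

# The server sees what looks like a human visitor's browser.
# "example.com" is a placeholder, not an actual target.
resp = requests.get("https://example.com/search?q=test", headers=headers)
print(resp.status_code)
```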
Getting around these protections eats up Google’s resources, and sure, that must be annoying for Google. But the real motivation shows up when Google gets to the economics of the situation. Google has started cutting licensing deals with content partners—most notably the multi-million dollar Reddit deal—and now those partners are pissed that SerpApi lets others access similar data without paying anyone:
For Google, SerpApi’s automated scraping not only consumes substantial computing resources without payment, but also disrupts Google’s content partnerships. Google licenses content so that it can enhance the Search results it provides to users and thereby boost its competitive standing. SerpApi undermines Google’s substantial investment in those licenses, making the content available to other services that need not incur similar costs.
SerpApi’s scraping of Google Search results also impacts the rights holders who license content to Google. Without permission or compensation, SerpApi takes their content from Google and widely distributes it for use by third parties. That, in turn, threatens to disrupt Google’s relationship with the rights holders who look to Google to prevent the misappropriation of the content Google displays. At least one Google content partner, Reddit, has already sued SerpApi for its misconduct.
This is where the 1201 theory becomes genuinely dangerous. Google’s argument, if accepted, provides a roadmap for any website operator who wants to lock down their content: slap on a trivial TPM—a CAPTCHA, an IP check, anything—and suddenly you can invoke federal law against anyone who figures out how to get around it, even if their purpose has nothing to do with copyright infringement.
The implications spiral outward quickly. If Google succeeds here, what stops every major website from deciding they want licensing revenue from the largest scrapers? Cloudflare could put bot detection on the huge swath of the internet it serves and demand Google pay up. WordPress could do the same across its massive network. The open web—built on the assumption that published content is publicly accessible for indexing and analysis—becomes a patchwork of licensing requirements, each enforced through 1201 threats.
That doesn’t seem good for the prospects of a continued open web.
Google’s legal theory has another significant problem: the requirement that a TPM must “effectively control” access. Just last week, a court rejected Ziff Davis’s attempt to turn robots.txt into a 1201 violation when OpenAI allegedly ignored its crawling restrictions. The court’s reasoning is directly applicable here:
Robots.txt files instructing web crawlers to refrain from scraping certain content do not “effectively control” access to that content any more than a sign requesting that visitors “keep off the grass” effectively controls access to a lawn. On Ziff Davis’s own telling, robots.txt directives are merely requests and do not effectively control access to copyrighted works. A web crawler need not “appl[y] . . . information, or a process or a treatment,” in order to gain access to web content on pages that include robots.txt directives; it may access the content without taking any affirmative step other than impertinently disregarding the request embodied in the robots.txt files. The FAC therefore fails to allege that robots.txt files are a “technological measure that effectively controls access” to Ziff Davis’s copyrighted works, and the DMCA section 1201(a) claim fails for this reason.
Google will argue SearchGuard is different—it’s more than a polite request, it actively challenges and blocks scrapers. But if SerpApi can routinely bypass it by spoofing browsers and rotating IPs, does it really “effectively control” access? Or is it just a slightly more sophisticated “keep off the grass” sign that determined actors can ignore?
This question matters enormously because it determines whether the statute that was supposed to prevent piracy of CDs and DVDs now also governs every attempt to access publicly-available web pages through automated means.
For decades, we’ve operated under a system where robots.txt represented a voluntary, good-faith approach to web crawling. The major players respected these directives not because they had to, but because maintaining that norm benefited everyone. That system is breaking down, not because of SerpApi, but because of the rise of scrapers focused on LLM training, mixed with other companies wanting to find licensing deals to get a cut of the money flows. Reddit and Google negotiating licensing deals over open web content was a warning sign of all of this, and now it’s spilling out into the courts with questionable 1201 claims.
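For a sense of just how voluntary that system is, here’s a minimal sketch using Python’s standard library. The site and crawler name are placeholders; the key point is that the directives only bind crawlers that choose to consult them:

```python
# A minimal sketch of the voluntary robots.txt protocol. Nothing here
# "effectively controls" access; a crawler must opt in to the check.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")  # placeholder site
rp.read()  # fetch and parse the site's directives

url = "https://example.com/search-results"
if rp.can_fetch("MyCrawler", url):  # "MyCrawler" is a made-up bot name
    print("robots.txt permits fetching this page.")
else:
    print("robots.txt asks crawlers to keep out; a polite one complies.")

# An impolite crawler can simply skip this check and fetch the page
# anyway, which is why the court likened robots.txt to a
# "keep off the grass" sign.
```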
Both Reddit and Google frame this as protecting the open internet from bad actors. But pulling up the ladder after you’ve climbed it isn’t protection—it’s rent-seeking. Google built an empire on the assumption that publicly accessible web content could be freely scraped and indexed. Now it wants to rewrite the rules… using Hollywood’s favorite tool to block access to information.
The real problem isn’t that Google is fighting back against SerpApi’s evasive tactics. It’s that they chose to fight using a legal weapon that, if successful, fundamentally changes how we understand access to the open web. Section 1201 has already been wildly abused to stifle competition in everything from printer cartridges to garage door openers. Extending it to cover basic web scraping because SerpApi seems sketchy threatens the foundational assumption that published web content is accessible for indexing, research, and analysis.
Google has the resources to solve this problem through better engineering or by raising the actual cost of evasion high enough that SerpApi’s business model fails. Instead, they’ve opted for a legal shortcut that, if it works, will reshape the internet in ways that go far beyond one sketchy scraping company.
The internet is changing, and legitimate questions exist about how web scraping should function in an era of large language models and AI training. But those questions won’t be answered well by stretching copyright law to cover something it was never designed for, and empowering every website operator to demand licensing fees simply by putting up a CAPTCHA.
That’s not protecting the open web. That’s closing it.
Filed Under: 1201, anti-circumvention, circumvention, copyright, dmca 1201, licensing, open web, robots.txt, webcrawling
Companies: google, reddit, serpapi
Daily Deal: The JavaScript DOM Game Developer Bundle
from the good-deals-on-cool-stuff dept
The JavaScript DOM Game Developer Bundle has 8 courses to help you master coding fundamentals. Courses cover JavaScript DOM, Coding, HTML 5 Canvas, and more. You’ll learn how to create your own fun, interactive games. It’s on sale for $30.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Filed Under: daily deal
MAGA Legislators Want To Add Mercenaries To Trump’s Perverse Take On The ‘War On Drugs’
from the rogue-state-seeks-additional-rogues dept
Yeah, the economy sucks and trade-war tariff agendas are only making things worse. But as Trump promised/threatened during a recent national address from the White House, things are turning around, even if you (MAGA voters especially) are too stupid to see it.
It looks like the first growth market to see a significant increase might be law firms specializing in maritime law. (And those of you who specialize in Third Amendment law might want to hang around for a bit before becoming Costco greeters or whatever.)
The Trump administration has been straight up murdering people in international waters for the past few months. The regime’s “shoot first, demand all questioners be hit with sedition charges” plan hasn’t exactly worked out. Questions are actually being asked, and in response, the administration has been engaged in some last-minute retcon. Following the controversial boat strikes, the government has now declared drug cartels and drugs themselves to be terrorists worthy of extrajudicial killings.
Not only has Trump declared a controlled substance to be a “weapon of mass destruction” (fentanyl), he’s also pretty much directly asked oil companies if they’re interested in seeing a foreign government overthrown — an offering that ensures the CIA won’t be getting the holidays off this year.
Trump kills people in boats and commandeers Venezuelan oil tankers while relying on plenty of specious legal assertions. If there’s anyone who loves specious legal assertions, it’s the people who worship Donald Trump. This is the new hotness awaiting us in 2026: the addition of mercenaries to an undeclared war of opportunity.
Here’s Senator Mike Lee with a proposal to allow regular-ass Americans to participate in actions that are already extremely questionable in terms of legality.
U.S. Senator Mike Lee (R-UT) introduced legislation today that would allow private entities to stop drug cartel smuggling and violence. The Cartel Marque and Reprisal Authorization Act authorizes President Trump, as provided under the Constitution, to commission American operators under letters of marque to seize cartel property and persons on land or sea. Representative Tim Burchett (R-TN) introduced the House version of the legislation.
“The Constitution provides for Letters of Marque and Reprisal as a tool against the enemies of the United States,” said Senator Mike Lee. “Cartels have replaced corsairs in the modern era, but we can still give private American citizens and their businesses a stake in the fight against these murderous foreign criminals. The Cartel Marque and Reprisal Authorization Act will revive this historic practice to defend our shores and seize cartel assets.”
Lee’s bill is hardly worth reading. It only runs three pages and basically says “the president can hire whatever mercenaries he wants” before heading to a conclusion that claims the president can declare whoever he wants to be a “cartel” and therefore a worthy target of whatever sort of Blackwater-style murdering of civilians might ensue.
Of course, every Senate bill must be matched with something similar in the House. No one specifically asked congressional reps Tim Burchett and Mark Messmer to speak up, but they’ll be damned if their bootlicking will go unnoticed.
Here’s a bit of Rep. Burchett’s CV:
On March 28, 2023, Burchett responded to the Covenant School shooting, where three nine-year-old students and three staff members were killed in Nashville, by telling reporters: “It’s a horrible, horrible situation, and we’re not going to fix it. Criminals are gonna be criminals. And my daddy fought in the second world war, fought in the Pacific, fought the Japanese, and he told me, he said, ‘Buddy,’ he said, ‘if somebody wants to take you out, and doesn’t mind losing their life, there’s not a whole heck of a lot you can do about it.'” Burchett also said he sees no “real role” for Congress in reducing gun violence, other than to “mess things up”.
[…]
After a local D.J. was killed and 22 others were wounded in the 2024 Kansas City parade shooting, Burchett inaccurately identified an adult attendee of the Kansas City rally, Denton Loudermill Jr., as the shooter, claiming he was an “illegal alien”. Burchett’s social media post received 1.4 million views.
Yeah, so he’s one of those people.
Burchett’s buddy on this one is Rep. Mark Messmer, who used to be a pretty normal person before being elected to his current position. Now he’s just a guy who says stuff like this:
“I agree with President Trump that drug cartels are foreign terrorist organizations and are a serious threat to all Americans,” said Rep. Messmer. “The Cartel Marque and Reprisal Authorization Act of 2025 would add another arrow in our national security quiver, guaranteeing that President Trump has all the authority he needs to protect our citizens from criminal and terrorist-linked threats.”
These two rubes were at least ahead of the game: they tossed their version into the congressional pool back in February 2025, and it has been copied word-for-word by Sen. Mike Lee, who looks like he’s desperately trying to shore up his toady credentials now that Trump has pretty much declared war on anything traversing international waters in the vicinity of Venezuela. Everything old is new again, except for Donald Trump, who is older than last time and far less likely to remember this idea got pitched months ago.
The Trump administration has never seen a bad idea it can’t make worse. And while Trump has chosen to legislate from the executive office as often as possible, he’ll always have a place in his heart for the desperate sycophants who are willing to give him whatever he wants, no matter the cost to their own careers.
And there are plenty of violent, bigoted sycophants in the private sector just dying for an opportunity to get their violent racism on. They, too, are now being given a chance to claim a chair at the Big Boy table and to engage in lawlessness this administration will always celebrate, rather than condemn.
Filed Under: boat strikes, drug war, due process, extrajudicial killings, mark messmer, mercenaries, mike lee, murder, tim burchett, trump administration, venezuela
Larry Ellison Is Very Excited To Destroy What’s Left Of CNN, Sweetens The Pot For Hostile Warner Takeover
from the building-state-television dept
We’ve noted how right wing billionaire Larry Ellison, as part of his attempt to control the entirety of media, had launched a $108 billion hostile takeover bid for Warner Brothers backed by Saudi cash.
Larry, as we’ve seen with CBS and his interest in TikTok, is trying to convert what’s left of U.S. media into a giant safe space for affluent right wing autocrats and the right wing culture war grievance and infotainment complex (as we just saw CBS and Bari Weiss demonstrate in vivid detail).
In Larry’s way on the road to dominating media sits Netflix, which had already struck its own $82.7 billion deal with Warner Bros. The Warner board has consistently supported the Netflix deal as the safer option, and last week rejected the CBS/Paramount/Ellison family hostile takeover bid.
The Warner board generally had two major concerns: they were worried that this weird assortment of Saudi money wasn’t fully backed by Larry, and therefore wasn’t particularly reliable. And they were worried that the inclusion of the Saudis would trigger a complicated national security regulatory review, slowing down the approval process.
Larry, for his part, this week tried to ease Warner board concerns by stating he would personally guarantee $40 billion of his own cash as a backstop to the deal:
“The guarantee, disclosed in a filing on Monday, seeks to allay the Warner Bros board’s doubts about Paramount’s financing and the lack of full Ellison family backing, which had pushed it toward the competing cash-and-stock offer from Netflix.”
Warner has given every indication they’re more comfortable with Netflix, and I suspect won’t be willing to soften their stance. If Warner refuses, it’s almost certain that Ellison will have Trump’s DOJ threaten fake-populist, phony antitrust action in the new year, featuring lots and lots of propaganda about how Netflix is a dire antitrust concern, but Larry Ellison’s ownership would be a gift from the heavens.
Arguably, a country with functional regulators (which we aren’t) would block all media consolidation, since these deals inevitably result in mass layoffs, higher consumer prices, less competition, and shittier overall product (example A: the last two Time Warner mergers with AT&T and Discovery).
But only one of these two options leads to the potential for Trump-allied authoritarian state television owned by a technofascist billionaire (see: Hungary), making it the bigger threat to folks who like things like Democracy. That’s not to soft sell the harms Netflix could cause, which will be ample (especially as they debase themselves to gain approval), but one path is clearly worse.
So in the absence of blocking all media consolidation (which, again, won’t happen because the U.S. is too corrupt to function), Democrats are most likely better served by finding ways to back Netflix’s play for Warner Brothers assets. I’m not sure they’re going to be strategically bright enough to realize that.
It’s worth noting that Netflix doesn’t want CNN and its sorry-ass ratings, and that CNN and Warner’s other cable channels are likely to be spun off and sold anyway, giving Larry Ellison yet another chance to acquire CNN, and like CBS, turn it into a right wing propaganda mill whose primary function will be to kiss the ass of increasingly unpopular autocrats.
Filed Under: consolidation, larry ellison
Companies: cnn, netflix, paramount, warner bros. discovery
40 Years Of Copyright Obstruction To Human Rights And Social Justice
from the forever-and-a-day dept
One of the little-known but extremely telling episodes in the history of modern copyright, discussed in Walled Culture the book (free digital versions available), concerns the Marrakesh Treaty. A post on the Corporate Europe Observatory (CEO) site from 2017 has a good summary of what the treaty is about, and why it is important:
It sets out exceptions and limits to copyright rules so that people unable to use print media (including blind, visually impaired, and dyslexic people) can access a far greater range of books and other written materials in accessible formats. These exceptions to copyright law are important in helping to combat the ‘book famine’ for print-disabled readers. The Marrakesh Treaty is particularly important in global south countries where the range of materials in an accessible format – usually expensive to produce and disseminate – can be extremely limited.
Its importance was recognised long ago, as indicated by a timeline on the Knowledge Ecology International (KEI) site:
In 1981, the governing bodies of WIPO and UNESCO agreed to create a Working Group on Access by the Visually and Auditory Handicapped to Material Reproducing Works Produced by Copyright. This group meeting took place on October 25-27, 1982 in Paris, and produced a report that included model exceptions for national copyright laws. (UNESCO/WIPO/WGH/I/3). An accessible copy of this report is available here.
And yet it was only in 2013 – 31 years after the original report – that the treaty was finally agreed. The reason for this extraordinary delay in making it easier for the visually impaired to enjoy even a fraction of the material that most have access to is simple: copyright. As KEI’s director, James Love, told Walled Culture in an interview three years ago: “the initial opposition was from the publishers, and the publishers did everything you can imagine to derail this [treaty]”. The CEO post explains why:
Industry’s lobby efforts have attempted to re-frame the Marrakesh Treaty away from being a matter of human rights, education, and social justice, towards a copyright agenda by portraying it as a threat to business’ interests.
Indeed, even industries well outside publishing lobbied hard against the treaty. For example:
Caterpillar, the machinery manufacturer, joined the campaign to oppose it, apparently convinced that the Treaty would act as a slippery slope towards weaker intellectual property rules elsewhere.
As the CEO article noted, after the Marrakesh Treaty was agreed, several EU member states insisted on it being watered down further:
contrary to the obvious benefits of the ratification and implementation of the Marrakesh Treaty for the 30 million blind or visually-impaired people in Europe (and 285 million worldwide), several EU member state governments have instead bought the business line that these issues should be viewed through the lens of copyright.
That was eight years ago. And yet – incredibly – the pushback against providing the visually impaired with at least minimal rights to convert print and digital material into forms that they could access has continued unabated. A recent post on the International Federation of Library Associations and Institutions (IFLA) blog analyses the ways in which the already diluted benefits of the Marrakesh Treaty have been diminished further:
it has become clear that there are a number of ways in which it is possible to undermine the goals and intent of the Marrakesh Treaty, ultimately limiting the progress of access to information than would otherwise be possible.
This article highlights examples from countries that are arguably getting Marrakesh implementation wrong. The list below illustrates provisions (or a lack of provisions) to avoid because they undermine the purpose of the treaty and create barriers to access for people with disabilities.
One extraordinary failure to implement the Marrakesh Treaty properly, a full 40 years after it was first discussed, is “where laws have set out that authorised entities need to be registered in order to use Marrakesh provisions, but then there is no way of registering.” According to the IFLA this is the case in Brazil and Argentina. Just slightly better is the situation where “only certain institutions and libraries should count as authorised entities.” Clearly, this “may have the effect of limiting the number of service providers, and place an additional burden on institutions.” Another problem concerns remuneration:
The Marrakesh Treaty includes an optional provision for remuneration of rightholders. This non-compulsory clause was added in order to secure support during negotiations, but undermines the Treaty’s purpose by allowing the payment of a royalty for an inaccessible work, and creates a financial and administrative burden, ultimately drawing resources away [from] services to persons with disabilities.
Germany is a disappointing example of how new barriers can be placed in the way of the visually impaired by adding unjustified and exorbitant costs:
a fee of at least €15 is charged for each transfer of a book for each individual format. Fees (approx. 15 cents) are also charged for each download or stream of a book. Additionally, fees are charged for obtaining books from other German-speaking countries and for borrowing them. This leads to considerable costs, which inevitably result in a decline in purchases and the range of services offered.
Another obstacle is the requirement in some countries for “a commercial availability check for a work in an accessible format, when the very purpose of the Marrakesh Treaty was to address a market failure.” As the IFLA post rightly points out:
A commercial availability check is unnecessary – libraries will buy books in accessible formats where they can, as it is far more cost effective to purchase the work than produce it in accessible format. Yet Canada has introduced such a provision, and indeed even requires a second check when exporting books. It is burdensome to expect a library to conduct a search in a foreign market and be 100% sure that a book is not available in a given format there. Often the information simply is not available. Such provisions therefore create unacceptable liability, chilling the sharing of books.
Finally, there are countries that have joined the Marrakesh Treaty, but have done little or nothing to implement it:
a recent piece from Bangladesh highlights how delays in reforming domestic copyright laws, coupled with underinvestment, have meant that three years on from ratifying the Treaty, persons with print disabilities are still waiting for change. Similarly in South Africa, despite a judgement from the Constitutional Court, the necessary reforms to implement the Treaty are still being held up.
The Marrakesh Treaty saga shows the copyright industry and its friends in governments around the world at their very worst. Unashamedly placing copyright’s intellectual monopoly above other fundamental human rights, these groups have selfishly done all they can to block, hinder, delay and dilute the idea that granting the visually impaired ready access to books and other material is a matter of social justice and basic compassion.
Follow me @glynmoody on Mastodon and on Bluesky. Originally posted to Walled Culture.
Filed Under: access to knowledge, copyright, human rights, marrakesh treaty, visually impaired
Companies: caterpillar
Trump Admin Reinvents US Digital Services Program After Elon Musk Fired All Their Actual Tech Experts
from the but-this-one-goes-to-11 dept
Here’s a fun game the Trump administration keeps playing: destroy a successful government program, wait a few months, then breathlessly announce you’ve “invented” the exact same thing but with obvious corruption mechanisms baked in.
Last week, the administration excitedly announced a new “Tech Force”—a program to bring tech talent into government for two-year stints to modernize federal technology. If that sounds familiar, it’s because that’s precisely what the US Digital Service (USDS) and 18F successfully did for over a decade. You know, until Elon Musk and DOGE gleefully fired the entire 18F team in March and gutted USDS into a husk of what it once was.
USDS and 18F were genuine success stories. Obama-era programs that brought engineers from Silicon Valley into government to help all Americans by modernizing creaking federal systems. Here’s how USDS described itself two years in:
In the early days, we worried if more than ten people would apply to join the team. Two years later, folks from Google, Facebook, Amazon, Twitter and the likes have joined to put their skills towards helping Veterans, students, small businesses, and all Americans.
That institutional knowledge, that decade of learning what works and what doesn’t, that careful balance between public service and private sector expertise? All gone. Torched by Musk as part of his faux “efficiency” crusade earlier this year.
And now they’re reinventing it. Badly. I used to joke that the Elon Musk Twitter era was all about throwing out all of Twitter’s carefully thought out ideas and then bringing them back in a dumber, more dangerous way. This seems like that, but in the federal government.
The United States Tech Force, announced Monday, is meant to source the artificial intelligence talent the government needs to win the global AI race and modernize the government, the administration says. The goal is to recruit an initial cohort of around 1,000 technologists who will be placed in agencies for two-year stints, potentially as soon as March.
“We need you,” said Scott Kupor, the director of the Office of Personnel Management. “The U.S. Tech Force offers the chance to build and lead projects of national importance, while creating powerful career opportunities in both public service and the private sector.”
Welcome to Temu USDS, everyone.
Same basic concept—rotate tech talent through government—but stripped of all the institutional knowledge about what actually works, run by political operatives instead of civil servants, and riddled with conflicts of interest that the original programs were specifically designed to avoid.
The especially galling part? Watching the same tech bros who helped destroy USDS and 18F now celebrate “Tech Force” as some brilliant innovation:
[Embedded posts from tech industry boosters, including a16z partners, celebrating the Tech Force announcement.]
These are the people who either stayed silent or actively cheered when Musk gutted the actual working programs. Now they’re acting like this is some breakthrough moment of government-tech collaboration. Scrolling through the boosters, it looks like every partner at a16z felt the need to support this. None of them mention that this only came after the destruction of the programs that were doing such great work over the past decade (including during the first Trump administration).
Again, conceptually, there is merit to the idea of bringing in techies to help make government work better for the public. But it seems pretty obnoxious for these tech bros to jump into this without acknowledging (1) this existed and worked really well for over a decade until (2) they and their tech bro buddy Elon went in and destroyed it all. Also, given how the Trump admin has acted towards the public for the past 11 months, pretty rich to assume anything done by this new “Tech Force” will be in the interest of the public.
The one actual “innovation” in Tech Force creates a corruption vector that should alarm anyone who cares about government integrity: companies are guaranteeing participants can return to their old jobs after their tour of duty.
USDS never needed this because it wasn’t a problem—people could always go back to industry if they wanted. What this guarantee does is fundamentally change the incentive structure. Now you have engineers building government systems who know exactly where they’ll be working in two years, and whose interests they’ll be serving. They won’t divest from their stock. They won’t sever ties with their employer. They’ll just be on “leave” while accessing sensitive government data and making technology decisions that could directly benefit their future (and current) employer.
As the NextGov piece notes, this should set off every alarm:
“My first question with any programs like this are, ‘What are the rules that are in place to guard against conflicts of interest?’” said Rob Shriver, former acting OPM director and current managing director of Civil Service Strong at Democracy Forward.
This is especially worthy of attention, he said, given DOGE’s approach to data — “coming in and taking over agency systems and accessing data without going through the regular procedures” — which has been at the center of several lawsuits.
Scott Kupor, who is running this, is a former Andreessen Horowitz partner who was there for 16 years (basically since a16z started) before taking this job. And he insists that there are no conflicts, so don’t worry about that at all:
The setup may vary by company, but the managing engineers from private companies participating in the program will “effectively take a leave of absence” to become full time government employees during the program, Kupor told reporters Monday. They won’t be required to divest from their stocks.
“We feel like we’ve run down all the various conflict issues and don’t believe that that’s actually going to be an impediment to getting people here,” said Kupor. “The huge benefit to the government will be getting people who are very skilled in the private sector at managing engineering teams.”
The idea is that the participants can return to their old jobs with new skills and expertise after working for the government, he said.
“We’ve run down all the various conflict issues”—except for the part where participants will keep their stock, maintain their guaranteed employment at private companies, and have access to sensitive government systems and data. But sure, no conflicts.
The value of tech expertise in government is real. That’s why USDS and 18F existed and succeeded for over a decade. What made those programs work was their careful construction to minimize conflicts while maximizing the transfer of knowledge and expertise.
This isn’t that. This is a hastily rebuilt version of a program they deliberately destroyed, now run by political appointees from the very industries that will benefit, with explicit mechanisms that invite corruption. They gutted the institutional knowledge, fired the people who knew how to do this right, and replaced it with a system where people from private companies get guaranteed access to government data and decision-making through employees who are explicitly planning to return to those same companies.
That doesn’t seem like innovation. It seems much more like regulatory capture with better branding and a cool “force” name.
Filed Under: 18f, corruption, doge, donald trump, elon musk, scott kupor, tech force, usds
Companies: a16z
Kansas Resident Patrick Roach Sues Verizon For Refusing To Unlock His Phone (And Wins)
from the not-all-heroes-wear-capes dept
If you’ve been around a while, you might remember that Verizon used to be completely obnoxious when it came to forcing you to use their phones and their shitty apps. At one point, Verizon wouldn’t even let you use a competing GPS mapping app, locking you to Verizon’s substandard VCAST apps. The company also adored locking you into long-term contracts and expensive phone payment plans, making it expensive, annoying, or impossible to switch carriers.
Two things changed all that. One, back in 2008, the company acquired spectrum that came with requirements that users be allowed to use the devices of their choice. And two, merger conditions were affixed to its 2021 acquisition of Tracfone. Thanks to those two events, Verizon was dragged, kicking and screaming, into a new era of openness that was of huge benefit to the public.
Enter the Trump administration. With the second Trump term taking an absolute hatchet to all consumer protection, Verizon has been lobbying the Trump administration to also eliminate phone unlocking requirements, once again making it difficult, annoying, or impossible to switch.
Under current rules, Verizon is supposed to unlock handsets 60 days after they are activated on its network. This includes both Verizon’s main brand, and its sub-brands like Straight Talk. But (correctly) confident the Trump administration won’t hold them accountable, Verizon has been refusing to unlock its phones, as Kansas resident Patrick Roach recently found out.
Roach bought a discounted iPhone 16e from Verizon’s Straight Talk earlier this year as a gift to his wife. He planned to pay a month of service, cancel, and then switch the phone to the US Mobile service they normally use. Under the rules, that was supposed to be possible. But Verizon blocked the attempt. So he sued them in small claims court, and won. From the October ruling:
“Under the KCPA [Kansas Consumer Protection Act], a consumer is not required to prove intent to defraud. The fact that after plaintiff purchased the phone, the defendant changed the requirements for unlocking it so that plaintiff could go to a different network essentially altered the nature of the device purchased.”
Before winning in court, Roach turned down a Verizon settlement offer for $600 because it would have restricted him from talking about his case openly:
“It’s just kind of slimy of them, so I feel like it deserves a spotlight,” he said. “I’m not sure with the current state of the FCC that anything would happen, but the rule of law should be respected.”
Not all heroes wear capes. Again, Verizon is currently lobbying the Trump FCC to eliminate these unlocking guidelines entirely; and, like everything else the telecom industry asks of Trump FCC boss Brendan Carr, they’re very likely to get it, shifting the wireless industry back to the shittier days of old where switching carriers was annoying and expensive. You know, to make America great again.
Filed Under: cell phones, competition, consumer protection, fcc, hardware, patrick roach, telecom, unlocking, wireless
Companies: straight talk, verizon