DOJ Admits DOGE Team Caught Sharing Social Security Data With Election Denier Group
from the found-the-waste,-fraud,-and-abuse! dept
We spent a lot of time last year calling out how dangerous it was that Elon Musk and his inexperienced 4chan-loving DOGE boys were gaining access to some of the most secure government systems. We also highlighted how it seemed likely that they were violating many laws in the process. One specific point of concern was DOGE’s desire to take control over Social Security data, something that many people warned would be abused for political reasons, in particular to make misleading or false claims about voting records.
For all the people who insisted that this was hyperbolic nonsense, and DOGE was just there to root out “waste, fraud, and abuse,” well… the DOJ last week quietly admitted that the DOGE boys almost certainly violated the Hatch Act and had given social security data to conspiracy theorists claiming Trump won the 2020 election (he did not).
Oh, and this only came out because the DOJ realized it had lied to a court (the DOJ claims it was because Social Security Administration officials had given it bad info, but the net effect is the same) and had to correct the record.
Shapiro’s previously unreported disclosure, dated Friday, came as part of a list of “corrections” to testimony by top SSA officials during last year’s legal battles over DOGE’s access to Social Security data. They revealed that DOGE team members shared data on unapproved “third-party” servers and may have accessed private information that had been ruled off-limits by a court at the time.
Shapiro said the case of the two DOGE team members appeared to undermine a previous assertion by SSA that DOGE’s work was intended to “detect fraud, waste and abuse” in Social Security and modernize the agency’s technology.
From the actual filing in the case:
Also in his March 12 declaration, Mr. Russo attested that, “[t]he overall goal of the work performed by SSA’s DOGE Team is to detect fraud, waste and abuse in SSA programs and to provide recommendations for action to the Acting Commissioner of SSA, the SSA Office of the Inspector General, and the Executive Office of the President.”….
However, SSA determined in its recent review that in March 2025, a political advocacy group contacted two members of SSA’s DOGE Team with a request to analyze state voter rolls that the advocacy group had acquired. The advocacy group’s stated aim was to find evidence of voter fraud and to overturn election results in certain States. In connection with these communications, one of the DOGE team members signed a “Voter Data Agreement,” in his capacity as an SSA employee, with the advocacy group. He sent the executed agreement to the advocacy group on March 24, 2025.
The filing goes on to admit that a Social Security Administration employee's declaration, which claimed there were safeguards in place against sharing data and that everyone had been trained not to share it, was apparently wrong.
However, SSA has learned that, beginning March 7, 2025, and continuing until March 17 (approximately one week before the TRO was entered), members of SSA’s DOGE Team were using links to share data through the third-party server “Cloudflare.” Cloudflare is not approved for storing SSA data and when used in this manner is outside SSA’s security protocols. SSA did not know, until its recent review, that DOGE Team members were using Cloudflare during this period. Because Cloudflare is a third-party entity, SSA has not been able to determine exactly what data were shared to Cloudflare or whether the data still exist on the server.
Cool cool. No big deal. DOGE boys just put incredibly private data on a third party server and no one knows what data was there or even if it’s still there.
Have I got some waste, fraud, and abuse for you to check out!
Separately, the filing reveals that Elon Musk’s right hand man, Steve Davis—the “fixer” Musk deploys across all his organizations—was copied on an email containing an encrypted file of SSA data. The filing is careful to note that DOGE itself “never had access to SSA systems of record,” but that’s a distinction without much difference when your guy is getting emailed password-protected files derived from those systems. Oh and: SSA still can’t open the file to figure out exactly what was in it.
However, SSA has determined that on March 3, 2025—three weeks prior to entry of the TRO—an SSA DOGE Team member copied Mr. Steve Davis, who was then a senior advisor to Defendant U.S. DOGE Temporary Organization, as well as a DOGE-affiliated employee at the Department of Labor (“DOL”), on an email to Department of Homeland Security (“DHS”). The email attached an encrypted and password-protected file that SSA believes contained SSA data. Despite ongoing efforts by SSA’s Chief Information Office, SSA has been unable to access the file to determine exactly what it contained. From the explanation of the attached file in the email body and based on what SSA had approved to be released to DHS, SSA believes that the encrypted attachment contained PII derived from SSA systems of record, including names and addresses of approximately 1,000 people.
Looks like some more waste, fraud, and abuse right there.
So to recap: the team that stormed in to root out “waste, fraud, and abuse” committed what looks an awful lot like actual fraud and abuse—sharing data on unauthorized servers, misleading courts, cutting deals with election conspiracy groups, and emailing around encrypted files of PII that the agency itself can’t even open anymore. All of it now documented in federal court filings—not that anyone will do anything about it. Accountability is for people who don’t have Elon Musk on speed dial.
Filed Under: doge, election denial, elon musk, privacy, private data, social security, social security administration, steve davis
Thanks To Trump, Verizon Immediately Starts Making It Harder To Switch Mobile Carriers
from the we-hate-competition dept
Last week we noted how the Trump FCC, at the direct request of wireless phone giants, destroyed popular rules making it easier and cheaper to switch wireless carriers. The rules, applied via spectrum acquisition and merger conditions, required that Verizon unlock your phone within 60 days after purchase so you could easily switch to competitors.
Verizon, as we’ve long established, hates competition, and immediately got to work lobbying the Trump administration to destroy the rules. The pay-to-play Trump administration quickly agreed, and now Verizon has started telling wireless customers they have to wait a year before switching phones after purchasing one from Verizon:
“Verizon was previously required to unlock phones automatically after 60 days due to restrictions imposed on its spectrum licenses and merger conditions that helped Verizon obtain approval of its purchase of TracFone. But an update applied today to the TracFone unlocking policy said new phones will be locked for at least a year and that each customer will have to request an unlock instead of getting it automatically.”
Again, these conditions were broadly popular and served the public interest, ensuring that it was easier for consumers to switch between our ever-consolidating, anti-competitive wireless phone giants. Verizon lobbied the FCC by repeatedly lying, without evidence, that these conditions resulted in a wave of black market phone thefts. FCC boss Brendan Carr, ever the industry lackey, parroted the claims in his rulings.
To be clear, this is, for now, only something Verizon is doing via its prepaid sub-brands, which include Straight Talk, Tracfone, Net10 Wireless, Clearway, Total Wireless, Simple Mobile, SafeLink Wireless, and Walmart Family Mobile. These brands often attract lower income customers who can least afford to be locked in by an expensive provider like this.
Verizon’s quite intentionally targeting these folks first, effectively making freedom and choice a luxury tier (much like telecom providers tried to do with privacy before U.S. government corruption discarded privacy oversight entirely).
You can, for now, still buy an unlocked phone from an independent retailer, bring it to Verizon’s main postpaid brands, and port it back out again if you’d like. But when Verizon sees limited Democrat and press backlash to this first push (guaranteed with so much else going on), it will steadily keep expanding its restrictions to include its primary brands and all unlocked phones.
I know this because I’ve covered this company for a quarter century and this company’s anti-competitive ambitions are as predictable as the tides.
Ideally, Verizon wants to return to what it considers the golden era of cellular phones: circa 2007, when carriers restricted how you could use your phone and what apps you could install (remember all the shitty VCast Verizon apps they wouldn’t let you uninstall? Or the way they’d block phone GPS hardware from working with third-party apps?). Back then, they would also tether you to one carrier via expensive long-term contracts with costly early termination fees.
If we stay on this path of zero U.S. corporate oversight, it’s all coming back, sooner or later. From there, should U.S. governance remain under corrupt authoritarian dominance, it’s only a matter of time before Verizon tries to dictate what content you can see in collaboration with the kakistocracy, thanks to the Trump administration’s destruction of popular net neutrality protections.
This has always been Verizon’s ambition as a lumbering telecom giant that can’t innovate and hates competition and government oversight. Thanks to Trump’s assault on regulators, it’s increasingly difficult to hold companies like AT&T and Verizon accountable for literally anything (see the 5th Circuit’s decision to let AT&T off the hook for lying to, and spying on, its users).
And the Trump administration’s ongoing quest to rubber stamp every merger that comes across its desk means more consolidation, and ultimately higher prices for U.S. wireless consumers who already pay some of the highest prices for mobile data in the developed world.
Verizon and other broadly despised telecoms have struck a generational blow against oversight and consumer protection across Trump’s two terms, and they intend to take full advantage of a presidency they helped purchase. All while the president informs his loyal rubes he’s a champion of affordability.
Filed Under: brendan carr, competition, fcc, phone unlocking, phones, telecom, unlocked, unlocking, wireless
Companies: verizon
The Measles Outbreak In South Carolina Is Spiraling Out Of Control
from the paging-RFK-Jr. dept
America is broken and it seems like nobody is bothering to try to repair it. That’s a general statement, to be sure, so if you need a marker to serve as a specific example of our national malfunction, the return of measles to our country fits the bill. It’s not quite as flashy as the secret police shooting citizens, of course. But I think that there is something about children with angry rashes across their necks sitting in hospital beds, or in body bags, that will have a way of clarifying the mind.
With a grifter like RFK Jr. at the helm of American health, having built a career on anti-vaxxer conspiracy theories and health misinformation, our country became a fertile host once more to this horrific disease. Kennedy’s inability to properly communicate to the nation what needs to happen, which is another concentrated MMR vaccination effort, combined with his eugenics-lite belief system on matters of health, has led us here. 2025 saw the highest number of Americans infected by measles in decades, three people died, we’re about to lose our elimination status for the disease, and an outbreak in South Carolina has us off to a rip-roaring start to 2026.
While this is largely due to the unvaccinated population among us, allowing the disease to spread where it otherwise would not, we’ve seen enough breakthrough infections that even being one of the “responsible ones” won’t necessarily keep you safe any longer. And the South Carolina outbreak of measles is officially off the rails.
A week ago, Ars Technica had an alarming post about how South Carolina saw well over a hundred new cases of measles and over 400 people quarantined in a handful of days.
Amid the outbreak, South Carolina health officials have been providing updates on cases every Tuesday and Friday. On Tuesday, state health officials reported 124 more cases since last Friday, which had 99 new cases since the previous Tuesday. On that day, January 6, officials noted a more modest increase of 26 cases, bringing the outbreak total at that point to 211 cases.
With the 3-month-old outbreak now doubled in just a week, health officials are renewing calls for people to get vaccinated against the highly infectious virus—an effort that has met with little success since October. Still, the health department is activating its mobile health unit to offer free measles-mumps-rubella (MMR) vaccinations, as well as flu vaccinations at two locations today and Thursday in the Spartanburg area, the epicenter of the outbreak.
Those same officials had another dire warning: the outbreak had grown so big that they no longer had the ability to perform contact tracing. Where the disease would go next was anyone’s guess.
The outbreak is still growing to date. At least 88 more cases of measles were recorded in South Carolina in less than a week since the Ars post. Schools remain the most problematic vector, but it’s no longer just elementary and secondary schools that are in trouble. Colleges have now joined the party.
There are at least 15 schools — including elementary, middle and high schools — which currently have students in quarantine.
Health officials also warned of exposures at Clemson University and Anderson University, both located in northwestern South Carolina, which have a combined 88 students in quarantine.
While these numbers from South Carolina are publicly stated, the CDC site tallying measles infections apparently can’t keep up. The last time the numbers were updated there was January 14th, but even those numbers appear to be incorrectly low. The site also announces that it is moving its reporting schedule from every Wednesday to Fridays, which is your classic “bad news dumping ground” day.
But that change won’t keep the news of how the South Carolina outbreak has gone national from being reported.
Measles continue to spread in the Upstate but now, health leaders in Washington state say the outbreak here in South Carolina is connected to cases on the west coast. The Snohomish County Health Department confirmed three cases in children who were exposed to a contagious family visiting from South Carolina.
Previously, the Snohomish County Health Department and Public Health – Seattle & King County were notified that three members of a South Carolina family, one adult and two children, were infectious while visiting King and Snohomish counties from Dec. 27, 2025 through Jan. 1, 2026. The family visited multiple locations in Everett, Marysville and Mukilteo while contagious before being diagnosed. They also traveled through Seattle-Tacoma International Airport and visited a car rental facility near the airport.
In any sane administration, a measles task force would be mobilized to build out a strategy to contain these outbreaks, to communicate action plans to the public, and to execute on actions designed to keep the public healthy. Trump, RFK Jr., and the health agencies they’re in charge of are barely talking about this. They are ignoring the problem and that will ensure that it becomes much, much worse.
Impeachments are what’s necessary here, starting with Kennedy, who is clearly asleep at the wheel. A feckless Congress unwilling to do its job should have members tossed out on their ass. Staff at HHS and its child agencies should be in full revolt, sounding the alarm.
Measles is no fucking joke, folks. But our government currently is.
Filed Under: conspiracy theories, health and human services, make america sick again, measles, rfk jr., south carolina, vaccines
Congress Wants To Hand Your Parenting To Big Tech
from the ted-cruz-wants-to-be-everyone's-daddy dept
Lawmakers in Washington are once again focusing on kids, screens, and mental health. But according to Congress, Big Tech is somehow both the problem and the solution. The Senate Commerce Committee recently held a hearing on “examining the effect of technology on America’s youth.” Witnesses warned about “addictive” online content, mental health, and kids spending too much time buried in screens. At the center of the debate is a bill from Sens. Ted Cruz (R-TX) and Brian Schatz (D-HI) called the Kids Off Social Media Act (KOSMA), which they say will protect children and “empower parents.”
That’s a reasonable goal, especially at a time when many parents feel overwhelmed and nervous about how much time their kids spend on screens. But while the bill’s press release contains soothing language, KOSMA doesn’t actually give parents more control.
Instead of respecting how most parents guide their kids towards healthy and educational content, KOSMA hands the control panel to Big Tech. That’s right—this bill would take power away from parents, and hand it over to the companies that lawmakers say are the problem.
Kids Under 13 Are Already Banned From Social Media
One of the main promises of KOSMA is simple and dramatic: it would ban kids under 13 from social media. Based on the language of the bill’s sponsors, one might think that’s a big change, and that today’s rules let kids wander freely onto social media sites. But that’s not the case.
Every major platform already draws the same line: kids under 13 cannot have an account. Facebook, Instagram, TikTok, X, YouTube, Snapchat, Discord, Spotify, and even blogging platforms like WordPress all say essentially the same thing—if you’re under 13, you’re not allowed. That age line has been there for many years, mostly because of how online services comply with a federal privacy law called COPPA.
Of course, everyone knows many kids under 13 are on these sites anyway. The real question is how and why they get access.
Most Social Media Use By Younger Kids Is Family-Mediated
If lawmakers picture under-13 social media use as a bunch of kids lying about their age and sneaking onto apps behind their parents’ backs, they’ve got it wrong. Serious studies that have looked at this all find the opposite: most under-13 use is out in the open, with parents’ knowledge, and often with their direct help.
A large national study published last year in Academic Pediatrics found that 63.8% of under-13s have a social media account, but only 5.4% of them said they were keeping one secret from their parents. That means roughly 90% of kids under 13 who are on social media aren’t hiding it at all. Their parents know. (For kids aged thirteen and over, the “secret account” number is almost as low, at 6.9%.)
Earlier research in the U.S. found the same pattern. In a well-known study of Facebook use by 10-to-14-year-olds, researchers found that about 70% of parents said they actually helped create their child’s account, and between 82% and 95% knew the account existed. Again, this wasn’t kids sneaking around. It was families making a decision together.
A 2022 study by the UK’s media regulator Ofcom points in the same direction, finding that up to two-thirds of social media users below the age of thirteen had direct help from a parent or guardian getting onto the platform.
The typical under-13 social media user is not a sneaky kid. It’s a family making a decision together.
KOSMA Forces Platforms To Override Families
This bill doesn’t just set an age rule. It creates a legal duty for platforms to police families.
Section 103(b) of the bill is blunt: if a platform knows a user is under 13, it “shall terminate any existing account or profile” belonging to that user. And “knows” doesn’t just mean someone admits their age. The bill defines knowledge to include what is “fairly implied on the basis of objective circumstances”—in other words, what a reasonable person would conclude from how the account is being used. The reality of how services would comply with KOSMA is clear: rather than risk liability for how they should have known a user was under 13, they will require all users to prove their age to ensure that they block anyone under 13.
KOSMA contains no exceptions for parental consent, for family accounts, or for educational or supervised use. The vast majority of people policed by this bill won’t be kids sneaking around—it will be minors who are following their parents’ guidance, and the parents themselves.
Imagine a child using their parent’s YouTube account to watch science videos about how a volcano works. If they were to leave a comment saying, “Cool video—I’ll show this to my 6th grade teacher!” and YouTube becomes aware of the comment, the platform now has clear signals that a child is using that account. It doesn’t matter whether the parent gave permission. Under KOSMA, the company is legally required to act. To avoid violating KOSMA, it would likely lock, suspend, or terminate the account, or demand proof it belongs to an adult. That proof would likely mean asking for a scan of a government ID, biometric data, or some other form of intrusive verification, all to keep what is essentially a “family” account from being shut down.
Violations of KOSMA are enforced by the FTC and state attorneys general. That’s more than enough legal risk to make platforms err on the side of cutting people off.
Platforms have no way to remove “just the kid” from a shared account. Their tools are blunt: freeze it, verify it, or delete it. Which means that even when a parent has explicitly approved and supervised their child’s use, KOSMA forces Big Tech to override that family decision.
Your Family, Their Algorithms
KOSMA doesn’t appoint a neutral referee. Under the law, companies like Google (YouTube), Meta (Facebook and Instagram), TikTok, Spotify, X, and Discord will become the ones who decide whose account survives, whose account gets locked, who has to upload ID, and whose family loses access altogether. They won’t be doing this because they want to—but because Congress is threatening them with legal liability if they don’t.
These companies don’t know your family or your rules. They only know what their algorithms infer. Under KOSMA, those inferences carry the force of law. Rather than parents or teachers, decisions about who can be online, and for what purpose, will be made by corporate compliance teams and automated detection systems.
What Families Lose
This debate isn’t really about TikTok trends or doomscrolling. It’s about all the ordinary, boring, parent-guided uses of the modern internet. It’s about a kid watching “How volcanoes work” on regular YouTube, instead of the stripped-down YouTube Kids. It’s about using a shared Spotify account to listen to music a parent already approves. It’s about piano lessons from a teacher who makes her living from YouTube ads.
These aren’t loopholes. They’re how parenting works in the digital age. Parents increasingly filter, supervise, and, usually, decide together with their kids. KOSMA will lead to more locked accounts, and more parents submitting to face scans and ID checks. It will also lead to more power concentrated in the hands of the companies Congress claims to distrust.
What Can Be Done Instead
KOSMA also includes separate restrictions on how platforms can use algorithms for users aged 13 to 17. Those raise their own serious questions about speech, privacy, and how online services work, and need debate and scrutiny as well. But they don’t change the core problem here: this bill hands control over children’s online lives to Big Tech.
If Congress really wants to help families, it should start with something much simpler and much more effective: strong privacy protections for everyone. Limits on data collection, restrictions on behavioral tracking, and rules that apply to adults as well as kids would do far more to reduce harmful incentives than deputizing companies to guess how old your child is and shut them out.
But if lawmakers aren’t ready to do that, they should at least drop KOSMA and start over. A law that treats ordinary parenting as a compliance problem is not protecting families—it’s undermining them.
Parents don’t need Big Tech to replace them. They need laws that respect how families actually work.
Republished from the EFF’s Deeplinks blog.
Filed Under: brian schatz, coppa, kids, kosma, moral panic, parental controls, social media, ted cruz
Techdirt Podcast Episode 441: A Manifesto To Build A Better Internet
from the let-it-resonate dept
Last month, we dedicated an episode of the podcast to discussing our recently announced project to push for a better internet, The Resonant Computing Manifesto. This week, we’ve got a cross-post episode that serves as a followup to that discussion. Mike recently joined Charlie Warzel’s Galaxy Brain podcast along with manifesto organizer Alex Komoroske and contributor Zoe Weinberg to discuss the idea of resonant computing, and you can listen to the whole discussion here on this week’s episode.
You can also download this episode directly in MP3 format.
Follow the Techdirt Podcast on Soundcloud, subscribe via Apple Podcasts or Spotify, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
Filed Under: alex komoroske, charlie warzel, podcast, resonant computing, zoe weinberg
Two Major Studies, 125,000 Kids: The Social Media Panic Doesn’t Hold Up
from the yet-more-evidence-that-haidt-is-wrong dept
For years now, we’ve been repeatedly pointing out that the “social media is destroying kids” narrative, popularized by Jonathan Haidt and others, has been built on a foundation of shaky, often contradictory research. We’ve noted that the actual data is far more nuanced than the moral panic suggests, and that policy responses built on that panic might end up causing more harm than they prevent.
Well, here come two massive new studies—one from Australia, one from the UK—that land like a sledgehammer on Haidt’s narrative—and, perhaps more importantly, on Australia’s much-celebrated social media ban for kids under 16.
The Australian study, published in JAMA Pediatrics, followed over 100,000 Australian adolescents across three years and found something that should give every policymaker pause: the relationship between social media use and well-being isn’t linear. It’s U-shaped. Perhaps most surprisingly, kids who use social media moderately have the best outcomes. Kids who use it excessively have worse outcomes. But here’s the kicker: kids who don’t use it at all also have worse outcomes.
This isn’t to say that all kids should use social media. Unlike some others, we’re not saying any of this shows that social media causes good or bad health outcomes. We’re pointing out that the claims of inherent harm seem not just overblown, but wrong.
From the study’s key findings:
A U-shaped association emerged where moderate social media use was associated with the best well-being outcomes, while both no use and highest use were associated with poorer well-being. For girls, moderate use became most favorable from middle adolescence onward, while for boys, no use became increasingly problematic from midadolescence, exceeding risks of high use by late adolescence.
This seems like pretty strong evidence that Haidt’s claims of inherent harm are not well-founded, and the policy proposals to ban kids entirely from social media are a bad idea. For older teenage boys, having no social media was associated with worse outcomes than having too much of it. The study found that nonusers in grades 10-12 had significantly higher odds of low well-being compared to moderate users—with boys showing an odds ratio of 3.00 and girls at 1.79.
Meanwhile, researchers at the University of Manchester just published a separate study in the Journal of Public Health that followed 25,000 11- to 14-year-olds over three school years. Their conclusion? Screen time spent on social media or gaming does not cause mental health problems in teenagers. At all.
From the Guardian’s coverage of the UK study:
The study found no evidence for boys or girls that heavier social media use or more frequent gaming increased teenagers’ symptoms of anxiety or depression over the following year. Increases in girls’ and boys’ social media use from year 8 to year 9 and from year 9 to year 10 had zero detrimental impact on their mental health the following year.
Zero. Not “small.” Not “modest.” Zero.
The UK researchers also examined whether how kids use social media matters—active chatting versus passive scrolling. The answer? Neither appeared to drive mental health difficulties. As lead author Dr. Qiqi Cheng put it:
We know families are worried, but our results do not support the idea that simply spending time on social media or gaming leads to mental health problems – the story is far more complex than that.
The Australian researchers, to their credit, are appropriately cautious about causation:
While heavy use was associated with poorer well-being and abstinence sometimes coincided with less favorable outcomes, these findings are observational and should be interpreted cautiously.
But while researchers urge caution, politicians have been happy to sprint ahead.
Australia leapt into the fray, and the ban has so far proven to be a complete mess.
The entire premise of Australia’s ban—and similar proposals floating around in various US states and across Europe—is that social media is inherently harmful to young people, and that removing access is protective. But both studies suggest the reality is far more complicated. The Australian researchers explicitly call this out:
Social media’s association with adolescent well-being is complex and nonlinear, suggesting that both abstinence and excessive use can be problematic depending on developmental stage and sex.
In other words: Australia’s ban may be taking kids who would have been moderate users with good outcomes and forcing them into the “no use” category that the study associates with worse well-being. It’s potentially the worst of all possible policy outcomes.
The UK study’s co-author, Prof. Neil Humphrey, reinforced this point:
Our findings tell us that young people’s choices around social media and gaming may be shaped by how they’re feeling but not necessarily the other way around. Rather than blaming technology itself, we need to pay attention to what young people are doing online, who they’re connecting with and how supported they feel in their daily lives.
That’s a crucial distinction that the moral panic crowd keeps glossing over: correlation running in the opposite direction than assumed. Kids who are already struggling, and who aren’t getting the support they need, might use social media differently—not the other way around.
This shouldn’t be surprising to anyone who has been paying attention. We’ve covered study after study showing that the relationship between social media and teen mental health is complicated, context-dependent, and nowhere near as clear-cut as Haidt’s “The Anxious Generation” would have you believe. As we’ve noted before, correlation is not causation, and the timing of teen mental health declines doesn’t actually line up neatly with smartphone adoption the way the narrative claims.
But nuance doesn’t make for good headlines or popular books. “Social Media Is Complicated And The Effects Depend On How You Use It, Your Age, Your Sex, And A Bunch Of Other Factors” doesn’t quite have the same ring as “Smartphones Destroyed A Generation.”
No one’s beating down my door to write a book detailing the trade-offs and nuances. Instead, Haidt’s book remains on the NY Times bestseller list almost two years after being published.
The Australian study also highlights something else that should be obvious but apparently needs repeating: social media serves genuine social functions for teenagers. Being completely cut off from the platforms where your peers are socializing, sharing, and connecting has costs. The researchers note:
Heavy use has been associated with distress, while abstinence may cause missed connections.
This is what we’ve been saying forever. These platforms aren’t just “distraction machines” or “attention hijackers” or whatever scary framing is popular this week. They’re where social life happens for a lot of young people. Cutting kids off entirely doesn’t return them to some idyllic pre-digital social existence. It cuts them off from their actual social world.
Both sets of researchers make the same point: online experiences aren’t inherently harmless—hurtful messages, online pressures, and extreme content can have real effects. But blunt instruments like time-based restrictions or outright bans completely miss the target, and are unlikely to help those who need it most. The Australian authors recommend “promotion of balanced and purposeful digital engagement as part of a broader strategy.”
That’s… actually sensible policy advice? Based on actual evidence?
Imagine that.
Meanwhile, Australia is out there celebrating how many accounts it’s deleted, tech companies are scrambling to comply with fines of up to $49.5 million, the UK is actively considering following Australia’s lead, and policymakers around the world are looking at Australia as a model to follow.
Maybe—just maybe—they should look at the actual research coming out of Australia and the UK instead.
Filed Under: jonathan haidt, kids, mental health, research, social media
Evil ICE Fucks Ate Lunch At A Mexican Restaurant Just So They Could Come Back And Detain The People Who Fed Them
from the bitchass-government dept
Do you still want to cling to this pretense, Trump supporters? Do you still want to pretend ICE efforts are targeting “the worst of the worst?” Are you just going to sit there and mumble some incomprehensible stuff about “respecting the laws?”
Go ahead. Do it, you cowards. This is exactly what you voted for, even if it now makes you a bit queasy. Just sit there and soak in it. You are who you support, even if you never thought it would go this far.
“Worst of the worst,” Trump’s parrots repeat on blast. “This one time we caught a guy who did actual crimes,” say spokespeople defending whatever the latest hideous violation of the social contract (if not actual constitutional rights) a federal agent has performed. “Targeted investigation/stop” say the enablers, even when it’s just officers turning white nationalism into Official Government Policy. “Brown people need to be gone” is the end game. Full stop.
Here’s where we’re at in Minnesota, where ICE officers are being shamed into retreat on the regular, punctuated by the occasional revenge killing of mouthy US citizens.
I don’t want you MAGA freaks to tell me you’re OK with this. I want you to tell me why.
Federal agents detained three workers from a family-owned Mexican restaurant in Willmar, Minn., on Jan. 15, hours after four agents ate lunch there.
Does that seem innocuous? Does this seem like some plausible deniability is in play here? Well, disabuse yourself of those notions. This is how it went down.
The arrest happened around 8:30 p.m. near a Lutheran church and Willmar Middle School as agents followed the workers after they closed up for the night. A handful of bystanders blew whistles and shouted at agents as they detained the people. “Would your mama be proud of you right now?” one of the bystanders asked.
Nice. Is this what you want from a presidential administration? Or would you rather complain ICE officers have been treated unfairly if people refuse to feed or house them, knowing full well that doing either of these things will turn their employees into targets?
To be sure, the meal wasn’t a meal. It was half-stakeout, half-intimidation.
An eyewitness who declined to give a name for fear of retribution, told the Minnesota Star Tribune that four ICE agents sat in a booth for a meal at El Tapatio restaurant a little before 3 p.m. Staff at the restaurant were frightened, said the eyewitness, who shared pictures from the restaurant as well as video of the arrest.
I’m not saying ICE officers shouldn’t be able to eat at ethnic restaurants. I am, however, saying that they definitely shouldn’t because everyone is going to think the officers are there for anything but the food. And I do believe any minority business owner should be able to refuse service to ICE officers who wander in under the pretense of buying a meal. The end result is going to be the same whether or not you decide to engage with this pretense. You’re getting raided either way. May as well deny them the meal.
Especially if ICE and the DHS are just going to lie about what happened. Here’s what eyewitnesses, business owners, and local journalists said about this display of ICE shittiness:
El Tapatio Mexican Restaurant closed after WCCO confirmed agents visited the spot for lunch and later returned, detaining its owners and a dishwasher nearby after they had closed early due to the federal law enforcement’s previous appearance.
And here’s the DHS statement, which pretends ICE officers didn’t eat a meal at a restaurant and then return a few hours later to detain employees when they left the building:
“On January 14, ICE officers conducted surveillance of a target, an illegal alien from Mexico. Officers observed that the target’s vehicle was outside of a local business and positively identified him as the target while inside the business. Following the positive identification of the target, officers then conducted a vehicle stop later in the day and apprehended the target and two additional illegal aliens who were in the car, including one who had a final order of removal from an immigration judge.”
Nope. I don’t care what the ICE apologists will say about this. These narratives have places where they overlap but it’s impossible to believe this went down exactly like the government said it did. These officers picked out an ethnic restaurant, were served by an intimidated staff, and then hung around to catch any stragglers leaving the business that previously had graciously served them, despite the threat they posed.
Abolish ICE. It’s no longer just a catchy phrase to shout during protests. It’s an imperative. If we don’t stop it now, it will only become even worse and even more difficult to remove. Treat ICE like the tumor it is. Pretending it’s MRSA gives it more power than it should ever be allowed to have.
Filed Under: bigots, dhs, donald trump, ice, kristi noem, mass deportation, minnesota, trump administration, willmar
Daily Deal: PiCar-X Smart Video Robot Car Kit for Raspberry Pi 4
from the good-deals-on-cool-stuff dept
Dive into the world of robotics, programming, and electronics with the PiCar-X, an engaging and versatile smart car designed for learners from elementary school to advanced hobbyists. Combining powerful features, exceptional quality, and a cool design, this robot car kit delivers an engaging learning experience in robotics, AI, and programming. Beyond being an educational tool, its powerful Robot Hat provides abundant resources for you to design and bring to life your projects. Plus, it comes enriched with 15 comprehensive video tutorials, guiding you through each step of discovery and innovation. Embark on a journey of discovery and creativity with PiCar-X, where young learners become budding innovators! Without the Raspberry Pi board, it’s on sale for $80. With an RPi Zero 2W + 32GB, it’s on sale for $110. With an RPi 4 2GB + 32GB, it’s on sale for $141.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Filed Under: daily deal
Rand Paul Only Wants Google To Be The Arbiter Of Truth When The Videos Are About Him
from the your-own-bill-would-have-made-your-actions-illegal dept
Just a year and a half ago, Senator Rand Paul sponsored a bill that would make it illegal for federal government employees to ask internet companies to remove any speech. Now, in a NY Post op-ed, Paul proudly announces that he did exactly that—formally contacting Google executives to demand they remove a video he didn’t like.
The video apparently (falsely) claims Paul took money from Nicolas Maduro, the former Venezuelan President the US recently kidnapped. And Paul is furious that YouTube wouldn’t take it down for him.
But the straw that broke the camel’s back came this week when I notified Google executives that they were hosting a video of a woman posing as a newscaster posing in a fake news studio explaining that “Rand Paul is taking money from the Maduro regime.”
I’ve formally notified Google that this video is unsupported by facts, defames me, harasses me and now endangers my life.
Google responded that they don’t investigate the truth of accusations . . . and refused to take down the video.
Let’s pause here. Senator Paul—a sitting U.S. Senator—”formally notified” Google executives that they needed to remove content. Under his own proposed legislation, that would be illegal. His bill was explicitly designed to prevent government officials from pressuring platforms about speech. And yet here he is, doing exactly that.
This is also notably closer to actual government jawboning than most of what the Biden administration was accused of in the Murthy v. Missouri case—where the Supreme Court found no First Amendment violation because platforms felt free to say no. Paul, a Senator with legislative power over these companies, is “formally notifying” them of what he wants removed, and is now saying that Google’s refusal to do so means they should lose Section 230 protection. Remember, the “smoking gun” in the Murthy case was supposedly Biden officials (and Biden himself) threatening to remove Section 230 if the tech platforms didn’t remove content they didn’t like.
Rand Paul was furious about that, and his bill was supposedly a direct response to the Murthy ruling, meant to make clear that (1) no government official should ever demand content be taken down and (2) threatening to pass legislation to punish companies for their refusal to moderate content would also violate the law.
And here he’s doing both.
But it gets worse. Buried in the third-to-last paragraph of Paul’s op-ed is this remarkable admission:
Though Google refused to remove the defamatory content, the individual who posted the video finally took down the video under threat of legal penalty.
Wait. So the system worked exactly as designed? Paul threatened legal action against the person who actually created the content, and they took it down? That’s… that’s the whole point of Section 230. Liability attaches to the speaker, not the host. The creator is responsible. And when threatened with actual legal consequences, they removed the video.
So what, exactly, is Paul complaining about?!? He got the outcome he wanted through the mechanism that Section 230 preserved for him: the ability to bring legal action against the speaker. But instead of acknowledging that the law worked, he’s using this as his justification for destroying it.
Paul is a public figure. He has access to pretty much all the media he wants. If he wanted to use the famous “marketplace of ideas” he so frequently invokes to debunk a nonsense lie about him and Maduro, he was free to do that. If the video was actually defamatory, he could sue the creator—which he apparently threatened to do, and it worked! Instead, he wants to tear down the entire legal framework because YouTube wouldn’t do his bidding, even though the video was already taken down.
The Arbiter of Truth Hypocrisy
Here’s where Paul’s position becomes truly incoherent.
I asked one of Google’s executives what happens to the small town mayor whose enemies maliciously and without evidence, post that he is a pedophile on YouTube? Would that be OK?
The executive responded that YouTube does not monitor their content for truth. But how would that small town mayor ever get his or her reputation back?
Just a few years ago, Rand Paul was apoplectic that YouTube tried to determine whether content—specifically about COVID-19—was true or not. He thought it was terrible that YouTube would dare to be the arbiter of truth, and he whined about it at length.
Now he’s demanding they be the arbiter of truth and remove one video because he says it’s false.
Paul even acknowledges this contradiction in his own op-ed, apparently without realizing it:
Interestingly, Google says it doesn’t assess the truth of the content it hosts, but throughout the pandemic they removed content that they perceived as untrue, such as skepticism toward vaccines, allegations that the pandemic originated in a Wuhan lab, and my assertion that cloth masks don’t prevent transmission.
Yes. And you screamed bloody murder about it. You insisted they should never do that. You built your entire position around the idea that platforms shouldn’t be deciding what’s true. And, with the re-election of Donald Trump, the big tech platforms all bent the knee and said they’d stop being arbiters of truth (even as it was legal for them to do so).
And so they stopped. And now you’re furious that they won’t make an exception for you.
Doesn’t that seem just a bit fucking hypocritical and entitled?
The “It’s Their Property” Problem
Paul’s real complaint—buried under all the high-minded rhetoric about defamation—is that Google makes its own decisions:
So, Google and YouTube not only choose to moderate speech they don’t like, but they also will remove speeches from the Senate floor despite such speeches being specifically protected by the Constitution.
Google’s defense of speech appears to be limited to defense of speech they agree with.
Yeah, dude. That’s how private property works. They get to decide what they host and what they don’t. It’s also protected by their First Amendment rights. Compelling them to host (or not host) speech you agree or disagree with is not a remedy available to you, Senator.
Paul continues:
Part of the liability protection granted internet platforms, section 230(c)(2), specifically allows companies to take down “harassing” content. This gives the companies wide leeway to take down defamatory content. Thus far, the companies have chosen to spend considerable time and money to take down content they politically disagree with yet leave content that is quite obviously defamatory. So Google does not have a blanket policy of refraining to evaluate truth. Google chooses to evaluate what it believes to be true when it is convenient and consistent with its own particular biases.
He says this as if it’s controversial. It’s not. It’s exactly how editorial discretion works. The company gets to make its own editorial decisions. You don’t have to like those decisions. But demanding they make different ones, and threatening to strip their legal protections if they don’t, is a government official using state power to coerce speech decisions.
You know, the thing Paul claimed to be against.
I think Google is, or should be, liable for hosting this defamatory video that accuses me of treason, at least from the point in time when Google was made aware of the defamation and danger.
Again: you already threatened the creator, and they took it down. The remedy worked. You used it successfully.
And if Paul’s standard is “Google becomes liable once made aware,” then anyone who wants content removed will just claim it’s defamatory and dangerous. How is this different from the COVID videos Paul was so mad they removed? People told Google those were false and dangerous, Google removed them, and Paul was furious that they acted after being “made aware” of allegedly false and dangerous content.
Now Google is doing exactly what Paul demanded—not removing content based on mere claims of falsity or danger—and he’s still mad at them.
The Section 230 Threat
So what’s Paul’s solution? Threaten to remove Section 230:
It is particularly galling that, even when informed of the death threats stemming from the unsubstantiated and defamatory allegations, Google refused to evaluate the truth of what it was hosting despite its widespread practice of evaluating and removing other content for perceived lack of truthfulness.
Remember when MAGA world insisted that Biden administration officials threatening platforms’ Section 230 protections was unconstitutional coercion? Remember how that was supposedly the worst violation of the First Amendment imaginable?
Rand Paul is now doing the same thing. A sitting Senator, using his platform and his legislative power, threatening to strip legal protections from a company because they won’t remove content he personally dislikes.
Paul literally told these platforms it wasn’t their job to determine truth or falsity. He literally sponsored a bill to prevent government officials from pressuring platforms about content. And now he’s doing exactly what he said was wrong—and threatening consequences if they don’t comply.
He didn’t “change his mind” on Section 230. He just revealed that he never had a principled position in the first place.
Paul supported Section 230 when he thought it meant platforms would leave up content he liked. He sponsored anti-jawboning legislation when he thought it would stop people he disagreed with from pressuring platforms. But the moment the system produces an outcome he doesn’t like—even though it worked exactly as designed and the video came down anyway—he’s ready to burn the whole thing down.
What is it with Senators and their thin skins? A few months ago we wrote about Senator Amy Klobuchar pressing for an obviously unconstitutional law against deepfakes after someone made an obviously fake satirical video about her. Now Paul joins the club: Senators who want to remake internet law because someone was mean to them online.
The video’s already down, Senator. You won. Maybe take the win instead of trying to burn down the open internet because Google wouldn’t do you a personal favor (the same favor you wanted to make illegal).
Filed Under: 1st amendment, defamation, editorial discretion, free speech, hypocrite, jawboning, liability, principles, rand paul, section 230
Companies: google, youtube
