Brendan Carr Crafting ‘Patriotic’ Call Center Onshoring Plan To Provide Cover For Mass Looming Telecom Layoffs
from the corruption-is-patriotic dept
When he’s not busy trampling free speech, crushing the First Amendment, and dismantling media consolidation limits and consumer protection standards, Brendan Carr has other hobbies. Like helping the telecom industry patriotically sell a brutal coming wave of new layoffs caused by the kind of industry consolidation he regularly rubber stamps.
Carr recently began circulating plans for something he claims will restrict U.S. telecom companies’ use of foreign call centers and require foreign-based customer service workers to be proficient in American Standard English. The plan is vague, but Reuters unskeptically frames it as a good faith effort to protect U.S. consumer privacy, improve customer service, and protect Americans from the scourge of foreign accents:
“Carr noted that nearly 70% of U.S. businesses outsource at least one department, including customer service and call center operations, to overseas locations.
“As a result, too many Americans have struggled to resolve an issue with a representative due to cultural and language barriers,” Carr said, adding foreign customer service centers “also raise concerns about protecting consumers’ personal information.”
What is Carr really up to here? I suspect he’s working closely with U.S. telecoms to craft pseudo-patriotic/nationalistic cover for another brutal round of layoffs. Some of those layoffs will be caused by AI, but much of the damage will have been caused by Carr’s love of rubber stamping harmful telecom industry consolidation.
1. Carr likes to use cybersecurity as a bogeyman when it’s convenient for something unpopular he’s trying to help industry sell.
2. Then, with his other hand, Carr is busy making U.S. consumers less safe and secure by gutting functional oversight of giant telecoms (despite the recent massive Salt Typhoon hack by China).
3. It’s also not really clear the FCC even has this authority, especially in the Trump era, which has seen the Trump courts take an absolutely brutal hatchet to regulatory independence. This sudden micromanagement of telecom support runs contrary to Carr’s “light regulatory touch” rhetoric. It’s also worth noting that a lot of telecoms, like Charter, already employ mostly U.S. support agents.
But here’s the more important thing.
I’ve covered Brendan Carr probably longer and more extensively than pretty much anybody alive. And I can tell you, with 100% certainty, that Carr doesn’t do anything that’s just inherently in the public interest. That’s simply not who he is.
He’s always working an angle for industry or large companies, usually media and telecom giants. There’s just no evidence that he’s a good faith operator in any of the arenas Reuters gives him unearned credibility for, and his ethics and principles, as we’ve seen repeatedly, are not consistent.
So I really doubt this has anything to actually do with improving customer service, or holding telecoms accountable for shoddy overseas support. I suspect he’s cooking up a stage play.

We’ve long noted how these consolidated regional telecom monopolies have some of the worst customer service ratings of any industry in America (which is truly saying something). Maybe AI will improve some aspects of that, but as we’ve seen in other arenas where AI is layered on top of very broken sectors (journalism, health insurance) by unethical executives, the end result isn’t particularly great.
If you don’t fix the underlying monopolization, you can’t fix the symptoms of monopolization, which generally are high prices, spotty service, slow speeds, and abysmal customer service. Layer AI on top of a broken industry, and you usually get a badly automated broken industry.
It will be worth keeping an eye on Carr’s final proposed plan. But I suspect it mostly involves him working closely with telecom giants to put a nationalistic, racist veneer on looming plans to dramatically accelerate layoffs in a telecom sector that’s already seen massive workforce reductions, largely due to the mindless consolidation Carr regularly rubber stamps.
Filed Under: brendan carr, call centers, customer service, fcc, offshoring, onshoring, telecom, wireless
Greater Than Zero:
The Anti-AI Pushback On Gaming Preservation Efforts Makes No Sense
from the the-enemy-of-the-good dept
There is an old axiom you will have heard before: don’t let the perfect be the enemy of the good. If we wanted to boil this down to a math equation, it might be described as something like: 0 < any positive integer. It’s not a difficult concept to grasp, typically, until you add a dash of near-religious ideology into the equation. And that’s where the anti-AI crowd comes in.
Dustin Hubbard heads up Gaming Alexandria, a site dedicated to the preservation of obscure corners of video game history. Rather than the games themselves, Gaming Alexandria focuses its efforts on the media surrounding those games, such as manuals, box art, and old gaming journalism. To that end, Hubbard’s group has amassed an impressive number of Japanese magazine scans throughout the years. To make this content useful to researchers elsewhere, he built a low-footprint app to make those scans searchable and, more importantly, to translate them. A Patreon page and subscriptions partially funded all of this.
And that’s what had Hubbard issuing apologies over this past weekend.
A day after that project went public, though, Hubbard was issuing an apology to many members of the Gaming Alexandria community who loudly objected to the use of Patreon funds for an error-prone AI-powered translation effort. The hubbub highlights just how controversial AI tools remain for many online communities, even as many see them as ways to maximize limited funds and man-hours.
“I sincerely apologize,” Hubbard wrote in his apology post. “My entire preservation philosophy has been to get people access to things we’ve never had access to before. I felt this project was a good step towards that, but I should have taken more into consideration the issues with AI.”
And this is where we enter the realm of the silly. I’m not some AI evangelist. I fully recognize that there are errors and other problems with AI… and I imagine there always will be, to some extent. AI is not always, or perhaps even mostly, the right tool to use. Nor will its benefits always outweigh the problems it creates for us human beings.
But a positive number is greater than zero. This was a tool that suddenly made all of this cultural content accessible to a wider range of people. Before, it was unavailable to anyone without a high level of knowledge of the Japanese language. Translation errors happen with human translators, too. We need only look at ancient religious texts, and the very real wars started over their translations, to understand that.
Hubbard himself attempted to make this point over the weekend.
Writing on Patreon this weekend, Hubbard said he has long been tinkering with an improved automated OCR and translation process that could help turn more of those magazine scans into useful tools for Western researchers. And when he put Google’s Gemini AI model to the task recently, he said he was “blown away” by the results. While he still recommended using a professional human translator before citing these magazines in any scholarly research, he said the output from the Gemini AI tool “gets you a large percentage of the way there quickly.”
Inspired by those results, Hubbard set to work on a self-described “vibe coded” interface to view the original PDF scans alongside their AI-generated text translations for easy comparison and editing. The result was the Gaming Alexandria Researcher tool, posted to GitHub on Friday and shared with the site’s Patreon backers as a “beta” on Saturday. The tool, which runs locally on Windows, Mac, or Linux, can search, download, and edit Gaming Alexandria’s files from the cloud or sort through local files stored on your own machine.
“This app has been something I never would have dreamed could exist,” Hubbard enthused. “Now I can finally read and enjoy these Japanese magazines I’ve been scanning for years. A large part of that is due to your believing in my work and funding me so thank you so much for that.”
The negative responses he got for all of this are wild. There were calls to boycott the project. Calls to rescind Patreon subscriptions. Max Nichols, a game designer, canceled his own Patreon membership and decried the project as “worthless and destructive”, likening any output generated using AI-based translations to “looking at history through a clownhouse mirror.”
I would argue that I’d rather get that look than get no look at all. I’d also argue that we need to see very specific examples of AI-created translation errors to understand just how grounded these criticisms are in reality, versus all of this being simple overstatement.
Some fans of the site, at least, managed to understand the context here.
For some supporters, though, using machine translations—including ones aided by AI models—is a practical necessity given the size of the task at hand. “There’s no world in which they could ever get hundreds of thousands of pages translated by hand,” game preservationist Chris Chapman wrote on social media. “Error-prone searchability is more useful to more people than none at all.”
“Famitsu alone is over 1,900 issues, each with [a hundred-plus] pages,” journalist and author Felipe Pepe noted. “That’s one magazine from one country. [Human translation] would be ideal, but it’s impossible.”
On the Gaming Alexandria Discord, user asie wrote that people who use tools like Google Lens or DeepL are already using AI-powered OCR and translation tools. At this point, these kinds of tools are “just a fact of reality,” they added.
Again, any positive number is greater than zero. Don’t let the perfect be the enemy of the good. Something is better than nothing.
I don’t know how to explain the negative responses here as anything other than an ideological commitment to disliking anything that even remotely touches upon artificial intelligence. Absolute moral stances certainly have their place, but they sure ought to be used sparingly.
And this particular stance is silly.
Filed Under: ai, ai translations, preservation, translations, video games
Ctrl-Alt-Speech: Money For Nothing And Clicks For A Fee
from the ctrl-alt-speech dept
Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.
Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.
In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:
- Gamblers trying to win a bet on Polymarket are vowing to kill me if I don’t rewrite an Iran missile story (Times of Israel)
- Maybe Turning War Into a Casino Was a Bad Idea? (The Atlantic)
- French music streamer Deezer battles deluge of AI fraud (Financial Times)
- I hacked ChatGPT and Google’s AI – and it only took 20 minutes (BBC)
- US to Receive $10 Billion Fee for TikTok Deal, WSJ Reports (Bloomberg)
- ‘AI Is African Intelligence’: The Workers Who Train AI Are Fighting Back (404 Media)
- ‘Another internet is possible’: Norway rails against ‘enshittification’ (The Guardian)
Play along with Ctrl-Alt-Speech’s 2026 Bingo Card and get in touch if you win!
Filed Under: africa, ai, artificial intelligence, chatgpt, content moderation, enshittification, trust and safety
Companies: deezer, openai, polymarket, tiktok
The Government Uses Targeted Advertising to Track Your Location. Here’s What We Need to Do.
from the protect-yourself dept
We’ve all had the unsettling experience of seeing an ad online that reveals just how much advertisers know about our lives. You’re right to be disturbed. Those very same online ad systems have been used by the government to warrantlessly track people’s locations, new reporting has confirmed.
For years, the internet advertising industry has been sucking up our data, including our location data, to serve us “more relevant ads.” At the same time, we know that federal law enforcement agencies have been buying up our location data from shady data brokers that most people have never heard of.
Now, a new report gives us direct evidence that Customs and Border Protection (CBP) has used location data taken from the internet advertising ecosystem to track phones. In a document uncovered by 404 Media, CBP admits what we’ve been saying for years: The technical systems powering creepy targeted ads also allow federal agencies to track your location.
The document acknowledges that a program by the agency to use “commercially available marketing location data” for surveillance drew from the process used to select the targeted ads shown to you on nearly every website and app you visit. In this blog post, we’ll tell you what this process is, how it can and is being used for state surveillance, and what can be done about it—by individuals, by lawmakers, and by the tech companies that enable these abuses.
Advertising Surveillance Enables Government Surveillance
The online advertising industry has built a massive surveillance machine, and the government can co-opt it to spy on us.
In the absence of strong privacy laws, surveillance-based advertising has become the norm online. Companies track our online and offline activity, then share it with ad tech companies and data brokers to help target ads. Law enforcement agencies take advantage of this advertising system to buy information about us that they would normally need a warrant for, like location data. They rely on the multi-billion-dollar data broker industry to buy location data harvested from people’s smartphones.
We’ve known for years that location data brokers are one part of federal law enforcement’s massive surveillance arsenal, including immigration enforcement agencies like CBP and Immigration and Customs Enforcement (ICE). ICE, CBP and the FBI have purchased location data from the data broker Venntel and used it to identify immigrants who were later arrested. Last year, ICE purchased a spy tool called Webloc that gathers the locations of millions of phones and makes it easy to search for phones within specific geographic areas over a period of time. Webloc also allows them to filter location data by the unique advertising IDs that Apple and Google assign to our phones.
But a document recently obtained by 404 Media is the first time CBP has acknowledged the location data it buys is partially sourced from the system powering nearly every ad you see online: real-time bidding (RTB). As CBP puts it, “RTB-sourced location data is recorded when an advertisement is served.”
Even though this document is about a 2019-2021 pilot use of this data, CBP and other federal agencies have continued to purchase and use commercially obtained location data. ICE has purchased location tracking tools since then and recently requested information on “Ad Tech” tools it could use for investigations.
The CBP document acknowledges two sources of location data that it relies on: software development kits (SDKs) and RTB, both methods of location-tracking that EFF has written about before. Apps for weather, navigation, dating, fitness, and “family safety” often request location permissions to enable key features. But once an app has access to your location, it could share it with data brokers directly through SDKs or indirectly (and often without the app developers’ knowledge) through RTB. Data brokers can collect location data from SDKs that they pay developers to put in their apps. When relying on RTB, data brokers don’t need any direct relationship with the apps and websites they’re collecting location data from. RTB is facilitated by ad companies that are already plugged into most websites and apps.
How Real-Time Bidding Works
RTB is the process by which most websites and apps auction off their ad space. Unfortunately, the milliseconds-long auctions that determine which ads you see also expose your information, including location data, to thousands of companies a day. At a high-level, here’s how RTB works:
- The moment you visit a website or app with ad space, it asks an ad tech company to determine which ads to display for you.
- This ad tech company packages all the information they can gather about you into a “bid request” and broadcasts it to thousands of potential advertisers.
- The bid request may contain information like your unique advertising ID, your GPS coordinates, IP address, device details, inferred interests, demographic information, and the app or website you’re visiting. The information in bid requests is called “bidstream data” and typically includes identifiers that can be linked to real people.
- Advertisers use the personal information in each bid request, along with data profiles they’ve built about you over time, to decide whether to bid on the ad space.
- The highest bidder gets to display an ad for you, but advertisers (or the adtech companies that represent them) can collect your bidstream data regardless of whether or not they bid on the ad space.
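To make those steps concrete, here is a minimal Python sketch of the kind of data a bid request can carry and what any auction participant can record, loosely modeled on the OpenRTB format. The app bundle, advertising ID, and field names here are illustrative assumptions, not an exact schema:

```python
# A simplified, illustrative "bid request": the bundle of data broadcast to
# potential advertisers the moment an ad slot becomes available. Values are
# made up for illustration.
bid_request = {
    "id": "auction-8f3a",
    "app": {"bundle": "com.example.weatherapp"},  # hypothetical app
    "device": {
        "ifa": "38400000-8cf0-11bd-b23e-10b96e40000d",  # advertising ID
        "ip": "203.0.113.42",
        "geo": {"lat": 38.8977, "lon": -77.0365},  # GPS-derived coordinates
    },
    "user": {"keywords": "fitness,travel"},  # inferred interests
}

def harvest(request):
    """What any auction participant can log, whether or not it wins the bid."""
    dev = request["device"]
    return {
        "ad_id": dev["ifa"],                               # stable identifier
        "location": (dev["geo"]["lat"], dev["geo"]["lon"]),  # precise position
        "app": request["app"]["bundle"],                   # behavioral signal
    }

record = harvest(bid_request)
print(record)
```

The point of the sketch is the asymmetry the article describes: losing bidders pay nothing, yet still walk away with a precise location tied to a stable advertising ID.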
A key vulnerability of real-time bidding is that while only one advertiser wins the auction, all participants receive data about the person who would see their ad. As a result, anyone posing as an ad buyer can access a stream of sensitive data about billions of individuals a day. Data brokers have taken advantage of this vulnerability to harvest data at a staggering scale. For example, the FTC found that location data broker Mobilewalla collected data on over a billion people, with an estimated 60% sourced from RTB auctions. Leaked data from another location data broker, Gravy Analytics, referenced thousands of apps, including Microsoft apps, Candy Crush, Tinder, Grindr, MyFitnessPal, pregnancy trackers and religious-focused apps. When confronted, several of these apps’ developers said they had never heard of Gravy Analytics.
As Venntel, one of the location data brokers that has sold to ICE, puts it, “Commercially available bidstream data from the advertising ecosystem has long been one of the most comprehensive sources of real-time location and device data available.” But the privacy harms of RTB are not just a matter of misuse by individual data brokers. RTB auctions broadcast the average person’s data to thousands of companies, hundreds of times per day, with no oversight of how this information is ultimately exploited. Once your information is broadcast through RTB, it’s almost impossible to know who receives it or control how it’s used.
What You Can Do To Protect Yourself
Revelations about the government’s exploitation of this location data show how dangerous online tracking has become, but we’re not powerless. Here are two basic steps you can take to better protect your location data:
- Disable your mobile advertising ID (see instructions for iPhone/Android). Apple and Google assign unique advertising IDs to each of their phones. Location data brokers use these advertising IDs to stitch together the information they collect about you from different apps.
- Review apps you’ve granted location permissions to. Apps that have access to your location could share it with other companies, so make sure you’re only granting location permission to apps that really need it in order to function. If you can’t disable location access completely for an app, limit it to only when you have the app open or only approximate location instead of precise location.
For more tips, check out EFF’s guide to protecting yourself from mobile-device based location tracking. Keep in mind that the security plan that’s best for you will vary in different situations. For example, you may want to take stronger steps to protect your location data when traveling to a sensitive location, like a protest.
What Tech Companies and Lawmakers Must Do
Legislators and tech companies must act so that individuals don’t bear the burden of defending their data every time they use the internet.
Ad tech companies must reckon with their role in warrantless government surveillance, among other privacy harms. The systems they built for targeted advertising are actively used to track people’s location. The best way to prevent online ads from fueling surveillance is to stop targeting ads based on detailed behavioral profiles. Ads can still be targeted contextually—based on the content people are viewing—without collecting or exposing their sensitive personal information. Short of moving to contextual advertising, tech companies can limit the use of their systems for government location tracking by:
- Stopping the use of precise location data for targeted advertising. Ad tech companies facilitating ad auctions can and should remove precise location data from bid requests. Ads can be targeted based on people’s coarse location, like the city they’re in, without giving data brokers people’s exact GPS coordinates. Precise location data can reveal where we work, where we live, who we meet, where we protest, where we worship, and more. Broadcasting it to thousands of companies a day through RTB is dangerous.
- Removing advertising IDs from devices, or at minimum, disabling them by default. Advertising IDs have become a linchpin of the data broker economy and are actively used by law enforcement to track people’s location. Advertising IDs were added to phones in 2012 to let companies track you, and removing them is not a far-fetched idea. When Apple forced apps to request access to people’s advertising IDs starting in 2021 (if you have an iPhone you’ve probably seen the “Ask App Not to Track” pop-ups), 96% of U.S. users opted out, essentially disabling advertising IDs on most iOS devices. One study found that iPhone users were less likely to be victims of financial fraud after Apple implemented this change. Google should follow Apple’s lead and disable advertising IDs by default.
Lawmakers also need to step up to protect their constituents’ privacy. We need strong, federal privacy laws to stop companies from spying on us and selling our personal information. EFF advocates for data privacy legislation with teeth and a ban on ad targeting based on online behavioral profiles, as it creates a financial incentive for companies to track our every move.
Legislators can and must also close the “data broker loophole” on the Fourth Amendment. Instead of obtaining a warrant signed by a judge, law enforcement agencies can just buy location data from private brokers to find out where you’ve been. Last year, Montana became the first state in the U.S. to pass a law blocking the government from buying sensitive data it would otherwise need a warrant to obtain. And in 2024, Senator Ron Wyden’s EFF-endorsed Fourth Amendment is Not for Sale Act passed the House before dying in the Senate. Others should follow suit to stop this end-run around constitutional protections.
Online behavioral advertising isn’t just creepy; it’s dangerous. It’s wrong that our personal information is being silently harvested, bought by shadowy data brokers, and sold to anyone who wants to invade our privacy. This latest revelation of warrantless government surveillance should serve as a frightening wakeup call about how dangerous online behavioral advertising has become.
Reposted from the EFF’s Deeplinks blog.
Filed Under: 4th amendment, advertising, cbp, location data, mass surveillance, privacy, real time bidding, surveillance