Another Day, Another Bad Bill To Reform Section 230 That Will Do More Harm Than Good
from the no-bad dept
Last fall, when it first came out that Senator Brian Schatz was working on a bill to reform Section 230 of the Communications Decency Act, I raised questions publicly about the rumors concerning the bill. Schatz insisted to me that his staff was good, and when I highlighted that it was easy to mess this up, he said I should wait until the bill is written before trashing it:

Feel free to trash my bill. But maybe we should draft it, and then you should read it?
— Brian Schatz (@brianschatz) September 13, 2019
subject to subsection (e), making available a live company representative to take user complaints through a toll-free telephone number during regular business hours for not fewer than 8 hours per day and 5 days per week;

While there is a small site exemption, at Techdirt we're right on the cusp of the definition of a small business (one million monthly unique visitors -- and we have had many months over that, though sometimes we're just under it as well). There's no fucking way we can afford or staff a live call center to handle every troll who gets upset that users voted down his comment as trollish.
Again, I do think Schatz's intentions here are good -- they're just not based in the real world of anyone who's ever done any content moderation ever. They're based in a fantasy world, which is not a good place from which to make policy. Yes, many people do get upset about the lack of transparency in content moderation decisions, but there are often reasons for that lack of transparency. If you spell out exactly why a piece of content was blocked or taken down, you get people trying to (1) litigate the issue and (2) skirt the rules. As an example, if someone gets kicked off a site for using a racist slur, and you have to explain to them why, you'll see them argue "that isn't racist" even though it's a judgment call. Or they'll try to say the same thing using a euphemism. Simply assuming that explaining exactly why content was removed will fix these problems is silly.
And, of course, for most sites the call volume would be overwhelming. I guess Schatz could rebrand this as a "jobs" bill, but I don't think that's his intention. During a livestream discussion put on by Yale where this bill was first discussed, Dave Willner (who was the original content policy person at Facebook) said that this requirement for a live call center to answer complaints was (a) not possible and (b) so nonsensical it would be better to just hand out cash for people to burn for heating. Large websites make millions of content moderation decisions every day. Having live humans answer phone calls about all of that is simply not possible.
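To put rough numbers on why the call volume is overwhelming, here's a back-of-envelope sketch. Every input is an illustrative assumption (a site making one million moderation decisions a day, a 1% complaint rate, five minutes of agent time per call, 8-hour shifts); none of these figures come from the bill:

```python
# Back-of-envelope staffing estimate for a live complaint line.
# All inputs are illustrative assumptions, not figures from the bill.
decisions_per_day = 1_000_000   # moderation decisions a large site might make daily
complaint_rate = 0.01           # assume 1% of decisions trigger a phone complaint
minutes_per_call = 5            # assume 5 minutes of agent time per call
agent_minutes_per_day = 8 * 60  # one agent working the phones for an 8-hour shift

calls_per_day = decisions_per_day * complaint_rate
agents_needed = calls_per_day * minutes_per_call / agent_minutes_per_day
print(f"{calls_per_day:.0f} calls/day -> ~{agents_needed:.0f} agents on the phones")
# → 10000 calls/day -> ~104 agents on the phones
```

Even with these deliberately conservative assumptions, a mid-sized platform would need a hundred-plus full-time phone agents; the largest platforms, making tens of millions of decisions daily, would need thousands.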
And that's not all that's problematic. The bill also creates a 24 hour notice-and-takedown system for "illegal content." It seems to be more or less modeled on copyright's frequently abused notice-and-takedown provisions, but with a 24-hour ticking time bomb. This is similar to the French hate speech law that was just tossed out as unconstitutional. And if it's unconstitutional in France regarding censorship, you can pretty much bet that it wouldn't survive 1st Amendment scrutiny here:
Subject to subsection (e), if a provider of an interactive computer service receives notice of illegal content or illegal activity on the interactive computer service that substantially complies with the requirements under paragraph (3)(B)(ii) of section 230(c) of the Communications Act of 1934 (47 U.S.C. 230(c)), as added by section 6(a), the provider shall remove the content or stop the activity within 24 hours of receiving that notice, subject to reasonable exceptions based on concerns about the legitimacy of the notice.

This is yet another one of those ideas that sounds good in a fantasy land that works the way many people would like the world to work, but fails the second it breathes the sweet air of reality. Among the many, many issues with this:
- Determining what is and what is not "illegal content" or "illegal activity" often takes a hell of a lot longer than 24 hours. This is why law enforcement investigations are not completed within 24 hours. Yet social media companies are supposed to be much better at it than law enforcement?
- In which jurisdiction should the laws be applied regarding "illegal content" and "illegal activity"? Because some content may be illegal in some places, but not others. So how does that get worked out?
- As we've seen in other situations with notice-and-takedown provisions, the strong incentive is to take the damn content down as quickly as possible to avoid massive, crippling liability. In other words, this becomes a tool of censorship.
- Given that, it's hard to see how this could pass 1st Amendment muster.
Filed Under: appeals, brian schatz, call centers, censorship, john thune, notice-and-takedown, section 230, transparency
T-Mobile Is Already Trying To Wiggle Out Of Its Sprint Merger Conditions
from the told-you-so dept
Is it too early to say "I told you so" yet?

Despite countless pre-merger promises that its $26 billion merger would create oodles of new jobs, T-Mobile laid off 6,000 employees at its Metro prepaid division before the ink was even dry. Another 200 Sprint employees were fired during a six-minute conference call a few weeks ago. T-Mobile and Sprint quietly confirmed the layoffs had nothing to do with the pandemic.
Both the FCC and DOJ ignored all critical data and rubber stamped the deal, because that's what feckless, revolving-door regulators do. The only real resistance T-Mobile saw to its competition and job-eroding deal was the California PUC, which set certain 5G deployment (T-Mobile had to deliver 5G connections of at least 300Mbps to 93 percent of California by the end of 2024) and job (T-Mobile had to hire 1,000 additional employees within three years in California) targets. Given T-Mobile told regulators repeatedly that the merger would dramatically expand 5G deployment and jobs by default, neither should have been a problem.
Yet less than three months after the deal's closure, T-Mobile is already trying to wiggle out from underneath its obligations in California by claiming California regulators lack the authority to enforce them:
"The commission simply does not have the authority to require a wireless carrier to hire a particular number of employees in a given time period," T-Mobile wrote. "The legislature has never granted it such authority and, prior to the issuance of the decision, the Commission has not attempted to impose such a mandate on any other communications provider in any context." T-Mobile also said the condition is "particularly burdensome and unjustified in light of the current COVID-19 crisis."

California state law makes it clear the CPUC has the authority to determine whether or not a merger "maintains or improves" service quality for consumers, and "the quality of management of the resulting public utility doing business in the state." Said law also mandates that mergers must be "beneficial on an overall basis to state and local economies," and be "fair and reasonable to affected public utility employees, including both union and nonunion employees." Telecom megadeals routinely fail that test, but, courtesy of a corrupted Congress, we simply adore rubber stamping them anyway.
Meanwhile, COVID-19 creates the perfect cover for T-Mobile to dodge its already flimsy deal conditions, even if the 6,200 employees who have already lost their jobs were laid off due to the merger, not the pandemic. And in April of last year, former T-Mobile CEO John Legere insisted that the company would easily add 11,000 additional employees within three years of the deal's closing:
"Let me be really clear on this increasingly important topic. This merger is all about creating new, high-quality, high-paying jobs, and the New T-Mobile will be jobs-positive from Day One and every day thereafter."

That's in stark contrast not only to telecom history (such consolidation always causes job losses in support and middle management), but also to the warnings of unions, consumer groups, economists, and Wall Street analysts, who all made it very clear the deal could, over time, result not only in upwards of 20,000 lost jobs, but lower pay across the sector overall. That's before you get to the fact that reducing overall competitors from four to three will almost certainly result in less price competition and higher prices.
The T-Mobile Sprint merger was always hot garbage. Countless experts made this clear. US telecom megamergers are routinely terrible for markets, consumers, and employees. It's why everybody spends all day bitching about AT&T and Comcast. Yet time and time again, pre-merger promises by companies looking for regulatory approval are eagerly swallowed and parroted by regulators and the press, and conditions are applied to terrible deals that are either too flimsy to be useful, or simply aren't enforced. US merger mania (especially in telecom) remains a purgatorial game of Charlie Brown and Lucy football, where we see the same outcome time and time again, but simply refuse to alter our behavior because, at least for executives and investors, the broken path forward is the more profitable one.
UK Information Commissioner Says Police Are Grabbing Too Much Data From Phones Owned By Crime Victims
from the losing-the-tech-race,-eh? dept
The UK's Information Commissioner's Office (ICO) has taken a look at what law enforcement officers are hoovering up from citizens' phones and doesn't like what it sees. The relentless march of technology has enabled nearly everyone to walk around with a voluminous, powerful computer in their pocket -- one filled with the details and detritus of everyday living. And that relentless march has propelled citizens and their pocket computers right into the UK's regulatory void.

The ICO's report [PDF] doesn't just deal with the amount of data and communications UK cops can get from suspects' phones. It also deals with the insane amount of data cops are harvesting from devices owned by victims and witnesses of criminal acts. Left unaddressed, the lack of a solid legal framework surrounding mobile phone extractions (MPEs) will continue to lead law enforcement officers to believe they can harvest everything and look for the relevant stuff at their leisure.
Very few people would consent to this sort of intrusive search, but some aren't aware of how extensive these searches are. Those that are aware are less likely to come forward to help further an investigation, even if they're a victim of a crime.
Large volumes of data are likely to include intimate details of the private lives of not only device owners but also third parties (eg their family and friends). In other words, it is not just the privacy of the device owner that is affected, but all individuals that have communicated digitally with that person or whose contact details have been added by the owner. This presents considerable risks to privacy through excessive processing of data held on or accessed through the phone.

Privacy is paramount when dealing with those not suspected of committing criminal acts. Standard MPE practice appears to ignore this crucial element, which only heightens distrust of law enforcement agencies. But privacy concerns are being swept aside in favor of efficiency. Why do things the right way when it's so much easier to do them this way?
Left unexplained, or in cases where it is not necessary or justified, this intrusion into individuals’ privacy presents a risk of undermining confidence in the criminal justice system. People may feel less inclined to assist the police as witnesses or to come forward as a victim, if they are concerned that their and their friends’ and families’ private lives will be open to police scrutiny without proper safeguards…
PI [Privacy International] published a report calling for an urgent review of the use of MPE by the police, (“Digital stop and search: how the UK police can secretly download everything from your mobile phone” 15). It raised questions over the lawful bases for carrying out extractions, the lack of national and local guidance, and the authorisation process. [...] PI were also concerned that the use of MPE kiosks (‘self-service’ devices used by police forces to download and analyse the contents of individuals’ mobile phones) is becoming more common and forming part of ‘business as usual’ for officers, despite a lack of governance, oversight and accountability in their use.

But if law enforcement officers don't get what they want, there's a good chance crime victims won't get what they want. Here's where things get extremely unpleasant. It's not that crime victims feel this way. It's what's actually happening. When investigators aren't given consent to perform MPEs on crime victims' phones, there's a chance law enforcement will just decide to stop pursuing the investigation.
Here's what Big Brother Watch discovered after obtaining records from multiple police forces in England and Wales:
It found officers had asked for the complainant's mobile phone data in 84 sexual assault cases. And every one of the 14 of those in which the complainant had declined had then been dropped by police.

The report calls for a long list of improvements and reforms, with an eye on safeguarding the privacy of crime victims and witnesses. ICO says all extractions should be logged and audited routinely. Investigators should be made aware that it's almost impossible to comply with data privacy laws when performing MPEs due to the amount of data/communications present on the average phone -- much of which "belongs" to other people who have communicated with the victim/witness.
It won't be enough to establish guidelines under the current legal framework. ICO says new laws must be put in place to make it explicit what can and can't be obtained with phone searches and when they can be lawfully carried out. This law would also need to set time constraints on examinations of extracted data, as well as the length of time it can be stored once obtained.
For now, the UK's legal framework is more limited than law enforcement's powers. The ICO's report aims to change that. Unfortunately, this will move at the speed of bureaucracy while tech continues to move at the speed of innovation. It's impossible to play catch-up at this speed. I say it's time to hit law enforcement with some broadly-written legislation -- you know, the sort of stuff the normals are forced to deal with all the time. Maybe that will slow the roll of data extraction until the government as a whole has time to address all the nuances.
Top German Court Rules Facebook's Collection And Use Of Data From Third-Party Sources Requires 'Voluntary' Consent
from the oh-no,-not-more-pop-ups dept
Back at the end of 2017, Germany's competition authority, the Bundeskartellamt, made a preliminary assessment that Facebook's data collection is "abusive". At issue was a key component of Facebook's business model: amassing huge quantities of personal data about people, not just from their use of Facebook, WhatsApp and Instagram, but also from other sites. If a third-party website has embedded Facebook code for things such as the 'like' button or a 'Facebook login' option, or uses analytical services such as 'Facebook Analytics', data will be transmitted to Facebook via APIs when a user calls up that third party's website for the first time. The user is not given any choice in this, and it was this aspect that the Bundeskartellamt saw as "abusive".

After the preliminary assessment, in February 2019 the German competition authority went on to forbid Facebook from gathering information in this way without voluntary permission from users:
(i) Facebook-owned services like WhatsApp and Instagram can continue to collect data. However, assigning the data to Facebook user accounts will only be possible subject to the users' voluntary consent. Where consent is not given, the data must remain with the respective service and cannot be processed in combination with Facebook data.

(ii) Collecting data from third party websites and assigning them to a Facebook user account will also only be possible if users give their voluntary consent.

If consent is not given for data from Facebook-owned services and third party websites, Facebook will have to substantially restrict its collection and combining of data. Facebook is to develop proposals for solutions to this effect.

Naturally, Facebook appealed against this decision, and the Düsseldorf Higher Regional Court found in its favor. However, as the New York Times reports, the Federal Court of Justice, Germany's highest court for civil and criminal matters, has just reversed that:

On Tuesday, the federal court said regulators were right in concluding that Facebook was abusing its dominant position in the market.

"There are neither serious doubts about Facebook's dominant position on the German social network market nor the fact that Facebook is abusing this dominant position," the court said. "As the market-dominating network operator, Facebook bears a special responsibility for maintaining still-existing competition in the social networking market."

Needless to say, Facebook vowed to fight on -- and to ignore the defeat for the moment. The case goes back to the lower court to rule again on the matter, but after the Federal Court of Justice guidance, it is unlikely to be in Facebook's favor this time. There is also the possibility that the case could be referred to the EU's top court, the Court of Justice of the European Union, to give its opinion on the matter.
Assuming that doesn't happen, the ruling could have a big impact not only on Facebook, but on all the other Internet giants that gather personal details from third-party sites without asking their visitors for explicit, voluntary permission. Although the ruling only applies to Germany, the country is the EU's biggest market, and likely to influence what happens elsewhere in the region, and maybe beyond. One bad outcome might be even more pop-ups asking you to give permission to have your data gathered, and be tracked as you move around the Internet.
Follow me @glynmoody on Twitter, Diaspora, or Mastodon.
Judge Sides With Twitter Over Devin Nunes In Case Over Satirical Internet Cow: Section 230 Removes Twitter From Frivolous Case
from the one-for-the-cow dept
Well, some small bit of good news in the Section 230 front: after a judge was clearly skeptical over Devin Nunes' arguments for why Twitter should be involved in Nunes' frivolous SLAPP suit over a satirical internet cow that mocks him, the judge has now announced that Section 230 of the CDA rightly protects Twitter.

In a letter that quickly dismisses each of Nunes's lawyer Steven Biss's silly arguments for why 230 doesn't apply, the judge basically says "nope" to all of those arguments and tells Twitter's lawyer to draft an order dismissing Twitter from the case. Here's just one part of the letter:
The court must look to 47 USC Section 230 and the caselaw interpreting the act and analyze plaintiff's allegations to determin if Twitter has immunity under the act. Plaintiff would have Twitter be held liable for defamation for the content placed on its internet platform by others and would have Twitter found to be negligent for not removing the content place on its internet platform by others. Section 230 reads in subsection (c)(1) "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider". Plaintiff seeks to have the court treat Twitter as the publisher or speaker of the content provided by others based on its allowing or not allowing certain content to be on its platform. The court refuses to do so and relies on the rulings in Zeran v. Am. Online... The court in Zeran stated "Section 230 creates a federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service. Specifically section 230 precludes courts from entertaining claims that would place a computer service provider in a publisher's role. Thus, lawsuits seeking to hold a service provider liable for its exercise of a publisher's traditional editorial functions -- such as deciding whether to publish, withdraw, postpone or alter content -- are barred".

The judge is also not at all impressed by Biss's argument of "but Twitter is so biased!" That doesn't matter:
The plaintiff also alleges that Twitter has a bias towards a point of view and that bias is so extreme that it governs its decisions regarding content that is allowed on its internet platform and that course of conduct makes it a content provider. The allegations in the Nemet Chevrolet, Ltd. v. Consumeraffairs.com, Inc.... were similar to those by the plaintiff in this case concerning content decisions being one sided and the court in the Nemet case ruled that the service provider was immune from suit pursuant to 47 USC Section 230.

As an interesting side note, the court also cites Section (c)(2) of Section 230, the rarely used part of the law that says you also can't be liable for moderation decisions. A lot of cases around 230 don't even consider the (c)(2) issues, because (c)(1) is usually enough to dismiss. But here, the court basically says both of them are good enough to get Twitter out of the lawsuit.
The court finds the issues in this case substantially similar to the issues presented in the Zeran and Nemet cases and applying the rulings in the Zeran and Nemet cases the court finds that Twitter is not a content provider based on the allegations by Plaintiff in this lawsuit. The Court finds that Twitter is immune from the defamation claims of plaintiff based on 47 USC Section 230.
The court further finds that 47 USC Section 230 (c)(2) provides immunity for all civil liability and therefore Twitter is immune from Plaintiff's negligence claim based on the allegations in the complaint and the courts application of the rulings in the Zeran and Nemet cases to the allegations in this case.

Next up: hopefully the court will dismiss the underlying defamation claims against the two satirical Twitter accounts (Devin Nunes' Cow and Devin Nunes' Mom) along with political consultant Liz Mair.
Filed Under: cda 230, devin nunes, devin nunes' cow, intermediary liability, nunescow, pat carome, section 230, steven biss
Companies: twitter
Is Data Privacy A Privilege? The Racial Implications Of Technology Based Tools In Government
from the algorithmic-bias-and-privacy dept
While we often read about (and most likely experience ourselves) public outrage regarding personal data pulled from websites like Facebook, the news often fails to highlight the staggering amounts of personal data collected by our governments, both directly and indirectly. Outside of the traditional Fourth Amendment protocols for constitutional searches and seizures, personally identifiable information (PII) – information that can be used to potentially identify an individual – is collected when we submit tax returns, apply for government assistance programs or interact with federal and government social media accounts.

Technology has not only expanded governments’ capability to collect and hold onto our data, but has also transformed the ways in which that data is used. It is not uncommon now for entities to collect metadata, or data that summarizes and provides information about other data (for example, the author of a file or the date and time the file was last edited). The NSA, for instance, collected metadata from over 500 million call detail records during 2017, much of which it did not have the legal authority to collect. Governments now even purchase huge amounts of data from third party tech companies.
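The file-metadata example above is easy to see on any machine. This sketch (Python, standard library only) writes a throwaway file and then reads the metadata the operating system keeps alongside its contents; the filename and contents are arbitrary placeholders:

```python
# Every file carries metadata the OS records automatically,
# separate from the file's actual contents.
import os
import tempfile
from datetime import datetime, timezone

# Create a throwaway file (contents are an arbitrary placeholder).
with tempfile.NamedTemporaryFile(delete=False, suffix=".txt") as f:
    f.write(b"the contents are only part of the story")
    path = f.name

# os.stat exposes metadata: size, timestamps, ownership, permissions...
info = os.stat(path)
modified = datetime.fromtimestamp(info.st_mtime, tz=timezone.utc)
print(f"size: {info.st_size} bytes, last modified: {modified.isoformat()}")

os.remove(path)  # clean up
```

The point is that none of this came from the file's contents: the timestamps, size, and (on a real system) owner are generated as a side effect of ordinary use, which is exactly why metadata collection at scale is so revealing.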
The implementation of artificial intelligence tools throughout the government sector has influenced what these entities do with our data. Governments aiming to “reduce the cost of core governance functions, improve the quality of decisions, and unleash the power of administrative data” have implemented tools like artificial intelligence decision making in both criminal and civil contexts. Algorithms can be effective tools in remedying government inefficiencies, and idealistic champions believe that artificial intelligence can eliminate human and subjective emotions to obtain a logical and “fairer” outcome. Data collected by governments plays a role in developing these tools. Individual data is taken and aggregated into data sets which are then used for algorithmic decision making.
With all this data, what steps do governments take to protect the information they collect from their citizens?
Currently, there are real and valid concerns that governments fail to take the adequate steps necessary to protect and secure data. Take, for instance, the ever-increasing number of data breaches in densely populated cities like New York and Atlanta. In 2018, the city of Atlanta was subjected to a major ransomware attack by an Iranian-based group of hackers that shut down major city systems and led to outages related to “applications customers use to pay bills or access court related information” (as per Richard Cox, the city's Chief of Operations at the time). Notably, the city had been heavily criticized for its subpar IT and cybersecurity infrastructure and apathetic attitude towards fixing vulnerabilities in its systems.
While the city claimed there was little evidence that the attack had compromised any of its citizens’ data, this assertion seems unrealistic given the span and length of the attack and the number of systems that were compromised.
Race, Algorithms and Data Privacy
As a current law student, I have given much thought over the last few years to the role of technology as the “great equalizer.” For decades, technology proponents have advocated for increased use in the government sector by highlighting its ability to level the playing field and provide opportunities for success to all, regardless of race, gender or economic income.
However, having gained familiarity with the legal and criminal justice systems, I have begun to see that human racial and gender biases, coupled with government officials’ failure to understand or question technological tools like artificial intelligence, often lead to inequitable results. Further, the allocation of government funds for technological tools often goes to police and prosecution rather than defense and protection of vulnerable communities.
There is a real threat that algorithms do not achieve the intended goals of objectivity and fairness, but further perpetuate the inequalities and biases that already exist within our societies. Artificial intelligence has enabled governments to cultivate “big data” and thus, have added another tool to their arsenals of surveillance technology. “Advances in computational science have created the ability to capture, collect, and combine everyone's digital trails and analyze them in ever finer detail." Through the weaponization of big data, governments can even more easily identify, control, and oppress marginalized groups of people within a society.
As our country currently addresses the decades of systematic racism inherent in our political and societal systems, privacy must be included in the conversation and reform. I believe that data privacy today is regarded as a privilege rather than a right, and this privilege is often reserved for white, middle- and upper-class citizens. The complex, confusing and lengthy nature of privacy policies not only requires some familiarity with data privacy and what the government and companies do with data, but also the time, energy and resources to read through the entirety of the document. If the receipt of vital benefits was contingent on my acceptance of a government website privacy policy, I have no doubt that I would accept the terms regardless of how unfavorable they were to me.
The very notion of the right to privacy in the United States is derived, historically, from white, male, and upper class values. In 1890, Samuel D. Warren and Louis Brandeis (future Supreme Court Justice) penned their famous and often quoted “The Right to Privacy” in the Harvard Law Review. The article was, in fact, a response to the discomfort that accompanied their high-society lives, as the invention of the camera now meant that their parties were captured and displayed prominently in newspapers and tabloid publications.
These men did not intend to include the general population when creating this new right to privacy, but instead aimed to safeguard their own interests. They were not looking to protect the privacy of the most vulnerable populations, but to make sure that the local tabloid didn’t publish any drunk or incriminating photos from the prior night’s party. Even the traditional conception of privacy, which employs physical space and the home to illustrate the public versus private divide, is a biased and elitist concept. Should someone, then, lose their right to privacy if they do not have a home themselves?
In the criminal justice system, how do we know that courts and governments are devoting an adequate amount of resources to secure records and the data of individuals in prison or court? Large portions of budgets are spent on prosecutorial tools, and it seems as though racial biases prevent governments from devoting monetary resources to protect minorities’ data and privacy as they move through the criminal justice system. Governments do not reveal much as to whether they notify prisoners and defendants if data is compromised, so it is clear that these systems must be scrutinized moving forward.
Moving Forward
Moving forward, how do we address race and inequity issues surrounding data privacy and hold our governments accountable? Personally, I think we need to start with better data privacy legislation. Currently, California is the only state with a tangible data privacy law, which should be expanded to the federal level. Limits must be placed on how long governments can hold onto data, what can be done with the data collected, and proper protocols for data destruction must be established. I believe there is a dire need for better cybersecurity legislation that places the burden on government entities to develop cybersecurity protections that exceed the bare minimum.
The few pieces of cybersecurity legislation that do exist tend to be reactive more than proactive, and often utilize ambiguous terms like “reasonable cybersecurity features,” which ultimately give more power to companies and entities to say they did what was reasonable for the situation at the time. Additionally, judges and lawyers need to be held accountable for data protection as well. Because technology is so deeply integrated into the court systems and the entirety of law itself, there should be an ethical and professional code of conduct-based requirement that holds judges and supporting court staff to a standard in which they must actively work to protect data.
We also need to implement better education in schools regarding the importance of data privacy and what governments and companies do with our personal identifying information. Countries throughout the European Union have developed robust programs in schools that focus on teaching the importance of digital privacy and skills. Programs like Poland’s “Your data – your concern” enable the youth to understand and take ownership of their privacy rather than blatantly click “Accept” on a privacy policy. To address economic and racial inequalities, non-profit groups should also aim to integrate these courses into public programming, adult education curricula, and prison educational programs.
Finally, and most importantly, we need to place limits on and reconsider what technological tools both local and federal governments are using and the racial biases inherent in these tools. Because technology can be weaponized to continue oppression, I question whether governments should implement these solutions prior to addressing the underlying systemic racism that already exists within our societies. It is important to remember that the algorithms and the outcomes they generate – especially in the context of government – reflect existing biases and prejudices in our society. It is clear that governments are not yet willing to accept responsibility for the biases present in these algorithms or to strive to protect data regardless of race, gender and income level.
For example, a study conducted on an algorithm used to determine a criminal defendant’s likelihood of reoffending found an 80% error rate in predictions of violent recidivism. Problematically, these errors impacted minority groups significantly more than they did white defendants. The study found that the algorithm incorrectly flagged Black defendants as future re-offenders at almost double the rate that it incorrectly flagged white defendants. Because recidivism rates are considered in sentencing and bail determinations, these algorithms disastrously impact minorities’ livelihoods by subjecting them to harsher punishment and more time in prison; individuals lose valuable time and are unable to work or support their families and communities. Until women and minorities have more of a presence in both government and programming, and can use their diverse perspectives to ensure that algorithms do not contain biases, these technology tools will continue to oppress.
We must now ask if we have succeeded in creating an environment in which these tools can be implemented to help more than cause harm. While I think these tools currently cause more harm, I am hopeful that as our country begins to address and remedy the underlying systemic racism that exists, we can create government systems that can safely implement tools in ways that benefit all.
Chynna Foucek is a rising third year student at Brooklyn Law School, where she focuses on Intellectual Property, Cybersecurity and Data Privacy law.
DOJ Finally Uses FOSTA, Over Two Years Later... To Shut Down A Site Used By Sex Workers
from the worth-it? dept
For years leading up to the passage of FOSTA, we were told that Congress had to pass the law as quickly as possible because so many women were "at risk" due to trafficking. And when asked for evidence of this, people would point to Backpage, even though the site had shut down its "adult" section under pressure from Congress a year earlier. Of course, the actual stats that were provided turned out to be fake and Backpage was seized before the law was even passed. The charges against the founders did not include sex trafficking charges. Also, as the details have come out about Backpage, it's become evident that rather than facilitating sex trafficking, the company was actively working with law enforcement to find and arrest sex traffickers. However, where they started to push back on law enforcement was when law enforcement wanted to go after non-trafficked sex workers.

However, with all of the moral panic around the need to pass FOSTA, we highlighted earlier this year that two years had gone by and the DOJ had not used the law a single time to go after any "sex trafficking" site. Instead, as we predicted, the law was being used in nuisance lawsuits, such as those against mailing list provider MailChimp and CRM provider Salesforce because Backpage had used those services.
Finally, last week, however, the DOJ made use of FOSTA in shutting down a website and arresting its operator. A site called CityXGuide.com (and some other sites that it ran -- including one with a name similar to Backpage) were seized, and the guy who ran it, Wilhan Martono, was arrested in California. From the details provided, it does look like Martono saw an opportunity to jump into the market vacated by Backpage, and the charges claim that he brought in $21 million doing so.
The original indictment was done in early June, but it was only just unsealed with Martono's arrest and the seizure of the various websites. It does seem clear that Martono sought to be the source for advertising sex work, but the DOJ conveniently mashes together sex work and sex trafficking, because that's the kind of thing law enforcement likes to do.
Indeed, the immediate reaction to this appears to be that plenty of non-trafficked sex workers, who previously had relied on Backpage to remain safe and now relied on Martono's sites, are again put in danger. The Hacking/Hustling collective -- a group of sex workers who came together to advocate around issues such as FOSTA -- put out a press release calling out what a stupid, counterproductive move this is:
“When we are re-envisioning public safety, this is a perfect example of why we can’t exempt human trafficking. Instead of resources going to real investigations or victim support, you have six agencies spending time and resources reading ads and looking for the word ‘blow job’” said Lorelei Lee, a collective member of Hacking//Hustling.

I was going to link to another website that has a blog advocating for sex workers' rights that explained in great detail how this puts sex workers in danger, but honestly, under a broad interpretation of FOSTA, linking to that website might violate the law. That's because after reading the blog post, I saw that there was a link to a "find escorts" site associated with the blog, and while I think I should be able to link to such a blog post, with its cogent explanation for why this DOJ action puts women at risk... merely linking to it would put me at risk under a broad reading of FOSTA (a stupid, unconstitutional reading, but, alas, these are some of the chilling effects created by the law).
Either way, it's difficult to see how this does anything to stop actual sex trafficking. Indeed, again, it's likely to put victims at even greater risk -- while also putting sex workers at greater risk. Studies have shown that when these sites go down, more women are put at risk. Even worse, as noted, Backpage actually helped law enforcement track down and arrest traffickers. But by making everything else such sites do illegal, it appears that (obviously) Martono avoided helping law enforcement at all (the indictment suggests he ignored various subpoenas).
Again, this is exactly as tons of people predicted. When these ads were appearing on places like Craigslist and Backpage, those companies worked closely with law enforcement to go after actual traffickers and get them arrested. But now, with things like FOSTA, rather than do that, law enforcement can just... take down the best source to find and track down traffickers, pushing them to sites that are less and less likely to help law enforcement? How does that make sense? Indeed, we've covered a number of law enforcement officials saying that the shutdown of Backpage has made it more difficult to find actual traffickers.
And, if you want any more evidence of that: note that nowhere with this announcement is there anything about arresting any actual traffickers. I have a request in to the DOJ asking if they or any law enforcement have arrested any actual traffickers who used these sites -- but at the time of publishing they have not responded. The indictment claims -- somewhat salaciously -- that the sites seized were used to identify "numerous victims of child sex trafficking," including a "13-year-old Jane Doe." Obviously, it's horrific to find out about that Jane Doe or any victim of sex trafficking, but it does seem odd that there is no mention of any arrests of the traffickers.
Because... wouldn't arresting actual traffickers be the goal here?
Either way, this story is getting buzz on Twitter from two communities: sex workers who are pissed off and angry that they're now losing business and the ability to operate safely... and... believers in the ridiculous QAnon conspiracy theory nonsense, who believe that everything going on in the world is a plot to cover up child sex trafficking. To them, this is evidence that the big promised crackdown on sex trafficking rings has begun. Of course, the lack of any actual arrests for actual sex trafficking kinda suggests that's not the goal here.
Still, this whole thing allowed FOSTA co-author Senator Rob Portman to take a victory lap. Someone should ask him why he's celebrating putting women at risk -- or at least ask him where the arrests are for actual sex trafficking.
Filed Under: doj, fosta, section 230, sex trafficking, sex workers, wilhan martono
Companies: backpage, cityxguide
Daily Deal: The All-In-One Mastering Organization Bundle
from the good-deals-on-cool-stuff dept
The All-In-One Mastering Organization Bundle has 5 courses to help you become more organized and efficient. You'll learn how to organize all your digital files into a single inbox-based system, how to organize your ideas into a hierarchy, how to categorize each object in your home/apartment/office/vehicle into one of the categories from the "One System Framework," and more. It's on sale for $30.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Filed Under: daily deal
Senators Launch Full On Nuclear War Against Encryption: Bill Will Require Broken Encryption, Putting Everyone At Risk
from the stop-pushing-this-bullshit dept
Another day, another bad bill. Just as we're coming to terms with the EARN IT Act moving forward in Congress, three Senators -- Lindsey Graham, Tom Cotton, and Marsha Blackburn -- have announced a direct attack on encryption. The full bill is here. It's 51 pages of insanity that would effectively destroy privacy and security on the internet. This is five-alarm fire bad.

For what it's worth, Graham is also a co-sponsor of the EARN IT Act, which makes me wonder if he's going to agree to an amendment of EARN IT that keeps encryption out of it while pushing this bill instead. That's now the rumor making the rounds, and I even received a press release from an anti-porn activist group supporting this bill because they think it will help clarify that EARN IT won't end encryption (none of that makes sense to me either, but...)
The announcement of the bill includes all the usual "think of the children" nonsense, claiming that we can't have encryption because some bad people might use it for bad stuff. The press release summarizes what they claim the bill will do:
Highlights of the Lawful Access to Encrypted Data Act:
- Enables law enforcement to obtain lawful access to encrypted data.
- Once a warrant is obtained, the bill would require device manufacturers and service providers to assist law enforcement with accessing encrypted data if assistance would aid in the execution of the warrant.
- In addition, it allows the Attorney General to issue directives to service providers and device manufacturers to report on their ability to comply with court orders, including timelines for implementation.
- The Attorney General is prohibited from issuing a directive with specific technical steps for implementing the required capabilities.
- Anyone issued a directive may appeal in federal court to change or set aside the directive.
- The Government would be responsible for compensating the recipient of a directive for reasonable costs incurred in complying with the directive.
Incentivizes technical innovation.
- Directs the Attorney General to create a prize competition to award participants who create a lawful access solution in an encrypted environment, while maximizing privacy and security.
Promotes technical and lawful access training and provides real-time assistance.
- Funds a grant program within the Justice Department’s National Domestic Communications Assistance Center (NDCAC) to increase digital evidence training for law enforcement and creates a call center for advice and assistance during investigations.

In short, this basically says "break encryption, but we won't tell you how." We're right back to "nerd harder" except that this time it's "nerd harder, or you're breaking the law."
Attorney General Barr statement: "I am confident that our world-class technology companies can engineer secure products that protect user information and allow for lawful access."— Sean Lyngaas (@snlyngaas) June 23, 2020
The actual text of the bill is even worse than the summary. It's crazy long so I won't do a full breakdown here, but will call out a few scary, scary bits. The key part is that this basically requires the end of encryption. While there is some language early on about it not applying if "technically impossible," there is other language that more or less cancels that out. Specifically, it requires Apple and other large device sellers to backdoor encryption. It is not an option, but a requirement:
DEVICE MANUFACTURERS.—A device manufacturer that sold more than 1,000,000 consumer electronic devices in the United States in 2016 or any calendar year thereafter, or that has received an assistance capability directive under section 3513, shall ensure that the manufacturer has the ability to provide the assistance described in subsection (b)(2) for any consumer electronic device that the manufacturer—
‘‘(A) designs, manufactures, fabricates, or assembles; and ‘‘(B) intends for sale or distribution in the United States.

So, if you sell more than a million consumer electronic devices in the US, you are required to make sure they have backdoors. That's... going to be a LOT of backdoors. Every Alexa device. Every smart TV. And, of course, every phone. That's devices. How about apps and services? More of the same:
PROVIDERS OF REMOTE COMPUTING SERVICE; OPERATING SYSTEM PROVIDERS.—A provider of remote computing service or operating system provider that provided service to more than 1,000,000 subscribers or users in the United States in 2016 or any calendar year thereafter, or that has received an assistance capability directive under section 3513, shall ensure that the provider has the ability to provide the assistance described in subparagraphs (A) and (B) of subsection (b)(2) for any remotely stored data that the provider processes or stores.

That's... a lot of websites that will be barred from using real end-to-end encryption. The "shall ensure" part is what should scare everyone.
That's still talking about data stored on those servers though. As for "data in motion" again, services will have to provide backdoors under this bill. The reference to "technically impossible" only seems to apply to "independent actions of an unaffiliated entity that make it technically impossible to do". So, the only way to avoid having to break encryption on your own services is to... outsource it to an unaffiliated entity who can make it impossible for you to break the encryption?
As for messaging services: again, the bill "shall ensure" assistance:
A provider of wire or electronic communication service that had more than 1,000,000 monthly active users in the United States in January 2016 or any month thereafter, or has received an assistance capability directive under section 3513, shall ensure that the provider has the ability to provide the information, facilities, and technical assistance described in section 2518(4).

And it gets worse. The bill allows the Attorney General to order someone to break encryption:
If a person fails to comply with a directive issued under subsection (b), the Attorney General may file a petition for an order to compel the person to comply with the directive in the United States District Court for the District of Columbia, which shall have jurisdiction to review the petition.

There's also a giant "NERD HARDER" section, which explains Bill Barr's comments above. Basically it creates a contest, run by the Attorney General, to create a type of backdoored encryption where the Attorney General and his hand-picked judges will determine which technology wins. And by "wins" I mean loses, because that technology will be broken in no time at all, putting everyone at risk.
This whole thing is so incredibly dangerous, and it's not even clear that encryption is a real problem for law enforcement. The basic cost-benefit analysis here is that this law would put everyone, and all our communications, at risk of attack, for a possible benefit in a tiny number of cases, where there remains no evidence that a backdoor would have helped stop any crime. I can't see how the tradeoff is worth it, and any elected official pushing this nonsense should be asked to explain how they weigh these costs and benefits. And if they answer like Bill Barr by saying "smart techies can figure it out" they should have their views discounted for being idiots.
Meanwhile, the press release leads off with quotes from the three sponsors, all of which are head-bangingly wrong, but designed to do the usual tugging at the emotional strings rather than any actual recognition of what they're pushing here:
“Terrorists and criminals routinely use technology, whether smartphones, apps, or other means, to coordinate and communicate their daily activities. In recent history, we have experienced numerous terrorism cases and serious criminal activity where vital information could not be accessed, even after a court order was issued. Unfortunately, tech companies have refused to honor these court orders and assist law enforcement in their investigations. My position is clear: After law enforcement obtains the necessary court authorizations, they should be able to retrieve information to assist in their investigations. Our legislation respects and protects the privacy rights of law-abiding Americans. It also puts the terrorists and criminals on notice that they will no longer be able to hide behind technology to cover their tracks,” said Graham.

There remains little evidence that terrorists have been able to communicate without law enforcement being able to access the info. Remember, the FBI flat out lied about how many devices it had in its possession that it couldn't get into, and has since refused to give an updated number (despite multiple requests). At the same time, every time the FBI does come out and point to a situation where it can't get into a phone, a few months later, they seem to admit that, well, actually, there was a technology that let them get in.
On top of that, we've discussed how law enforcement and the FBI have access to so much other information thanks to social media, and various open source intelligence tools, that the idea that they need to attack encryption is just ridiculous.
And that leaves out something else too: if we put backdoors into encryption, guess what will become a huge target for "terrorists and criminals"? That's right: all of our communications.
“Tech companies’ increasing reliance on encryption has turned their platforms into a new, lawless playground of criminal activity. Criminals from child predators to terrorists are taking full advantage. This bill will ensure law enforcement can access encrypted material with a warrant based on probable cause and help put an end to the Wild West of crime on the Internet,” said Cotton.

This is just a joke. The internet is not "lawless" and there's no indication of increased criminal activity, nor any evidence that law enforcement cannot solve crimes because of encryption or the internet. This bill won't ensure anything other than opening up a new avenue for terrorists and criminals to terrorize.
“User privacy and public safety can and should work in tandem. What we have learned is that in the absence of a lawful warrant application process, terrorists, drug traffickers and child predators will exploit encrypted communications to run their operations,” said Blackburn.

Yes, user privacy and public safety do work in tandem. But you know what would ruin that? Breaking encryption and throwing both of those things into the gutter.
This bill should be trashed and these three Senators (and the Attorney General) deserve mockery for a technically ignorant, totally clueless and dangerous bill that would harm Americans and destroy both privacy and security, because some law enforcement agencies are too lazy to do their jobs. Frankly, the intelligence community should come out screaming about this bill as well, as they know full well how much more dangerous this will make their own work. This is a ridiculous attack on the internet.
Filed Under: backdoors, doj, earn it, encryption, fbi, going dark, laed, lindsey graham, marsha blackburn, tom cotton, william barr
The Coronavirus Laid Bare Our Empty Lip Service To Fixing The 'Digital Divide'
from the do-not-pass-go,-do-not-collect-$200 dept
FCC boss Ajit Pai likes to repeatedly proclaim that one of his top priorities while chair of the FCC is to "close the digital divide." Pai, who clearly harbors post-FCC political aspirations, can often be found touring the nation's least-connected states proclaiming that he's working tirelessly to shore up broadband connectivity and competition nationwide. More often than not, the junkets involve Pai informing locals that gutting FCC oversight of some of the least competitive, least liked companies in America resulted in near-miraculous outcomes.

Reality continues to have something else to say.
In the wake of COVID-19 quarantines, more attention than ever has been given to the fact that upwards of 41 million Americans (double official FCC estimates) still can't get any type of broadband despite thirty years of subsidization and lip service toward fixing the nation's "digital divide." Millions more can't afford service because feckless regulators and limited competition work in concert to ensure U.S. broadband prices remain some of the highest in the developed world. This was always a problem. It's just more obvious now that citizens in countless COVID-19 hotspots are forced to actually pay attention to it.
While there's a universe of folks paid by the sector to pretend this is all fantasy or hyperbole, at the heart of the problem remain captured regulators who can't be bothered to hold bad actors accountable or adequately map where US broadband is or isn't available. Reveal has a good piece talking to policy experts who (once again, with feeling) note that the core of the problem is bad FCC leadership and bad data. As in, we literally do not know where broadband is available in the United States or at what speeds and price points it's offered. We pretend we do, but we simply don't:
"No one really knows how many people don’t have high-speed internet access. The government puts the figure at 21 million, but most studies show that’s a drastic undercount – Microsoft estimates it could be as many as half of all Americans. But there is something that most people involved agree on.

“This has been a colossal failure,” said Christopher Ali, an assistant professor in media studies at the University of Virginia, who has spent the last few years writing a book about broadband deployment in the United States. “We’re spending a lot of money. We’re just not spending it efficiently, and we’re not spending it democratically.”

Yet we're still throwing countless billions at broadband providers to fix a problem we don't actually understand. Often we don't understand it by design; lobbyists for the biggest broadband providers have for decades fought against more accurate broadband mapping, knowing full well it will only reveal the sorry state of US broadband availability and competition. And when ISPs are caught time, and time, and time again taking taxpayer subsidies for services only half deployed (if you're lucky), our feckless regulators don't genuinely do much about it:

"The last time the FCC handed out money to build internet infrastructure, in 2015, Frontier Communications and CenturyLink, two of the country’s largest internet service providers, collectively won grants worth $3.2 billion over the next four years – more than $800 million a year between them and more than a third of the program’s $9 billion total.

In January, CenturyLink wrote to the FCC to say it was failing to meet a deadline for deploying networks in 23 of the 33 states in which it was working. In April, having also failed to meet deadlines and facing mountains of debt, Frontier Communications declared bankruptcy."

Instead of ramping up competitive policies and accountability for a broken market, the FCC, sans any legitimate evidence, decided to obliterate FCC authority over telecom at lobbyist behest. And while we have seen some efforts to improve broadband mapping via the Broadband Data Act (which encourages more verification of data by the FCC and the integration of more crowdsourced and detailed data), any actual improvement remains many years and many more millions away from fruition. Meanwhile, the wireless industry is already hard at work trying to ensure that 5G networks are exempt from many of these improvements.
There's an entire cottage industry of telecom-linked experts whose entire mission is to assure you that none of this is actually happening, or if it is happening, it's only because the United States is so gosh darned big. The reality, however, is quite simple: US broadband is patchy, expensive, uncompetitive, and mediocre because the telecom industry lobbies weak-kneed regulators and lawmakers to keep it that way. It's a truth we go to great, comical lengths to deny, because whether it's unaccountable subsidies, monopoly rents, or campaign contributions, it's simply far more profitable to deny it.