About Techdirt.
Started in 1997 by Floor64 founder Mike Masnick and then growing into a group blogging effort, the Techdirt blog relies on a proven economic framework to analyze and offer insight into news stories about changes in government policy, technology and legal issues that affect companies’ ability to innovate and grow. As the impact of technological innovation on society, civil liberties and consumer rights has grown, Techdirt’s coverage has expanded to include these critical topics.
The dynamic and interactive community of Techdirt readers often comment on the addictive quality of the content on the site, a feeling supported by the blog’s average of ~1 million visitors per month and more than 1.7 million comments on 73,000+ posts. Both Business Week and Forbes have awarded Techdirt Best of the Web thought leader awards.
You can also find Techdirt on Twitter and Facebook.
Dear Marin County Board of Supervisors: Reject The Sheriff’s Proposal To Install License Plate Cameras In The County
from the bay-area-big-brother dept
With almost zero public notice, the Board of Supervisors of Marin County, California (just to the north of San Francisco over the Golden Gate Bridge) is on the verge of approving tomorrow a demand by the county sheriff’s department to install license plate cameras throughout the county. As a county resident, I object. My comment submitted to the board is below.
Dear Marin County Supervisors:
In the last 30 days I have entered the Gateway Shopping Center in Marin City on at least 11/6, 11/21, and 11/24 to get groceries, dine, and purchase other household goods.
None of this information is your business, and it is certainly not the business of the Marin County Sheriff’s Department. But if you authorize their proposal to allow automatic license plate reader cameras to be installed throughout Marin County, this location information is exactly the sort they will be able to know about each and every person driving in Marin County, be they residents or their guests.
I have also gone to Strawberry on at least 10/31, 11/7, 11/8, 11/10, 11/15, 11/16, and 11/21, to go grocery shopping, dine, and seek medical care.
As a resident in unincorporated Marin, these places are in my neighborhood and where I need to go to shop, dine, and do the business life requires. It is also the activity businesses in Marin depend on people doing. But if you let the Marin County Sheriff’s Department hang these cameras, it will be impossible to go to any of these places without them knowing.
I have also regularly driven on Highway 1 to enter Mill Valley. I do not have complete records of these travels, but if you let the Sheriff’s Department hang the cameras where they propose, they will.
And it is not just residents of unincorporated Marin who will have the details of their personal life documented by the police; it will be every single person with any reason to be here in the county, including every lawful one. The proposal preys on fear, such as with the included “crime heat map.” But it is a “heat map” that happens to directly correlate to where people live and conduct business in the county and thus happens to reflect where most activity occurs, including lawful activity, which would all be caught by this camera dragnet too.
The sheriff further proposes to hang cameras on Sir Francis Drake, a major artery through Marin County, providing access to much of central Marin, including countless medical establishments in Greenbrae itself. Do you wish to also know about when I’ve visited doctors there? Soon the sheriff will be able to tell you.
None of this information is something the police are entitled to know. The privacy the United States Constitution affords us, the right to be secure in our papers and effects, restricts this sort of incursion into the public’s private lives absent probable cause that a crime has already been committed, so that people can be free to go about their lives, unchilled by the prospect of agents of the state knowing their business without any justification. The sheriff’s department alleges in its paperwork that county counsel has reviewed the proposal, but nothing submitted reflects any coherent practical or legal argument that it is constitutionally appropriate or possible for you to allow the sheriff’s department to invade every resident’s privacy as they so propose. In fact, all of the paperwork submitted is entirely self-serving and supplied by the very government agency that seeks to have this additional power over civilian lives. Nothing more neutral or independent has been provided to the board by any other state or county agency, nor by any civil society organization, which could provide you with the information you need to recognize the immense cost of the proposal in forms other than purely financial.
Granted, I may have little to fear from the cameras the sheriff wants to install in the Oak Manor neighborhood, as I’m rarely there. But the people living in the neighborhood surely go out and about, so soon you will have information about their comings and goings.
However, the sheriff also proposes to have these cameras on the streets approaching the Marin County Civic Center, surrounding the heart of local county government with a moat of surveillance, which means that the sheriff will be able to track every single person who approaches the building for any reason, including to attend public hearings (such as this one), to petition their local government for any reason a resident might need to seek assistance from their local government, or to register to vote. Personally I think it has been more than 30 days since my last visit to this famous Frank Lloyd Wright-designed building (which also contains a public library), but when I make my next visit, the sheriff will know.
The sheriff’s proposal says it is to help it police against property crime. And no one likes crime. But crime is not the only harm the public can experience. The cameras themselves pose their own, and it is incumbent on this board to recognize how damaging the oversight police are demanding over our lives is in itself. The reason people worry about equity impact is that there is a very real harm done to the public when they cannot live lives free from police scrutiny. But that effect reaches everyone in the public, not just those the police have a known habit of unduly targeting. With these ubiquitous cameras, every single person in Marin County will have the details of their lives available for the police to scrutinize. No pallor can protect anyone from the harm that follows from having their lives recorded in police-controlled ledgers, because that recording itself is a harm everyone must now incur.
It will be incurred by everyone traveling to central and western Marin on Lucas Valley Road. I last was there more than 30 days ago, on October 22, but the next time I try to attend a concert in Nicasio (or go biking, or go buy cheese) you will have record of it.
And for no good reason. The deterrence effect of these cameras the police tout is overstated. License plate cameras do not magically prevent crime. Crime still happens. Sometimes serious crimes. But instead of looking at how ineffective cameras are, the lesson we’ve learned from the local towns that have already inflicted cameras on us is that their inherent inability to prevent crime tends to just lead to calls for more cameras, because the police’s appetite to know the details of people’s lives is insatiable. They won’t stop here, asking for just these cameras. When crime inevitably happens they will want more: more cameras, in more places, and maybe even other tools that will help them know more about the private details of the lives of the people in this county. After all, if one invests in the fallacy that these cameras will help anything, then there is no limiting principle to think that more such tools won’t similarly be warranted, until there is no place anywhere in Marin where people can go about their lives without being watched by the government.
At least I won’t personally have to worry much about the cameras proposed for the Atherton area near Highway 37, because now that I’ve relocated to southern Marin I’m seldom there. But I used to be there often, and if you’d had the cameras hung then, you’d know.
And none of the hand-waving phrases contained within the proposal should assure you that no real concerns are raised. For instance, it uses words like “encryption,” which is indeed important, but is not itself a magic solution for every problem, and which is useless as a defense of the public’s interests when the police still hold the key to all the data. The proposal also includes language saying that the sheriff will own the data, as if that provides any sort of assurance to the public when it is their data that the police want to own. Don’t be fooled by the platitudes; instead recognize them as the smoke and mirrors being deployed to distract from the serious issues license plate cameras raise (and the profit motive of the vendor, who has no reason to care as long as they are paid).
We all will feel the effects, even for cameras hung in places where we visit less frequently. We are still a community, and people come to us as much as we go to them. For instance, I still have friends in the Novato area, and I’m sure you’d be interested to know that I visited one in the Indian Valley area where you plan to have cameras on 11/11, as well as 10/28.
This board should stand up for the rights of its constituents and vote to reject the sheriff’s proposal to install cameras anywhere in the county. But at minimum it should delay any action until there can be greater public input with ample notice. This proposal has been treated like a ministerial budgetary item few in the county would care about evaluating. Indeed the fiscal impact may be relatively minor, although if the sheriff’s department really believes it has money to burn on cameras perhaps that money could be reclaimed for the general budget and better spent on, say, a guidance counselor or other public resources that might actually deter criminality.
But its overall impact is enormous, affecting the lives of every single person in the county. This requires everyone to be able to carefully scrutinize what this board plans to do to them if it were to approve the proposal. Yet we can’t; this proposal is getting slipped past us without any meaningful effort to call attention to it commensurate with its impact. The “staff report” item in the agenda, which was written not by county staff but by the sheriff’s department, is itself dated as of tomorrow, which calls into question whether approval could even be in compliance with SB 34, which requires the agency to provide adequate notice to the public before installing these cameras, since the report itself does not even legally exist until the day it appears on the agenda and after the deadline for written comments at 3:30pm on November 27.
The county is certainly capable of providing more conspicuous notice, as it does every time it wants the public to vote on one of its propositions. And for something this serious, similar advertising efforts are warranted. After all, if this board is inclined to allow the police so much oversight of our lives, then it should do everything possible to ensure that the public is able to provide meaningful oversight of its choices so that we can hold those who make them accountable.
I urge you to vote no on the proposal.
Filed Under: alpr, license plate cameras, license plates, marin, marin county, surveillance
__________________________________________________________________________________
Fifth Circuit Says Siccing A Police Dog On A Suicidal Person Is Excessive Force, Still Grants Immunity To Officer
from the harming-someone-to-protect-them-from-self-harm dept
I don’t know what it is about US law enforcement culture, but far too often officers deployed to help people choose to hurt people instead. When people are suffering mental distress, cops become first responders. But unlike other first responders, like EMTs or firefighters, their desire to harm tends to surpass any desire to help.
This is why so many suicidal people are helped to death by officers who seemingly view any resistance or hesitation as a threat to their safety, rather than just the normal responses of a person already dealing with a great deal of mental duress. And that’s why some cities are taking these interactions out of cops’ hands by choosing to respond to mental health calls with people better trained to handle these situations, like actual mental health professionals.
But, because cops are still the most common first response to distress calls, things like this continue to happen. People who are suicidal or threatening to engage in self-harm are being killed or harmed by responding officers, rather than being handled responsibly and diverted from their plans to hurt themselves.
Just because cops aren’t trained to handle these kinds of calls doesn’t let them off the hook for deploying excessive force when they just as easily could have deployed almost no force at all. This decision [PDF], handed down by the most cop-friendly appellate circuit in the nation, says it’s unconstitutional to brutalize someone clearly in need of care and compassion. That it was their dog that did all the damage doesn’t change the constitutional equation. (h/t Short Circuit)
It starts with a call for help.
At 1:39 a.m. on July 5, 2018, Plaintiff-Appellant Olivia Sligh’s partner called 911 to report that Sligh was suicidal, had hurt herself, and had left her house on foot. Sligh’s partner requested an ambulance, and he indicated that Sligh was unarmed and not a violent person.
Somehow, this is how the so-called first responders first responded to this 911 call:
The Montgomery County Sheriff’s Office notified the City of Conroe of the emergency medical call and requested a canine officer if available.
The hell? How does someone field a mental distress call and decide this is the sort of thing that requires the use of a police dog? That goes unexplained in this decision, which only notes the facts, as well as those that are still disputed. Tyson Sutton of the Conroe City PD brought the attack dog. He was joined by Alexis Montes, a deputy employed by the Montgomery County Sheriff’s Office.
Here’s what happened after the officers and their shared dog located the suicidal, already injured woman:
The complaint alleges that when the officers located Sligh, Sutton shined a flashlight in Sligh’s face as Thor barked and lunged at her. Montes grabbed Sligh, who pulled away. Sutton then sicced Thor on Sligh, and Thor initially bit Sligh in the upper thigh. Sligh sat down, and Sutton continued to direct Thor to bite Sligh on the rear of her upper leg and her ankle. Sligh alleges that “Sutton used the dog to purposively attack and bite” her; that “Montes did not intervene in the multiple dog bites by words or actions even though the attack lasted one minute and some seconds”; and that she never resisted seizure, tried to escape, or assaulted Montes.
The court notes the incident was captured by Officer Sutton’s body camera. That footage supports most of Sligh’s allegations. However, it’s not exactly true Sligh “never resisted seizure.” The recording shows Sligh slapping away Deputy Montes’ arms as the deputy tried to subdue her. Following a short struggle (11 seconds according to the recording), Officer Sutton instructed Thor to attack Sligh.
Sutton then releases Thor with a bite command, and Thor bites Sligh as Sutton commands her to get on the ground. Sligh falls to a seated position on the ground and cries out in pain. Beginning eight seconds after the bite command, Sutton repeatedly commands Thor to release Sligh, but Thor does not immediately comply. Sligh begins lying on her side. 36 seconds after giving the first bite command, Sutton grabs and pulls Thor’s collar. Thor releases Sligh around 64 seconds following the first bite command. While Thor was biting Sligh, Montes reaches to control Sligh’s hands and commands her to put her hands behind her back. Montes handcuffs Sligh after Thor’s release.
Cops love to talk about their own “training and expertise.” They also consider their dogs to be “officers.” They also tend to talk up how well-trained their four-legged “officers” are. And then things like this happen: a cop orders his cop dog to release and it does not comply for nearly an entire minute, and not until Officer Sutton placed his hand on the dog’s collar. Nearly a minute of non-compliance — if performed by an arrestee — would be considered a criminal act (resisting arrest) in and of itself, even without adding in the injuries inflicted by the unresponsive police dog (assault). But when a cop dog does it, it’s just good police work. When a citizen does it, it’s multiple criminal acts.
What law enforcement officers refuse to believe is that this is a violation of rights. But it definitely is exactly that, which only makes the ultimate conclusion by this court even more inexplicable.
Using long-standing Supreme Court precedent (most specifically, 1989’s Graham v. Connor decision), the Fifth Circuit says law enforcement is wrong on two of three factors. Contrary to the assertions of non-party Michael Lee Aday, two out of three is actually pretty bad.
The first Graham factor is the severity of the suspected crime. These officers had been sent to help a person suspected of [checks lawsuit] being suicidal and already injured. So, that’s a complete loss for the responding officers, since there’s no crime, which means there’s no justification for nearly any use of force whatsoever.
The second factor doesn’t help the officers, either. The officers seemed to realize this, which is why their defense lies somewhere between incoherent and insane.
The second Graham factor, whether the suspect posed an immediate threat to the safety of officers or others, also weighs in Sligh’s favor. Sligh may have posed a safety threat to herself, as she had cut herself and was potentially suicidal, but the officers received no indication that Sligh was violent, armed, or otherwise posed a threat to others. Defendants-Appellees’ contention that the employment of a dog bite was justified due to Sligh’s immediate safety threat to herself is unpersuasive in this case. Sligh did not appear to be engaging in self-harm during her interactions with the officers, which undermines Defendants-Appellees’ argument that Sligh posed an “immediate” safety threat to herself that warranted such a dangerous use of force. It is also difficult to see how Sligh’s self-harm justifies the employment of a dog bite, which will inevitably lead to more punctures or lacerations.
As if claiming that injuring someone to “protect” them from injuring themselves wasn’t ridiculous enough, the officers also leaned hard on “officer safety,” just as ridiculously claiming Sligh might have been armed and dangerous (to them in addition to herself). The court says this argument isn’t any better than the one dismantled above.
Defendants-Appellees contend that Sutton could not determine whether Sligh had a weapon in her clothing, which weighs in favor of employing the dog bite. But it is difficult to imagine that Sutton would have believed that Sligh, who was wearing a tank top and women’s athletic shorts, was armed when no weapon was produced during the physical struggle between Sligh and Montes. Furthermore, because the officers did not suspect that Sligh was violent or had committed a crime, the fact that she was unsearched is not enough to permit a reasonable officer to assume that she posed an immediate threat.
The third Graham factor — whether or not the suspect resisted arrest or was otherwise non-compliant — weighs in favor of the cops… but just barely. Sure, Sligh “slapped away” Deputy Montes’ initial attempt to subdue her and engaged in an 11-second “struggle.” But that mild resistance — especially from someone suspected of nothing more than being in the midst of a mental health crisis — isn’t enough to justify a minute-long attack by a police dog.
Without any further attempts to subdue Sligh without the use of a dog bite, and without providing Sligh any warning that she may be subjected to a dog bite if she did not comply, Sutton sicced a dog on a woman who (1) was not suspected of any crime; (2) did not pose an immediate safety threat to officers or others; and (3) was in need of emergency medical intervention due to self-harm. Furthermore, Sligh—surrounded by a fence and thick foliage— was not attempting to flee the officers. Employing a dog bite under these circumstances arguably constituted an unreasonable seizure in violation of Sligh’s Fourth Amendment rights.
“Arguably” is the key here. It doesn’t necessarily mean Sligh will be able to convince a jury her rights were violated by this dog attack. But, just as importantly, it means Officer Sutton may be similarly unlikely to convince a jury this dog deployment wasn’t excessive force. That’s why we have juries. And that’s why cops shouldn’t be allowed to exit lawsuits just because it isn’t initially clear who will be considered more credible by a jury of their peers.
But that’s not how the law works in federal courts, thanks in large part to the Supreme Court continually rewriting the immunity ground rules in favor of accused cops. So, even though the Fifth Circuit says this is a clear violation of rights, it’s not so clear a cop could see it.
Because the present case involves an application of unintentionally prolonged force against an actively resisting plaintiff, we do not find that Sutton’s violation of Sligh’s constitutional right was clearly established. Sutton is therefore entitled to qualified immunity.
And that’s how it stands: sending poorly trained dogs to attack people not suspected of criminal activity is still acceptable in the Fifth Circuit. It wasn’t “clearly established” before and this unpublished opinion guarantees the court won’t be creating any new bright line for excessive force determinations. Sligh will lose. The cops will win. And no one in power will learn anything from this experience.
Filed Under: alexis montes, dogs, excessive force, olivia sligh, police, police dogs, qualified immunity, tyson sutton
from the we-are-not-serious-people dept
We’ve noted how the GOP’s obsession with TikTok is… weird and superficial. Guys like Ted Cruz or Brendan Carr will suffer absolute embolisms about TikTok (and TikTok only) to get on cable news, where they’ll be portrayed as good faith privacy reformers, while simultaneously refusing to pass a privacy law or regulate dodgy data brokers (who routinely sell consumer data to everyone, including Chinese intelligence).
At the same time, the GOP’s “solution” to TikTok is somehow even more superficial and stupid: a blanket ban that doesn’t work. You’ll recall how it took college kids all of forty seconds to realize they could bypass the Montana TikTok University ban by simply switching their phone from Wi-Fi to cellular, something the GOP brain trust still hasn’t gotten its collective noggin around.
Case in point: Senator Ted Cruz and friends recently proposed a new law that would cut schools off from FCC broadband funding if they refuse to ban social media platforms:
“Led by Sens. Ted Cruz (R-Tex.), Ted Budd (R-N.C.) and Shelley Moore Capito (R-W.Va.), the bill would require that schools prohibit youths from using social media on their networks to be eligible for the E-Rate program, which provides lower prices for internet access.”
The E-Rate program is an essential cornerstone of ensuring affordable broadband access to schools, and tethering it to your weird, half-assed obsession with TikTok is just kind of gross and inane. As is the assumption that modern social media networks don’t provide any educational or research value.
Again, there’s no thought or realization that such bans are trivial to bypass by simply switching a phone from the school Wi-Fi network to cellular. There’s no real coherent understanding that these folks are wasting legislative time and resources on a plan that won’t work, for a problem they don’t understand.
That’s not to say that TikTok doesn’t play fast and loose with consumer privacy in hugely problematic ways. But you could ban TikTok nationally, immediately in a fireworks of patriotic splendor, and still not fix the actual underlying problems that create and embolden TikTok.
Without meaningful privacy laws, there’s no shortage of companies that are every bit as dodgy on privacy as TikTok. Without data broker regulation, there’s a massive industry of barely regulated dodgy data middlemen selling elaborate profiles of your daily habits — that in many instances go well beyond the kind of data TikTok is collecting — to any idiot with a nickel. The government’s apathy stems from, in part, its abuse of this lax oversight to help it avoid having to get warrants.
The GOP pretends to be concerned that TikTok is being exploited to spread propaganda, but the party very clearly supports propaganda (if it’s theirs). The GOP pretends to be concerned about consumer privacy, but opposes even the most basic internet privacy law or efforts to regulate data brokers, who are every bit as compromised, far reaching, and ethically dubious as TikTok ever was.
I maintain a lion’s share of the GOP obsession with TikTok is rank xenophobia (“it’s simply outrageous that non-white people from overseas created a popular alternative to U.S. social media networks, sending money we’re owed by divine decree overseas!”). Another big chunk is Facebook lobbyists seeding TikTok hysteria among gullible GOP natsec rubes for anti-competitive reasons.
The end result is, as usual for the GOP, a kind of simulacrum of competent governance that wastes time and money creating impractical non-solutions for problems legislators don’t understand. Or do understand, but don’t actually want to fix (obstructionism), preferring instead to distract you.
Filed Under: ban, broadband, e-rate, education, fcc, obstructionism, privacy, propaganda, school funding, social media ban, tiktok
Companies: tiktok
Facial Recognition Tech Is Encouraging Cops To Ignore The Best Suspects In Favor Of The *Easiest* Suspects
from the efficiency-over-accuracy dept
Facial recognition tech has slowly gone mainstream over the past half-decade. Not just in acceptance, but also in opposition. Kashmir Hill exposed perhaps the worst purveyor of this tech — Clearview — with a series of articles exposing the company’s tactics as well as its far right backers.
Clearview has managed to become a pariah in a tech field mostly populated by would-be pariahs. Facial recognition tech only works for those who believe it works. For everyone else, it’s an existential threat to their freedom. For a few people (most of them located in the Detroit, Michigan area) the threat is very real.
Facial recognition tech does its best work when it’s trained properly. And most training, unfortunately, involves the people least likely to find themselves harassed by law enforcement: white males. None other than the National Institute of Standards and Technology (NIST) recognized this fact back in 2019.
This is from the NIST’s study of 189 facial recognition tech algorithms:
Asian and African American people were up to 100 times more likely to be misidentified than white men, depending on the particular algorithm and type of search. Native Americans had the highest false-positive rate of all ethnicities, according to the study, which found that systems varied widely in their accuracy.
The faces of African American women were falsely identified more often in the kinds of searches used by police investigators where an image is compared to thousands or millions of others in hopes of identifying a suspect.
Not white or male? Good luck. You’re screwed. But for those who’ve benefited the most from this nation’s predilection for catering to/electing certain people, the status remains quo.
Middle-aged white men generally benefited from the highest accuracy rates.
And that’s just with algorithms and databases compiled in a somewhat ethical fashion. Clearview, however (the subject of Kashmir Hill’s latest book), chose to go a different route. It scrapes any data not locked down from the internet and sells search access to government agencies and private customers anywhere it can get away with it. Clearview’s market reach continues to be trimmed by litigation or eviction notices from European governments, but the company continues to remain a large part of the facial recognition scene.
Mainstream attention isn’t helping these tech purveyors. The latest large-scale journalistic outlet to call bullshit on the tech is the New Yorker, with a withering examination of the tech performed by Eyal Press.
The article opens with the description of a wrongful arrest by Maryland Transit Administration Police — one triggered by facial recognition tech that led the MTAP to believe a black man in his mid-fifties, Alonzo Sawyer, was involved in the physical assault of a female cab driver. This assumption was backed by a records check — one that showed nothing more than a handful of traffic violations. Sawyer was roughed up by law enforcement, arrested, denied bail and… ultimately cleared of all charges.
The problem isn’t necessarily the use of facial recognition tech to identify suspects, although — given the tech’s known issues with accuracy when it doesn’t involve middle-aged white males — this is definitely still a problem. The problem is cops are treating this tech as the beginning, middle, and end of investigations, even though facial recognition tech suppliers always caution their law enforcement customers that matches should be considered the starting point for investigations, not something equivalent to probable cause for an arrest.
The reality of day-to-day facial recognition tech use undermines law enforcement’s arguments that it’s nothing more than part of an extensive mesh network of investigative tools — one of the many excuses it uses to keep documents out of the hands of public records requesters.
Law-enforcement officials argue that they aren’t obligated to disclose such information because, in theory at least, facial-recognition searches are being used only to generate leads for a fuller investigation, and do not alone serve as probable cause for making an arrest. Yet, in a striking number of the wrongful arrests that have been documented, the searches represented virtually the entire investigation. No other evidence seemed to link Randal Reid, who lives in Georgia, to the thefts in Louisiana, a state he had never even visited. No investigator from the Detroit police checked the location data on Robert Williams’s phone to verify whether he had been in the store on the day that he allegedly robbed it. The police did consult a security contractor, who reviewed surveillance video of the shoplifting incident and then chose Williams from a photo lineup of six people. But the security contractor had not been in the store when the incident occurred and had never seen Williams in person.
Advocates of this tech — a group mainly composed of facial recognition tech purveyors and their law enforcement customers — claim this problem isn’t as bad as it looks. Most proponents claim AI matches are backstopped by human beings, reducing the risk that someone — especially a person of color — will be misidentified and subjected to the sort of things the people listed above have been subjected to.
But it’s clear the human backstops aren’t always following the guidelines laid down by tech providers, which strongly caution against treating matches like probable cause. For those that do bother to backstop matches with human beings, their confidence that they’re able to determine match errors is misplaced. They’re often no better than the software and hardware they’re asked to oversee, not just because they’re wrong about their own innate ability to recognize faces, but because they’re, for the most part, given little to no training before being asked to vet AI judgment calls.
If comparing and identifying unfamiliar faces were tasks that human beings could easily master, the lack of training might not be cause for concern. But, in one study in which participants were asked to identify someone from a photographic pool of suspects, the error rate was as high as thirty per cent. And the study used mainly high-quality images of people in straightforward poses—a luxury that law-enforcement agents examining images extracted from grainy surveillance video usually don’t have. Studies using low-quality images have resulted in even higher error rates. You might assume that professionals with experience performing forensic face examinations would be less likely to misidentify someone, but this isn’t the case. A study comparing passport officials with college students found that the officers performed as poorly as the students.
This is how errors get compounded. Most people tend to believe they’re smarter than computers, even if they have no rational reason for believing this. And other people assume tech is infallible, simply because they believe the people who created the tech are smarter than they are. In both cases, the subjective estimation of skill level is off. Fallible tech doesn’t get better when it’s backstopped by fallible humans, especially humans most government agencies feel don’t need any specific training before operating facial recognition systems.
Those incorrect assumptions result in things like this:
The photograph of Robert Williams that led to his arrest for robbing the Detroit store came from an old driver’s license. The analyst at the Michigan State Police who conducted the search didn’t bother to check whether the license had expired—it had. Nor did she appear to consider why more recent pictures of Williams, which were also in the database, didn’t turn up as candidates. The dated picture of Williams was only the ninth most likely match for the probe photograph, which was obtained from surveillance video of the incident. But the analyst who ran the search did a morphological assessment of Williams’s face, including the shape of his nostrils, and found that his was the most similar to the suspect’s. Two other algorithms were then run. In one of them, which returned two hundred and forty-three results, Williams wasn’t even on the candidate list. In the other—of an F.B.I. database—the probe photograph generated no results at all.
A whole lot of AI-generated information strongly suggested Robert Williams wasn’t the thief. And all of that was ignored by the human backstop, who decided the initial match was all that mattered.
This is not just a law enforcement problem. Some of the blame lies with legislators, who — outside of a few pockets of codified resistance — have been unwilling to step in to regulate the tech tools used by law enforcement. A few bans and moratoriums have been enacted in a handful of US cities, but for most of the nation, facial recognition tech use by government agencies is still the Wild West.
In addition, the tech firms providing cops with facial recognition AI aren’t content to simply provide face-matching software. In the case of the AI used to “identify” Alonzo Sawyer, the provider — DataWorks Plus — allows its cop customers to make the description fit someone who doesn’t necessarily fit the description.
DataWorks Plus notes on its Web site that probe images fed into its software “can be edited using pose correction, light normalization, rotation, cropping.”
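To make the concern concrete: the edits DataWorks Plus lists (pose correction, light normalization, rotation, cropping) can each meaningfully change what a matching algorithm sees. Below is a purely illustrative sketch of what such edits look like on a grayscale image stored as rows of 0–255 pixel values. This is not DataWorks Plus code — their pipeline is proprietary — just a minimal demonstration of the operation types their site names.

```python
# Illustrative sketch (NOT DataWorks Plus's actual, proprietary pipeline) of
# the kinds of probe-image edits described: rotation, cropping, and a crude
# form of light normalization, on a grayscale image as a list of pixel rows.

def rotate_90(img):
    """Rotate the image 90 degrees clockwise."""
    return [list(col) for col in zip(*img[::-1])]

def crop(img, top, left, height, width):
    """Crop a height x width window starting at (top, left)."""
    return [row[left:left + width] for row in img[top:top + height]]

def normalize_light(img):
    """Stretch pixel values to the full 0-255 range (crude light normalization)."""
    lo = min(min(row) for row in img)
    hi = max(max(row) for row in img)
    if hi == lo:  # flat image: nothing to stretch
        return [[128] * len(row) for row in img]
    scale = 255 / (hi - lo)
    return [[round((p - lo) * scale) for p in row] for row in img]
```

The point isn’t that these operations are exotic — they’re standard image processing. It’s that every one of them alters the probe image before it’s compared against the database, which is exactly why treating the resulting “match” as anything like probable cause is dangerous.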
There’s not a lot of positive stuff to draw from this coverage, other than the fact that if large news agencies are asking questions, it makes it much more difficult for government officials to pretend there’s nothing wrong with facial recognition tech. But that’s hardly heartening news. For most tech providers, the fact that they have paying customers is enough to allow them to ignore the long-term societal effects of pushing algorithms tainted by bias. For law enforcement agencies, the existence of arrests and successful prosecutions initiated by facial recognition tech is all the reason they need to keep using it, no matter how often it results in wrongful arrests or future civil rights lawsuits.
For the rest of us, it just means we’re at the mercy of more than just the government. Our freedom is in the hands of unproven, often erroneous tech. And until someone with actual power cares enough about that fact, we’ll remain the government’s lab rats, expected to suffer the consequences until the bugs in the system can be worked out.
Filed Under: facial recognition, wrongful arrests
Companies: dataworks plus
Daily Deal: The Award-Winning Luminar Neo Bundle
from the good-deals-on-cool-stuff dept
Luminar Neo is an easy-to-use photo editing software that empowers photography lovers to express the beauty they imagined using innovative AI-driven tools. Luminar Neo was built from the ground up to be different from previous Luminar editors. It keeps your favorite LuminarAI tools and expands your arsenal with more state-of-the-art technologies and important changes at its core. Meanwhile, the recognizable Luminar design is retained, making Neo simple to use and fun to explore. The bundle comes with an introductory course and 6 add-ons. It’s on sale for $149.97.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Filed Under: daily deal
Elon’s Censorial Lawsuit Against Media Matters Inspiring Many More People To Find ExTwitter Ads On Awful Content
from the elon-should-learn-about-the-streisand-effect dept
We’ve already discussed the extremely censorial nature of ExTwitter’s lawsuit against Media Matters for accurately describing ads from major brands that appeared next to explicitly neoNazi content. The lawsuit outright admits that Media Matters did, in fact, see those ads next to that content. Its main complaint is that Elon is mad because he thinks Media Matters implied that such ads regularly appear next to such content, when (according to him) they only rarely appear next to that content — content which he admits the site mostly allows.
Of course, there are a few rather large problems with all of this. The first is that the lawsuit admits that what Media Matters observed and said is truthful. The second is that while Elon and his fans keep insisting that the problem is about how often those ads appear next to such content, Media Matters never made any such claim about how frequently such ads showed up, and as IBM noted in pulling its ads, it wants a zero tolerance policy on its ads showing up next to Nazi content, meaning that even if it’s true that only Media Matters employees saw that content, that’s still one too many people.
But there’s a bigger problem: in making a big deal out of this and filing one of the worst SLAPP suits I’ve ever seen, all while claiming that Media Matters “manipulated” things (even as the lawsuit admits that it did no such thing), it is only begging more people to go looking for ads appearing next to terrible content.
And they’re finding them. Easily.
As the DailyDot pointed out, a bunch of users started looking around and found that ads were being served next to the tag #HeilHitler and “killjews” among other neo-Nazi content and accounts. Avi Bueno kicked things off, noting that he didn’t need to do any of the things the lawsuit accuses Media Matters of doing:
Of course, lots of others found similar things, again without any sort of “manipulation,” and, if anything, showing that it was possible to see big name brands show up in ads next to vile content in a manner that is even easier to find than Media Matters ever implied.
Some users started calling for the #ElonHitlerChallenge, asking users to search the hashtag #heilhitler and screenshot the ads they found:
Bizarrely, a bunch of people found that if you searched that hashtag, ExTwitter recommended you follow the fast food chain Jack in the Box.
On Sunday evening I tested this, and it’s true that if you do a search on #heilhitler, and then see who are the “people” it recommends you follow, it lists two authenticated accounts: Jack in the Box and Linda Yaccarino, and then a bunch of accounts with “HeilHitler” either in their username or display name. Cool cool.
Meanwhile, if Musk thought that his SLAPP suits against the Center for Countering Digital Hate and Media Matters were somehow going to stop organizations from looking to see if big time company ads are showing up next to questionable content, he seems to have predicted poorly.
A few days after the lawsuit against Media Matters, NewsGuard released a report looking at ads that appeared “below 30 viral tweets that contained false or egregiously misleading information” regarding the Israeli/Hamas conflict. And, well, it’s not good news for companies that believe in trying to avoid having their ads appear next to nonsense.
These 30 viral tweets were posted by ten of X’s worst purveyors of Israel-Hamas war-related misinformation; these accounts have previously been identified by NewsGuard as repeat spreaders of misinformation about the conflict. These 30 tweets have cumulatively reached an audience of over 92 million viewers, according to X data. On average, each tweet was seen by 3 million people.
A list of the 30 tweets and the 10 accounts used in NewsGuard’s analysis is available here.
The 30 tweets advanced some of the most egregious false or misleading claims about the war, which NewsGuard had previously debunked in its Misinformation Fingerprints database of the most significant false and misleading claims spreading online. These include that the Oct. 7, 2023, Hamas attack against Israel was a “false flag” and that CNN staged footage of an October 2023 rocket attack on a news crew in Israel. Half of the tweets (15) were flagged with a fact-check by Community Notes, X’s crowd-sourced fact-checking feature, which under the X policy would have made them ineligible for advertising revenue. However, the other half did not feature a Community Note. Ads for major brands, such as Pizza Hut, Airbnb, Microsoft, Paramount, and Oracle, were found by NewsGuard on posts with and without a Community Note (more on this below).
In total, NewsGuard analysts cumulatively identified 200 ads from 86 major brands, nonprofits, educational institutions, and governments that appeared in the feeds below 24 of the 30 tweets containing false or egregiously misleading claims about the Israel-Hamas war. The other six tweets did not feature advertisements.
As NewsGuard notes, the accounts in question appear to pass the threshold to make money from the ads on their posts:
It is worth noting that to be eligible for X’s ad revenue sharing, account holders must meet three specific criteria: they must be subscribers to X Premium ($8 per month), have garnered at least five million organic impressions across their posts in the past three months, and have a minimum of 500 followers. Each of the 10 super-spreader accounts NewsGuard analyzed appears to fit those criteria.
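The three eligibility criteria NewsGuard lists are simple enough to express as a check. The sketch below is purely illustrative — the field names are hypothetical, not X’s actual API — but it captures the stated thresholds: an X Premium subscription, at least five million organic impressions over the past three months, and at least 500 followers.

```python
# Hypothetical check of X's stated ad-revenue-sharing eligibility criteria,
# as summarized by NewsGuard. The dict keys are illustrative placeholders,
# not fields from any real X API.

def eligible_for_revenue_sharing(account: dict) -> bool:
    return (
        account.get("is_premium", False)                          # X Premium subscriber
        and account.get("organic_impressions_90d", 0) >= 5_000_000  # 5M impressions / 3 months
        and account.get("followers", 0) >= 500                    # minimum follower count
    )
```

The takeaway is that these are purely volume-and-payment thresholds: nothing in the criteria considers what an account actually posts, which is how accounts NewsGuard identifies as misinformation super-spreaders can still qualify.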
Hell, NewsGuard even found that the FBI is paying for ads on ExTwitter, and they’re showing up next to nonsense:
For example, NewsGuard found an ad for the FBI on a Nov. 9, 2023, post from Jackson Hinkle that claimed a video showed an Israeli military helicopter firing on its own citizens. The post did not contain a Community Note and had been viewed more than 1.7 million times as of Nov. 20.
This seems especially noteworthy given the false Twitter Files claim (promoted by Elon Musk) that any time the FBI gives a company money, it’s for “censorship.” In that case, the FBI reimbursed Twitter for information lookups, which is required under the law.
Either way, good job, Elon: by filing the world’s worst SLAPP suit against Media Matters and insisting that their report about big name brands appearing next to awful content was “manipulated,” you’ve made sure that lots of people tested that claim, and found that it was quite easy to see big brand ads next to terrible content.
Filed Under: ads, brand safety, elon musk, hate, misinformation, neonazis
Companies: media matters, newsguard, twitter, x
California Activists Say State Isn’t Being Transparent About How Billions In Broadband Subsidies Are Being Spent
from the sorry-that's-a-state-secret dept
Two years ago the state of California unveiled a major broadband plan that, among other things, aims to spend $3.5 billion to create a massive, open access “middle mile” fiber network in a bid to boost competition. It’s part of a broader quest to make broadband both more affordable and more competitive (see our Copia report from last year discussing the benefits of open access fiber).
Leveraging COVID recovery and billions in looming infrastructure subsidies, the plan also involved spending another $2 billion on “last mile” broadband connections to folks’ homes. And millions more on digital equity training and equipment. The whole plan is immensely ambitious and has huge potential to transform broadband competition in California.
But there’s trouble in paradise. Earlier this year, the state started making surprise cuts to the ambitious plan, most of which impacted low income and minority neighborhoods, much to the chagrin of activists. While Governor Newsom and some news outlets like the San Francisco Chronicle claimed the cuts were reversed after backlash, I’ve found that’s not really true.
What really happened was the deployment was split into two phases. Phase 1 remains well funded, and phase 2 isn’t, with an expected state budget shortfall looming. A big chunk of the state’s promised deployment into low income and minority neighborhoods was shoved into phase 2. With zero transparency into what was cut or why, or what shaped the state’s decision-making process.
I’ve spent the last few weeks talking to local equity activists who say the lack of any transparency is a huge problem.
The EFF’s Chao Liu, for example, wrote a blog post back in September noting that the California CPUC and Department of Technology (CDT) have long utilized inaccurate maps to determine who gets funding. Maps that tend to downplay competition gaps and keep changing during deployment, without any transparency into what’s being changed or why.
Despite major backlash this fall, Liu told me little has changed:
“The state has not adequately addressed the concerns. The original maps were not restored and the CPUC and CDT moved forward with their plans, signing contracts and disbursing funds based on the new, not as great maps. In response to an outcry from the local communities the Governor has promised to make a budget request to build out the sections that were cut. The big wrinkle in this is California is almost definitely headed into a steep budget deficit so making any ask to spend large chunks of money will be difficult.”
Similarly, Shayna Englin, Director of the Digital Equity Initiative at the California Community Foundation, told me the state has also doled out nearly $2 billion in contracts but has shared no meaningful insight into how that money is actually being spent:
“There are $1.8 billion in signed contracts,” Englin noted. “You have spent that money. What segments are covered by that? What are the terms and conditions? How this network is actually being built completely undermines the last mile projects that it’s supposed to be supporting.”
A lack of transparency isn’t just bad government, it works to undermine California’s goals. Without knowing the terms and technical specifics of existing grants and deployments, companies, municipalities, and reformers hoping to expand access to affordable broadband by piggybacking on planned middle mile deployments may struggle to develop their own proposals or nail down funding.
California’s lack of transparency in relation to minority neighborhood deployment is particularly troubling, especially given the history of government redlining that contributed to many of these infrastructure and broadband gaps in the first place. That’s before you get to the U.S. government’s long history of doling out billions in subsidies, tax breaks, and regulatory favors to regional monopolies for fiber upgrades that wind up, quite mysteriously, always somehow only half-deployed.
California’s approach was supposed to be different and historic. But without some meaningful transparency into the decision making process, local equity activists like Englin say there’s a growing number of red flags:
“This process is so broken it can’t just be papered over,” Englin said. “Throwing more money down the rabbit hole won’t fix what’s rotten at the bottom: bad data, no transparency, no accountability, and no community engagement.”
Throwing billions at regional telecom monopolies in exchange for half-completed networks has been a generational pastime for the U.S. government. California’s initiative is supposed to break with that tradition by focusing more intently on open access competition that challenges regional mono/duopolies. But without transparency and public engagement, there’s growing distrust that the project will be anywhere near as transformative as promised, especially for California’s low income and minority communities.
Filed Under: affordable access, broadband, california, digital divide, digital equity, high speed internet