Mozilla Drops New Privacy Partner After CEO Found Tethered To Data Brokers
from the this-is-why-we-can't-have-nice-things dept
Last month we noted how Mozilla had launched a new privacy protection tool dubbed Mozilla Monitor Plus. According to Mozilla, the new service scours the web for your personal information at over 190 sites where brokers sell information they’ve gathered from online sources like social media sites, apps, and browser trackers.
We noted that the tool was built on the back of services from a company named Onerep, which basically offers the same service. We also noted that the effort was likely a game of whac-a-mole given the sheer volume of data brokers and other companies trafficking in consumer data in a country too corrupt to pass even a baseline privacy law for the internet era.
Anyway, about that.
Before the ink was dry on the new deal, Mozilla announced they were severing their relationship with Onerep. Why? Security researcher Brian Krebs found the company had ties to the very privacy-violating companies and services it professes to be protecting users from.
More specifically, Krebs found that Onerep CEO and founder Dimitri Shelest had founded dozens of data-hoovering “people finder” type websites over the years, including Nuwber, a data broker with a checkered past that sells detailed consumer behavior, location, and other data gleaned from user devices.
Shelest was forced to issue an apology for not being more up front about his not insignificant role in an industry he professes to be protecting people from:
“I get it. My affiliation with a people search business may look odd from the outside. In truth, if I hadn’t taken that initial path with a deep dive into how people search sites work, Onerep wouldn’t have the best tech and team in the space. Still, I now appreciate that we did not make this more clear in the past and I’m aiming to do better in the future.”
Mozilla issued its own statement clarifying that no user data was put at risk, but that “the outside financial interests and activities of Onerep’s CEO do not align with our values.”
We’ve noted repeatedly how the U.S.’ corrupt refusal to pass a privacy law or regulate data brokers isn’t much of a laughing matter. The largely unregulated industry is now routinely caught up in dangerous scandals involving over-collecting consumer data, then selling access to any nitwit with a nickel (like, say, right wing activists targeting abortion clinic visitors with misinformation).
Mozilla, which publishes numerous excellent reports on consumer privacy, likely provided Onerep with a reputation boost. But this latest mess once again highlights how modern America’s online privacy problems aren’t something that can be fixed with an app. The rot runs deep, and fixing it requires passing a privacy law — and giving regulators the staff and resources they’ll need to enforce it.
Unfortunately when you have so many interconnected industries making a killing on the existing dysfunction (even apparently the ones claiming to help), meaningful reform is hard to come by.
Filed Under: behavioral data, congress, data brokers, dmitri shelest, location data, mozilla monitor, privacy, privacy law, security, surveillance
Companies: mozilla, nuwber, onerep
The Murthy Arguments Went So Poorly For The States That The FBI Feels Comfortable Talking To Social Media Companies Again
from the reconnecting dept
How badly did the arguments in the Murthy v. Missouri case go for the states last week? So badly that the FBI has already re-established communications with social media companies that had stopped in light of the earlier rulings in that case.
The FBI has resumed some of its efforts to share information with some American tech companies about foreign propagandists using their platforms after it ceased contact for more than half a year, multiple people familiar with the matter told NBC News.
The program, established during the Trump administration, briefed tech giants like Microsoft, Google and Meta when the U.S. intelligence community found evidence of covert influence operations using their products to mislead Americans. It was put on hold this summer in the wake of a lawsuit that accused the U.S. government of improperly pressuring tech companies about how to moderate their sites and an aggressive inquisition from the House Judiciary Committee and its chair, Jim Jordan, R-Ohio.
This is important for a few reasons. First, many people have widely misunderstood why and how the FBI was in touch with the social media companies throughout this discussion. I tend to agree with many people that contact between private companies and the FBI should be minimal and companies should always be wary of what the FBI wants.
But, there are times that it does make sense for the FBI to be in communication, which the oral arguments made clear.
Justice Amy Coney Barrett highlighted that there clearly are times when the FBI should be in contact with the platforms. Even the lawyer for the states, Louisiana’s Solicitor General Benjamin Aguiñaga, admitted that there were clearly cases where it would make sense for the FBI to send information to platforms, such as when there is a danger to someone, or a threat.
JUSTICE BARRETT: So the FBI can’t make -- do you know how often the FBI makes those kinds of calls?
MR. AGUINAGA: And that’s why -- and that’s why I have a backup answer, Your Honor, which is, if you think there needs to be more, the FBI absolutely can identify certain troubling situations like that for the platforms and let the platforms take action.
But, thanks to the rulings in the lower courts, the FBI had stopped any kind of contact along those lines, for fear of violating the injunction. This is from last November:
The FBI told the House Judiciary Committee that, since the court rulings, the bureau had discovered foreign influence campaigns on social media platforms but in some cases did not inform the companies about them because they were hamstrung by the new legal oversight, according to a congressional official.
Again, it’s true that how close the FBI is with companies matters. We don’t want another scenario like the one involving AT&T and the federal intelligence apparatus, where they literally had employees embedded with each other. But straight-up information sharing on foreign threats certainly seems reasonable.
And this is why it would still be nice if the Supreme Court drew the line in the proper place, distinguishing general information sharing about such threats from any sort of coerced pressure or threats directed at the social media companies regarding their policies or decision-making.
Filed Under: fbi, information sharing, jawboning, murthy v. missouri, social media
PA State Senator Piles Into The Chemtrail Ban Clown Car, Announces Dumbass Bill Of His Own
from the all-hail-the-new-wave-of-legislative-snakehandlers dept
Earlier this week, we covered a truly insane bill being pushed by the Tennessee State Senate that vowed to ban something that actually isn’t happening and, indeed, has never happened.
Stapled to a rote effort to ensure the state’s Air Pollution Control Board didn’t remain understaffed for more than 30 days was some batshit crazy performative horseshit. The rider attached to this bill was an amendment to the state code to basically ban any form of altering the atmosphere over Tennessee via chemicals emitted by airplanes.
SECTION 1. Tennessee Code Annotated, Title 68, Chapter 201, Part 1, is amended by adding the following as a new section:
The intentional injection, release, or dispersion, by any means, of chemicals, chemical compounds, substances, or apparatus within the borders of this state into the atmosphere with the express purpose of affecting temperature, weather, or the intensity of the sunlight is prohibited.
This is the result of two things: people seeing contrails and being convinced these are part of a nationwide conspiracy to engage in mind control or mass sterilization or whatever, and 25 Tennessee senators deciding it would be best to get out in front of any efforts to combat climate change via atmospheric, um, interference by airplanes.
To date, there have been no attempts, much less successful efforts, to counteract the negative effects of greenhouse gases by introducing other substances to the atmosphere. There has also never been any attempt to negatively affect people on the ground by deliberately introducing other chemicals to the atmosphere. “Chemtrails” have never existed. Condensation naturally formed by the movement of hot engines through cold air has been a fact of life since the introduction of aircraft capable of flying high enough to create this phenomenon.
But it’s not as though this was a serious effort to do anything more than allow certain politicians to beclown themselves for the short-lived adulation of the most ignorant members of their voting bases.
And so it is in Pennsylvania, where state senator Doug Mastriano has decided the government should be in the business of performatively blocking something that isn’t happening and, in fact, does not exist. (h/t Techdirt reader mvario)
His statement pretends he’s mostly interested in protecting residents from unintended side effects of cloud seeding efforts.
Soon, I will introduce legislation amending the PA Cloud Seeding Licensure Law to ensure the skies over Pennsylvania are protected well into the future.
Enshrined in Article 1, Section 27 of the Constitution of Pennsylvania is the people’s “right to clean air, pure water, and to the preservation of the natural, scenic, historic and esthetic values of the environment.”
In 1967, the General Assembly passed the PA Cloud Seeding Licensing Law to regulate weather modification experiments and create a Weather Modification Board within the Department of Agriculture. The law was inspired by unauthorized weather modification by the Blue Ridge Weather Modification Association in 1963 which used planes and ground generators to emit silver iodide into the air to suppress hail in Fulton and Franklin counties.
Recent developments and new technology have brought forward the need to modernize the 1967 law. According to the U.S Patent and Trade Office, over 100 new weather modification patents now exist that are owned by combination of Federal Government Agencies, Non-Governmental Organizations, and large multinational corporations.
Seems sensible until you start looking at what he’s actually saying and who Doug Mastriano actually is.
That’s what Peter Hall at the Pennsylvania Capital-Star did. Mastriano doesn’t care one way or another about cloud seeding efforts. He does, however, entertain chemtrail conspiracy theories and likely wishes to appear opposed to efforts designed to combat climate change because that plays well to the climate change deniers in his voter base.
The legislation would ban the release of substances within the borders of Pennsylvania to affect the temperature, weather or intensity of sunlight. It would mirror legislation that passed in the Tennessee Senate on Wednesday.
Mastriano, an election denier who lost his 2022 gubernatorial bid to Gov. Josh Shapiro, has made repeated references to the chemtrail conspiracy theory on social media.
In a November Facebook post with a photo of condensation trails in the sky above Chambersburg, Mastriano wrote, “I have legislation to stop this … Normal contrails dissolve / evaporate within 30-90 seconds.”
Shortly after his loss to Shapiro in 2022 Mastriano posted on Twitter — now called “X” — four photos of condensation trails above his district. In a reply to his own tweet, he linked to an article detailing a proposal to distribute reflective material in the atmosphere to reflect more of the sun’s energy back into space, implying the two are linked.
As for the supposed latent threat posed by unregulated use of… stuff… in the atmosphere for cloud seeding efforts, there’s nothing there to work with. Mastriano claims he wants to head off potentially dangerous “weather modification” efforts, something already overseen by the state’s Weather Modification Board and regulated by existing statutes. So far, this regulatory board has yet to regulate anything.
The department’s Weather Modification Board has never received a license application and has never investigated unauthorized cloud seeding, Deputy Press Secretary Jay Losiewicz said in an email.
And that makes Mastriano’s closing proclamation even more meaningless than it would be in the context of chemtrail conspiracy theorizing.
My legislation will amend the Cloud Seeding Licensure law to ban the injection, release, or dispersion of chemicals, chemical compounds, or substances within the borders of Pennsylvania into the atmosphere for purposes of affecting temperature, weather, or intensity of sunlight.
OK, election denier. Let’s make sure something that isn’t happening continues to not happen. And while Mastriano cites (but does not link to) a Wall Street Journal article about “Solar Radiation Mitigation” efforts being conducted in two other countries (Israel and Australia), his refusal to quote the article directly (much less give readers of his statement a chance to read it for themselves) conveniently leaves out the fact that the substances used were not “chemicals or chemical compounds” (although, really, pretty much everything is a “chemical compound”), but rather smoke and sea water.
Of course, none of this matters to Senator Mastriano and it certainly won’t matter to many of the people who elected him. It’s a voter bloc well-stocked with conspiracy theorists and people who’d rather see the entire world burn than share the road with bicyclists or hybrid owners without dragging their Truck Nutz and/or rolling their coal.
Filed Under: chemtrails, conspiracy theories, doug mastriano, pennsylvania
Daily Deal: Olden Golden Retro Mini Gramophone Bluetooth Speaker
from the good-deals-on-cool-stuff dept
Listening to your favorite tunes coming out of a mini gramophone-style Bluetooth speaker is fun, and this speaker is small enough to sit on your desk while you work or serve as part of the décor in your den while you enjoy Sunday brunch and lounge around the house. Enjoy Olden Golden super hits with this Retro Mini; it’s a small conversation piece that will get attention from every music lover in your circle of family and friends. It’s available in four colors and on sale for $40.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Filed Under: daily deal
California State Senator Pushes Bill To Remove Anonymity From Anyone Who Is Influential Online
from the someone-buy-padilla-a-'constitutional-lawmaking-for-dummies'-book dept
What the fuck is wrong with state lawmakers?
It seems that across the country, they cannot help but introduce the absolute craziest, most obviously unconstitutional bullshit, and then seem shocked when people suggest the bills are bad.
The latest comes from California state Senator Steve Padilla, who recently proposed a ridiculous bill, SB 1228, to end anonymity for “influential” accounts on social media. (I saw some people online confusing him with Alex Padilla, who is the US Senator from California, but they’re different people.)
This bill would require a large online platform, as defined, to seek to verify the name, telephone number, and email address of an influential user, as defined, by a means chosen by the large online platform and would require the platform to seek to verify the identity of a highly influential user, as defined, by asking to review the highly influential user’s government-issued identification.
This bill would require a large online platform to note on the profile page of an influential or highly influential user, in type at least as large and as visible as the user’s name, whether the user has been authenticated pursuant to those provisions, as prescribed, and would require the platform to attach to any post of an influential or highly influential user a notation that would be understood by a reasonable person as indicating that the user is authenticated or unauthenticated, as prescribed.
First off, this is unconstitutional. The First Amendment has been (rightly) read to protect anonymity in most cases — especially regarding election-related information. That’s the whole point of McIntyre v. Ohio. It’s difficult to know what Padilla is thinking, especially given his blatant admission that this bill seeks to target speech regarding elections. There are exceptions to the right to be anonymous, but they are limited to pretty specific scenarios. Cases like Dendrite (limited as precedent, though adopted by other courts) lay out a pretty strict test for de-anonymizing a person, and even then only after a plaintiff demonstrates to a court that the underlying speech is actionable under the law. Not, as in this bill, because the speech is “influential.”
Padilla’s bill recognizes none of that, and almost gleefully makes it clear that he is either ignorant of the legal precedents here, or he doesn’t care. As he lays out in his own press release about the bill, he wants platforms to “authenticate” users because he’s worried about misinformation online about elections (again, that’s exactly what the McIntyre case said you can’t target this way).
“Foreign adversaries hope to harness new and powerful technology to misinform and divide America this election cycle,” said Senator Steve Padilla. “Bad actors and foreign bots now have the ability to create fake videos and images and spread lies to millions at the touch of a button. We need to ensure our content platforms protect against the kind of malicious interference that we know is possible. Verifying the identities of accounts with large followings allows us to weed out those that seek to corrupt our information stream.”
That’s an understandable concern, but an unconstitutional remedy. Anonymous speech, especially political speech, is a hallmark of American freedom. Hell, the very Constitution that this law violates was adopted, in part, due to “influential” anonymous pamphlets.
The bill is weird in other ways as well. It seems to be trying to attack both anonymous influential users and AI-generated content in the same bill, and does so sloppily. It defines an “influential user” as someone for whom:
“Content authored, created, or shared by the user has been seen by more than 25,000 users over the lifetime of the accounts that they control or administer on the platform.”
This is odd on multiple levels. First, “over the lifetime of the account” means a ridiculously large number of accounts will, at some point in the future, reach that threshold. Basically, you make ONE SINGLE viral post, and the social media site has to collect your data and you can no longer be anonymous. Second, does Senator Padilla really think it’s wise to require social media sites to track “lifetime” views of content? Because that could be a bit of a privacy nightmare.
And then it adds in a weird AI component. This also counts as an “influential user”:
Accounts controlled or administered by the user have posted or sent more than 1,000 pieces of content, whether text, images, audio, or video, that are found to be 90 percent or more likely to contain content generated by artificial intelligence, as assessed by the platform using state-of-the-art tools and techniques for detecting AI-generated content.
So, first, posting 1,000 pieces of AI-generated content hardly makes an account “influential.” There are plenty of AI-posting bots with little to no following. Why should they have to be “verified” by platforms? Second, I have a real problem with the whole “if ‘state-of-the-art tools’ identify your content as mostly AI, then you lose your right to anonymity” approach, when there’s zero explanation of why, or of whether these “state-of-the-art tools” are even reliable (hint: they’re not!). Has Padilla run an analysis of these tools?
There are higher thresholds that designate someone as “highly influential”: 100,000 lifetime user views and 5,000 potentially AI-created pieces of content. Under these terms, I would be legally designated “highly influential” on a few platforms (my parents will be so proud). But then, “large online platforms” would be required to “verify” the “influential users’” identity, including the user’s name, phone number, and email, and would be required to “seek” government-issued IDs from “highly influential” users.
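To make the mechanics concrete, here is a rough sketch of the classification logic platforms would seemingly have to run under the bill. The numeric thresholds come from the bill as described above; everything else (the field names, the helper function, the idea of a simple per-account counter) is purely hypothetical illustration, not anything the bill itself specifies.

# Illustrative sketch only: field names and structure are hypothetical.
# The numeric thresholds mirror SB 1228 as described above.
from dataclasses import dataclass

@dataclass
class AccountStats:
    lifetime_views: int      # total views of the account's content, accumulated forever
    ai_flagged_posts: int    # posts judged >= 90% likely AI-generated by whatever
                             # "state-of-the-art tool" the platform happens to use

def influence_tier(stats: AccountStats) -> str:
    """Return the bill's designation for an account (hypothetical helper)."""
    if stats.lifetime_views > 100_000 or stats.ai_flagged_posts > 5_000:
        return "highly influential"  # platform must seek government-issued ID
    if stats.lifetime_views > 25_000 or stats.ai_flagged_posts > 1_000:
        return "influential"         # platform must seek name, phone number, email
    return "ordinary"

# One viral post can push an account over the line permanently, since the
# counts only ever go up over the lifetime of the account.
print(influence_tier(AccountStats(lifetime_views=26_000, ai_flagged_posts=0)))
# -> influential

Even this toy version makes the problem obvious: to apply the rule at all, a platform has to keep permanent, per-account view tallies and run every post through some AI detector, then act on whatever that detector says.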
There is no fucking way I’m giving ExTwitter my government ID, but under the bill, Elon Musk would be required to ask me for it. No offense, Senator Padilla, but I’m taking the state of California to court for violating my rights long before I ever hand my driver’s license over to Elon Musk at your demand.
While the bill only says that the platforms “shall seek” this info, it would then require them to add a tag “at least as large and as visible as the user’s name” to their profile designating them “authenticated” or “unauthenticated.”
It would then further require that any site allow users to block all content from “unauthenticated influential or highly influential” users.
It even gets down to the level of product management, telling “large online platforms” how they have to handle showing content from “unauthenticated” influential users:
(1) A large online platform shall attach to any post of an influential or highly influential user a notation that would be understood by a reasonable person as indicating that the user is authenticated or unauthenticated.
(2) For a post from an unauthenticated influential or highly influential user, the notation required by paragraph (1) shall be visible for at least two seconds before the rest of the post is visible and then shall remain visible with the post.
Again, there is so much that is problematic about this bill. Anyone who knows anything about anonymity would know this is so far beyond what the Constitution allows that it should be an embarrassment for Senator Padilla, who should pull this bill.
And, on top of everything else, this would become a massive target for anyone who wants to identify anonymous users. Companies are going to get hit with a ton of subpoenas and other legal demands for information on people, information they’ll have collected simply because someone had a post go viral.
Senator Padilla should be required to read Jeff Kosseff’s excellent book, “The United States of Anonymous,” as penance, and to publish a book report that details the many ways in which his bill is an unconstitutional attack on free speech and anonymity.
Yes, it’s reasonable to be concerned about manipulation and a flood of AI content. But, we don’t throw out basic constitutional principles based on such concerns. Tragically, Senator Padilla failed at this basic test of constitutional civics.
Filed Under: 1st amendment, ai, anonymity, california, elections, influencers, steve padilla
When It Comes To TikTok Hyperventilation, Financial Conflicts Of Interest Abound
from the I-am-entirely-objective-and-operating-in-good-faith dept
Earlier this month we noted how despite all of the sound, fury, and hyperventilation surrounding the push to ban TikTok, most Americans don’t actually support such a move (you know, the whole representative democracy thing). Support is particularly lacking among young Democrats, a demographic the Biden administration has struggled to connect with in the wake of the ongoing carnage in Gaza.
The fact that this Congress is too corrupt to agree on anything of substance (but was able to quickly get a TikTok ban through the House) already spoke pretty loudly to TikTok creators. But it probably speaks louder still given that data reveals the House Reps that voted for the ban personally own between $29 million and $126 million worth of stock in competing tech companies that directly stand to benefit.
Of the 352 members of the House of Representatives who voted “yes” on the bill, 44 reported they own shares of companies including Amazon, Google, Meta, Microsoft, and Snap. Tracking their exact stock ownership total is tricky, in part, because Congress successfully crushed efforts to make their financial disclosures more easily searchable, notes Quartz:
“Some members file their financial disclosures by hand, and that information isn’t present in the data set. There are several other caveats to consider. Members of Congress have to report stock transactions within 45 days and disclose their overall stock holdings annually. Because there’s a grace period in both cases, the most recent information dates back to earlier this year, before the TikTok vote. Officials also don’t have to report the exact value of these investments, but instead have to disclose a range ($15,001 to $50,000 of Microsoft stock, for example). The value of the stocks has also changed since reports were filed.”
A TikTok ban also benefits military contractors and tech companies keen on seeing greater and broader animosity between the United States and China in order to sell more weaponry, automation, and surveillance technology.
The Intercept, for example, notes that Jacob Helberg, an extremely vocal supporter of the ban-TikTok movement, is a former advisor to Google, a member of the U.S.–China Economic and Security Review Commission, and a current advisor at the military contractor Palantir.
Helberg’s rhetoric on TikTok is not subtle:
“TikTok is a scourge attacking our children and our social fabric, a threat to our national security, and likely the most extensive intelligence operation a foreign power has ever conducted against the United States.”
There’s a large segment of these folks who freak out about TikTok, but ignore not only the same type of abuses by U.S. companies, but also the same kind of abuses from international companies to which U.S. interests might be financially tethered, including the vast data broker industry, whose ethics-optional monetization of consumer location, behavior, and other data is the source of endless scandals.
Unfortunately it’s an era where even being marginally transparent about your financial conflicts of interest has become… passé:
“It is a clear conflict-of-interest to have an advisor to Palantir serve on a commission that is making sensitive recommendations about economic and security relations between the U.S. and China,” said Bill Hartung, a senior research fellow at the Quincy Institute for Responsible Statecraft and scholar of the U.S. defense industry. “From their perspective, China is a mortal adversary and the only way to ‘beat’ them is to further subsidize the tech sector so we can rapidly build next generation systems that can overwhelm China in a potential conflict — to the financial benefit of Palantir and its Silicon Valley allies.”
Getting the greater gerontocracy in DC agitated about China has long been trivial for lobbyists, as one Hill staffer noted back in 2012 during the sustained freakout about Huawei (a not insubstantial chunk of which was driven by telecom competitors like Cisco):
“What happens is you get competitors who are able to gin up lawmakers who are already wound up about China,” said one Hill staffer who was not authorized to speak publicly about the matter. “What they do is pull the string and see where the top spins.”
It’s also relatively trivial for these kinds of folks to publish various op-eds at outlets like The Hill without disclosing financial conflicts of interests. Lawyers working for law firms doing lobbying and policy work routinely publish missives under their own name, while doing lobbying work on the side. That their arguments frequently stem from an obvious financial conflict of interest is, apparently, of no note.
Facebook has seen success for several years seeding various moral panics about TikTok around DC with the help of policy and PR firms like Targeted Victory. It’s pretty clear Facebook’s interest isn’t in privacy or national security, but in using both concerns as bludgeons to eliminate a direct competitor the company has, so far, proven incapable of out-innovating in the short-form video space.
That’s not to say that there aren’t meaningful national security issues at play, or people in DC who pursue national security issues in good faith. But they’re certainly and clearly outnumbered.
It’s clear that if lawmakers really cared about national security, they wouldn’t be supporting a multiply-indicted NYC real estate con man for President. If Congress really cared about consumer privacy, they’d pass a privacy law that applies to all companies and regulate data brokers, who routinely sell U.S. consumer data to foreign intelligence agencies.
Instead you get a sort of lobbyist-driven, vibes-based legislative process where the public interest is a distant afterthought, and even the most rudimentary transparency is simply a bridge too far.
Filed Under: apps, china, lobbying, national security, policy, tiktok ban, warhawk
Companies: tiktok
Nigerian Woman Faces Jail Time For Facebook Review Of Tomato Sauce
from the saucy dept
Nigeria doesn’t exactly have a stellar reputation when it comes to respecting the speech rights of its own citizens, nor the rights of platforms that its citizens use. By way of examples, there was the time that the country suspended Twitter for several months for the crime of taking down a tweet from its president that sure sounded like a threat of genocide. The country has also been known to abuse its cybercrime laws to wage legal battles with citizens that have dared to criticize the government.
But I will admit that even with that reputation in place, I’m a bit at a loss as to why the country decided to arrest and charge a woman for violating those same laws because she wrote an unkind review of a can of tomato puree on Facebook.
A Nigerian woman who wrote an online review of a can of tomato puree is facing imprisonment after its manufacturer accused her of making a “malicious allegation” that damaged its business.
Chioma Okoli, a 39-year-old entrepreneur from Lagos, is being prosecuted and sued in civil court for allegedly breaching the country’s cybercrime laws, in a case that has gripped the West African nation and sparked protests by locals who believe she is being persecuted for exercising her right to free speech.
By now you’re wondering what actually happened here. Well, Okoli got on Facebook after having tried a can of Nagiko Tomato Mix, made by local Nigerian company Erisco Foods. Her initial post essentially complained about it being too sugary. So, pretty standard fare for a review-type post on Facebook. When she started getting some mixed replies, some commenters told her to stop trying to ruin the company and just buy something else, with one such message supposedly coming from a relative of the company’s ownership. To that, she replied:
Okoli responded: “Help me advise your brother to stop ki***ing people with his product, yesterday was my first time of using and it’s pure sugar.”
By the way, you can see all of this laid out by Erisco Foods itself on its own Facebook page. The company also claims that she exchanged messages with others talking about how she wanted to trash the product online so that nobody would buy it and that sort of thing. Whatever the truth about that situation is, this all stems from a poor review of a product posted online, which is the kind of speech countries with free speech laws typically protect.
In Okoli’s case, she was arrested shortly after those posts.
According to the police, Okoli was charged with “instigating Erisco Foods Limited, knowing the said information to be false under Section 24 (1) (B) of Nigeria’s Cyber Crime Prohibition Act.” If found guilty, she could face up to three years in jail or a fine of 7 million naira (around $5,000), or both.
Okoli was separately charged with conspiring with two other individuals “with the intention of instigating people against Erisco Foods Limited,” which the charge sheet noted was punishable under Section 27(1)(B) of the same act. She risks a seven-year sentence if convicted of this charge.
Okoli is pregnant and was placed in a cell during her arrest that had water leaking into it, by her account. She was also forced to apologize to Erisco Foods as part of her bond release, which she then publicly stated was done under duress and refused to apologize once out of holding. Erisco Foods, for its part, has said it didn’t instigate the arrest — note: I find this hard to believe — and that it was suing in civil court because of how much harm Okoli’s post did to its total business.
The Lagos-based food company said it also “suffered the loss of multiple credit lines” and had therefore filed a civil lawsuit against Okoli that sought 5 billion naira (more than $3 million) in damages. This case is due to be heard on May 20, her lawyer, Inibehe Effiong, told CNN.
It was one person’s review of a can of tomato puree. If a single review really could result in the destruction of multiple credit lines and millions of dollars of harm, I would like the company to please show me some evidence of that, because I don’t believe that either. And the claim that the company is pursuing the civil case because of the harm that Okoli did to its reputation sure looks silly considering the harm the company is doing to its own reputation by going after Okoli, in true Streisand Effect fashion.
“Harassment and intimidation of Chioma Okoli must end now,” Amnesty International Nigeria said earlier this month, as Nigerians began crowdfunding online to support her legal fees.
Okoli’s case has sparked protests at Erisco’s Lagos facility as many on social media called for a boycott of its products. The company’s founder, Eric Umeofia, refused to budge, however, saying in a recent documentary on the local Arise Television channel that he won’t drop the lawsuit against Okoli and that he would “rather die than allow someone to tarnish my image I worked 40 years to grow.”
Okoli is also countersuing both Erisco Foods and the police, arguing for a violation of her speech rights.
Now, those speech rights aren’t the same as those that exist in America, of course, but if a country won’t even allow an online review of a can of tomato puree, then what actual speech rights do its citizens have anyway?
Filed Under: chioma okoli, cybercrime, free speech, lagos, malicious allegation, nigeria, reviews, tomato puree
Companies: erisco foods, facebook
ShotSpotter Pitches In To Help Cops Open Fire On A Teen Setting Off Fireworks
from the can't-end-that-contract-soon-enough dept
Back in 2021, the Chicago Office of the Inspector General released a report on the PD’s ShotSpotter tech. The acoustic detection system was apparently mostly useless, no matter what ShotSpotter claimed in response.
Residents of Chicago are paying nearly $11 million a year for this system. But it’s obvious they’re not getting much bang for their buck, so to speak. ShotSpotter (which has since rebranded to SoundThinking) claims its detection system is worth every penny blown on it, stating that it is “highly accurate” and “benefits communities battling gun violence.”
Plenty of cities that have spent money on this product say otherwise. So do lawsuit plaintiffs and other victims of civil rights abuses, who have claimed ShotSpotter will alter detection records to align with the narratives crafted by police officers following acts of police violence or wrongful arrests.
The Chicago OIG report disputes ShotSpotter’s claim that its tech “benefits communities battling gun violence.” It’s actually the opposite of that, according to the data gathered by the Inspector General.
OIG concluded from its analysis that CPD responses to ShotSpotter alerts can seldom be shown to lead to investigatory stops which might have investigative value and rarely produce evidence of a gun-related crime.
[…]
The CPD data examined by OIG does not support a conclusion that ShotSpotter is an effective tool in developing evidence of gun-related crime.
Despite this report (and a lawsuit against the city and its police department), Chicago is apparently still paying $11 million a year for a system that doesn’t appear to work.
No gun crime got stopped here, as Adam Schwartz reports for the EFF. However, it did give Chicago police officers the reasonable suspicion to go traipsing around the neighborhood with their guns at the ready, resulting in the following (thankfully not deadly) debacle.
On January 25, while responding to a ShotSpotter alert, a Chicago police officer opened fire on an unarmed “maybe 14 or 15” year old child in his backyard. Three officers approached the boy’s house, with one asking “What you doing bro, you good?” They heard a loud bang, later determined to be fireworks, and shot at the child. Fortunately, no physical injuries were recorded. In initial reports, police falsely claimed that they fired at a “man” who had fired on officers.
Lots of stuff going on here. First, the ShotSpotter system was presumably triggered by the fireworks but was unable to distinguish the detonated fireworks from actual gunshots.
Second, the officers were unable to make this distinction either, as they immediately treated the percussive noise as shots fired at them and responded with actual gunshots.
Third, the PD then told local reporters officers had “fired shots at a person” who they only identified as “male.” The rest of the facts were withheld until the Civilian Office of Police Accountability (COPA) concluded its investigation and released the body cam video. In that video, an officer is heard informing dispatch that officers had just shot at a teenager. These facts were all known by the Chicago PD, but no one from the department bothered to call the Chicago Sun Times to get the headline referring to the shot-at person as a “man” corrected.
This was all cleared up by the COPA investigation. And, it appears the Chicago PD is taking this incident seriously. All three officers have been placed on administrative duty and are being investigated to see whether department policies were violated.
The bigger concern is obviously the tech that brought the officers there in the first place. It’s literally called “ShotSpotter” so every alert is obviously going to be treated as actual gunfire, even if it isn’t. This puts officers on edge and makes them more prone to react the way these officers did — something that could easily have resulted in the injury or killing of a minor doing nothing more than setting off fireworks.
The other good news is that Chicago’s contract with ShotSpotter will expire in September, which will hopefully head off further incidents like these. And, as Schwartz notes in his article for the EFF, it means the Chicago PD will stop spending millions a year for the dubious privilege of being worse at policing.
[The] 2021 [Inspector General’s] study in Chicago found that, in a period of 21 months, ShotSpotter resulted in police acting on dead-end reports over 40,000 times. Likewise, the Cook County State’s Attorney’s office concluded that ShotSpotter had “minimal return on investment” and only resulted in arrest for 1% of proven shootings, according to a recent CBS report.
So, that pretty much shoots the “better than doing nothing” arguments all to hell. It’s literally worse than doing nothing. The alternative — not using ShotSpotter — would be better. And that’s where the city is headed before the end of this year. Hopefully, more cities will take a closer look at this tech and realize spending this money on pretty much anything else is probably a better use of public funds.
Filed Under: chicago, chicago pd, gunshot detection
Companies: shotspotter, soundthinking
Dodgy Group That Targeted Gigi Sohn FCC Nomination Now Under IRS Inquiry For Lying About Ad Spending
from the this-is-why-we-can't-have-nice-things dept
You might recall how popular telecom and media consumer advocate Gigi Sohn saw her nomination to the FCC scuttled after a variety of right wing and telecom-tethered lobbying groups ran a successful, year-long public smear campaign.
The campaign tried to frame Sohn as an unhinged radical extremist, giving corrupt Republican and Democratic lawmakers the flimsy justification they needed to scuttle the nomination. Attacks ran the gamut from homophobic efforts to frame her as deviant for simply being on the EFF’s board, to claims she hated cops, to insistences that Sohn (long an advocate for rural broadband) secretly hated rural America.
One of the groups behind those attacks, The “American Accountability Foundation,” has suddenly found itself under IRS inquiry after it previously bragged about how much money it had spent to scuttle Sohn’s nomination. The organization, which has tethers to telecom and media giants looking to lobotomize the FCC, reported no spending on lobbying or advertising in 2021 and 2022.
Yet research shows the organization spent nearly a quarter of a million dollars buying ads on Facebook alone that attacked Sohn in 2022:
“According to data obtained by the ad analytics company AdImpact, AAF spent over $230,000 on Meta ads alone that year to oppose Biden’s FCC nominee, Gigi Sohn. Data from Meta also confirms that the ad spending to target Sohn in 2022 was around that figure. That figure does not include other means of advertising, nor does it include spending on other issues.”
U.S. lobbying and financial disclosure laws are the technical equivalent of damp street corner cardboard, so if you’re violating them and encouraging inquiry by feckless U.S. enforcers, you’re truly screwing up.
The American Accountability Foundation calls itself a “nonprofit government oversight and research organization that uses investigative tools to educate the public on issues related to personnel, policy and spending.” But it’s the exact kind of Conservative dark money group companies like AT&T and Comcast like to use to seed lies in the discourse and scuttle any effort at consumer protection.
The New Yorker profiled the group back in 2022, noting it was a key player in numerous attacks on Biden regulatory and judicial nominees. By the time it faces anything vaguely resembling accountability, its crafters will have already moved on to creating numerous new, similar sleaze merchants.
The Sohn thing was quickly forgotten by the AI- and crypto-obsessed news cycle, but it really was a new high-water mark for U.S. policy corruption. Sohn is broadly experienced, fiercely intelligent, and popular on both sides of the aisle, yet she faced a year-long, relentless assault at the hands of telecom and media companies whose lobbying tendrils extend into every last crevice of corrupt U.S. policymaking.
Filed Under: ad spending, broadband, corruption, gigi sohn, irs
Companies: american accountability foundation