New Year’s Message: Moving Fast And Breaking Things Is The Opposite Of Tech Optimism
from the it-ain't-optimism-if-you-ignore-reality dept
Every year since 2008, my final post of the year for Techdirt has been about optimism. This makes this year’s post (which will be the only post for today — go out and enjoy the holiday times, people) my 15th such post. As I said, this tradition began back in 2008, when a few people noted a weird dichotomy: I wrote about all of the ways that technological progress was under attack, and yet I remained a strong believer in the power of innovation to make the world a better place. The question put to me was: how did I remain optimistic, despite seeing all these attacks on progress?
You can go back and read that very first message, or any of the other final optimistic posts of the year here:
- 2008: On Staying Happy
- 2009: Creativity, Innovation And Happiness
- 2010: From Pessimism To Optimism… And The Power Of Innovation
- 2011: From Optimism And Innovation… To The Power To Make A Difference
- 2012: Innovation, Optimism And Opportunity: All Coming Together To Make Real Change
- 2013: Optimism On The Cusp Of Big Changes
- 2014: Change, Innovation And Optimism, Despite Challenges
- 2015: Keep Moving Forward
- 2016: No One Said It Would Be Easy…
- 2017: Keep On Believing
- 2018: Do Something Different
- 2019: Opportunities Come From Unexpected Places
- 2020: Make The World A Better Place
- 2021: The Arc Of The Moral Universe Is A Twisty Path
- 2022: The Opportunity To Build A Better Internet Is Here. Right Now.
I think about what I’m going to write in these posts all year long, and initially I thought this year’s post would be a continuation of last year’s, which talked about the opportunities for new, independent and decentralized services to take away market share from the large centralized silos, as well as new advances in generative AI often coming from smaller companies rather than the old giants (though some of that momentum has shifted a bit this past year). The growth of decentralized systems has been super exciting, and I’m super optimistic about where things are headed on that front.
But, then, in October, venture capitalist Marc Andreessen published his own Techno-Optimist Manifesto, and suddenly there were all sorts of discussions about techno optimism… and most of those discussions were mind-numbingly stupid.
I should note upfront that I know some people have a knee-jerk reaction to people like Andreessen. Many of the responses I saw were along the lines of “stupid out of touch rich guy…” and I get where those responses come from, but I had a different one. Over the years, I’ve learned a lot from Marc, and find that I tend to agree with 60 to 70% of what he says, while finding the other part… confusingly simplistic. And I sorta had the same response to his Techno-Optimist Manifesto.
(Just as a disclaimer: years back, Marc donated a small amount of money to us when we were sued, and he used to link to Techdirt articles regularly. Another partner at his VC firm, A16Z, called me once to say that Marc had told the entire A16Z staff that they should read Techdirt. But then something shifted, and Marc blocked me — and tons of other journalists — on Twitter and stopped linking to Techdirt. So, apparently, his opinion of us changed at some point. My opinion of him remains pretty much the same.)
There’s actually plenty of stuff in the manifesto that I agree with. It’s just that most of it is fairly obvious stuff. Technological progress has, on the whole, been incredibly beneficial to the world. It has improved the lives of literally billions of people, providing them much more for way less. I know it has fallen out of style among some these days, but I’m a big believer in the Paul Romer view of the world, in which technological innovation is the lever of economic growth: taking ideas that are infinitely reproducible (an abundance) and using them to effectively level up all sorts of things, including much that is or was scarce.
Ideas and the ability to share them are the key to growth. Thomas Jefferson got this right two centuries ago in his letter to Isaac McPherson. While it is often quoted in the context of questions around patents or other intellectual property, what Jefferson is actually explaining is how technological progress is the engine of economic growth, in that it enables new things without using up the resource (the idea) that creates them:
if nature has made any one thing less susceptible, than all others, of exclusive property, it is the action of the thinking power called an Idea; which an individual may exclusively possess as long as he keeps it to himself; but the moment it is divulged, it forces itself into the possession of every one, and the reciever cannot dispossess himself of it. it’s peculiar character too is that no one possesses the less, because every other possesses the whole of it. he who recieves an idea from me, recieves instruction himself, without lessening mine; as he who lights his taper at mine, recieves light without darkening me. that ideas should freely spread from one to another over the globe, for the moral and mutual instruction of man, and improvement of his condition, seems to have been peculiarly and benvolently designed by nature, when she made them, like fire, expansible over all space, without lessening their density in any point; and like the air in which we breathe, move, and have our physical being, incapable of confinement, or exclusive appropriation. inventions then cannot in nature be a subject of property.
I see that in conjunction with Joel Mokyr’s concept of “the lever of riches,” and how technological innovation really does help bring so many people out of poverty.
There are, of course, plenty of important questions and concerns about the distribution of riches and those still left behind. There are important questions, similarly, about the concentration of power (not just wealth) that some of this technology has enabled as well. And I think those are questions worth thinking about, whereas Andreessen appears to be arguing that we can mostly ignore those questions if we just push for even more innovation and growth. I think that’s wrong, and actually limits growth as we’ll get to shortly.
And this is where Andreessen’s manifesto loses me. He argues that anyone trying to look at these issues and to come up with better approaches is somehow an “enemy of progress.”
We have enemies.
Our enemies are not bad people – but rather bad ideas.
Our present society has been subjected to a mass demoralization campaign for six decades – against technology and against life – under varying names like “existential risk”, “sustainability”, “ESG”, “Sustainable Development Goals”, “social responsibility”, “stakeholder capitalism”, “Precautionary Principle”, “trust and safety”, “tech ethics”, “risk management”, “de-growth”, “the limits of growth”.
This demoralization campaign is based on bad ideas of the past – zombie ideas, many derived from Communism, disastrous then and now – that have refused to die.
First of all, it’s weird to claim that these ideas stem from “communism,” when, um, basically none of them do?
But, more importantly, many of these principles are not at all “enemies” of technological progress; rather, they are about making sure it is as useful as possible. I can agree that concepts like “de-growth” are generally ridiculous and ignorant, but many of the other ideas… are actually important for the sake of technological progress. Take, for example, Andreessen’s discussion of nuclear power. Andreessen rightly points out that nuclear power (both fission and the potential for fusion) could be a silver bullet for “virtually unlimited zero-emissions energy,” but that it has not come to pass. He implies that the concepts he lists above as the “enemies” are to blame for this:
Our enemy is the Precautionary Principle, which would have prevented virtually all progress since man first harnessed fire. The Precautionary Principle was invented to prevent the large-scale deployment of civilian nuclear power, perhaps the most catastrophic mistake in Western society in my lifetime. The Precautionary Principle continues to inflict enormous unnecessary suffering on our world today. It is deeply immoral, and we must jettison it with extreme prejudice.
But… that gets everything backwards. The reason for this excessive caution around nuclear power was the lack of thoughtful and careful early deployments, leading to disasters like Three Mile Island and Chernobyl.
It’s perfectly reasonable to suggest that the precautionary principle has gone too far, and overreacted to such a degree that we’re holding back useful nuclear deployments, but if we had been more careful and thoughtful in the early deployments of nuclear power, such that meltdowns were not something we had to deal with and the risk was effectively zero, then we’d see much more nuclear power around the globe.
The same is true in other contexts. Almost all of the examples he puts forth as “enemies” here are not trying to hold back progress, but rather to make sure that progress is done in a way that maximizes the benefits while minimizing the downside risks.
Marc’s manifesto reads as though any attempt to minimize downside risks is, itself, immoral. But he misses the forest for the trees: if you (ahem) just move fast and break things, the backlash and restrictions are going to be much greater in the long run than if you take some time and effort upfront to think about how to deploy things in a way that does much less damage.
If you deploy nuclear safely and avoid the meltdowns, you get more nuclear power. If you avoid existential risk by creating tech thoughtfully, you avoid regulations that limit the usefulness of the tech. If you build safer platforms through smart trust & safety approaches, you avoid governments around the world trying to take over and control platforms through regulation.
Over and over again, the things he fears as “brakes” on progress are almost always the opposite. They’re attempts to make sure that progress comes in its most useful, least damaging form, in part to avoid the kind of overreaction and limitations we saw with nuclear power, and which some are (ridiculously) now seeking around AI and online speech.
It’s one thing to be a techno optimist. I still very much consider myself to be one. I said years ago that the reason Techdirt exists is to try to advocate against those seeking to hold back innovation, because I believe the advantages of innovation are tremendous. That sounds similar to Marc’s manifesto, but the big difference is that I recognize that part of seeing through to that kind of future, where innovation comes faster and more widely distributed than it would otherwise be, is to not fuck it up in the process.
Nearly all of the things that Marc describes as “enemies” are mostly attempts to make sure things don’t get fucked up in the process.
Are there some cases where people take those things too far? Sure. And it’s reasonable to push back against that and highlight the various trade-offs. But describing all of these concepts as enemies, when mostly they are seeking to simply make sure that we improve the outcomes, is silly.
I am reminded of the comment that Cory Doctorow has mentioned in reference to EFF founder John Perry Barlow. People often accused Barlow and others like him of being “cyber utopians” who naturally believed that technology would obviously be a force for good. But you don’t go creating an organization like EFF, which spends all its time and effort fighting to make sure technology is a force for good, if you think that’s the inevitable outcome of the technology.
You recognize that bad shit can happen, and that if you want to be a real techno optimist, you look for ways to minimize the bad and promote the good. That’s the optimism: that with some effort we can make sure that the good of technology outweighs the bad. But Andreessen’s version is that we should just ignore the bad and the good will magically wipe out all the bad. That’s not just simplistic, it’s ahistorical, as his own example with nuclear power proves.
There’s been a big push lately among Andreessen and others in Silicon Valley for the concept of “tech accelerationism” or “effective accelerationism” (sometimes abbreviated as e/acc), which pushes for tech progress as quickly as possible. And while I have said that the whole reason behind Techdirt is the hope of seeing more innovation happen faster, I’ve always been uncomfortable with the whole e/acc thing, and it took Marc’s manifesto to make me understand why.
My view is more along the lines of Barlow’s: the most effective way to bring about more tech innovation and progress is to recognize that bad shit can happen, and to work to limit that bad shit, rather than solely focusing on more more faster faster.
The Andreessen manifesto, on the other hand, denigrates those looking to make sure that innovation happens in a way that is less likely to create the kinds of harms that lead to backlash and restrictions, all in the hope that maybe he can somehow magically reach some mythical end-state before the hammer comes down.
But true techno optimism should be focused on figuring out ways to enable tech progress that are more fair, equitable, and sustainable, and that limit the downside risks, so that people are eager and willing to embrace what it delivers, rather than cringing in fear of its negative impacts.
And I do still believe we’re in a moment where so much is possible. The things I said in last year’s final post still stand. This year we’ve seen great developments in decentralized tools, and the ability to break down centralized silos and push more power to the ends of the network. We have this opportunity now to build a better internet, one that isn’t just controlled by a few giant companies (including one Andreessen sits on the board of…), but rather one where the wider internet gets to decide how their information is used and who is in control.
Being a techno optimist requires an understanding of reality beyond “move fast and break things.” It requires an understanding that if you break too much shit society is going to shut you down, and potentially hold back important innovations (see: nuclear power).
I don’t see Andreessen’s vision of “techno-optimism” as that optimistic at all. It strikes me as the opposite. It seems mostly pessimistic, in that it feels the need to promote recklessness and danger in support of the benefit of a few. It is pessimistic about the idea that the world might embrace these innovations if they are first shown to be safe and thoughtful, rather than reckless and destructive.
Techno optimism is not blind faith that “all tech is good.” Techno optimism has to be couched in an understanding that there are tradeoffs with every decision, and if you want to get to those better goals sooner, it helps to think through who might be harmed and how, and seek to limit those risks, such that those risks won’t overwhelm the entire project.
Again, I return to Cory Doctorow’s memories of John Perry Barlow, who was often wrongly considered to be an optimist in the Andreessen sense. But as Doctorow notes, that’s not accurate at all:
But incentives do matter. Designing a system that can only be navigated by being a selfish bastard creates selfish bastardry, and the cognitive dissonance of everyday cruelties generates a kind of protective scar-tissue in the form of a reflex of judgment, dismissal, and cruelty.
And contrariwise, designing a system where we celebrate civic duty, kindness, empathy and the giving of gifts without the expectation of a reward produces an environment where the angels of our better nature can shout down the cruel, lizard-brain impulses that mutter just below the threshold of perception.
I remain an optimist in that I believe there are ways in which to design these systems that maximize the benefits and minimize the harms, and this is the best way to avoid the “nuclear” problem Andreessen describes.
Optimism is not blind faith, but actually working on the real challenges. I understand why people like Marc might wish to avoid those inconvenient realities, but it’s not optimism he’s presenting. It’s an attempt to dump the costs of his solutions on those least prepared to deal with them. And that strikes me as counterproductive.
Let’s celebrate actual tech optimism in the belief that through innovation we can actually seek to minimize the downsides and risks, rather than ignore them. That we can create wonderful new things in a manner that doesn’t lead many in the world to fear their impact, but to celebrate the benefits they bring. The enemies of techno optimism are not things like “trust and safety,” but rather the naive view that if we ignore trust and safety, the world will magically work out just fine.
As always, my final paragraph of these posts is thanking all of you, the community around Techdirt, for making all of this worthwhile. The community remains an amazing thing to me. I’ve said in the past that I write as if I’m going to share my thoughts into an empty void, not expecting anyone to ever pay attention, and I’m always amazed when anyone does, whether it’s to disagree with me, add some additional insights, challenge my thinking, or even reach out to talk about how to actually move some ideas forward. So, once again, thank you who are reading this for making Techdirt such a wonderful and special place, and let’s focus on being truly optimistic about the opportunities in front of us.
Filed Under: marc andreessen, new year's message, optimism, tech optimism, techno optimism, techno optimist manifesto
Generative AI Will Be A Huge Boon For The Public Domain, Unless Copyright Blocks It
from the a-chance-for-open-culture dept
A year ago, I noted that many of Walled Culture’s illustrations were being produced using generative AI. During that time, AI has developed rapidly. For example, in the field of images, OpenAI has introduced DALL-E 3 in ChatGPT:
When prompted with an idea, ChatGPT will automatically generate tailored, detailed prompts for DALL·E 3 that bring your idea to life. If you like a particular image, but it’s not quite right, you can ask ChatGPT to make tweaks with just a few words.
Ars Technica has written a good intro to the new DALL-E 3, describing it as “a wake-up call for visual artists” in terms of its advanced capabilities. The article naturally touches on the current situation regarding copyright for these creations:
In the United States, purely AI-generated art cannot currently be copyrighted and exists in the public domain. It’s not cut and dried, though, because the US Copyright Office has supported the idea of allowing copyright protection for AI-generated artwork that has been appreciably altered by humans or incorporated into a larger work.
The article goes on to explore an interesting aspect of that situation:
there’s suddenly a huge new pool of public domain media to work with, and it’s often “open source”—as in, many people share the prompts and recipes used to create the artworks so that others can replicate and build on them. That spirit of sharing has been behind the popularity of the Midjourney community on Discord, for example, where people typically freely see each other’s prompts.
When several mesmerizing AI-generated spiral images went viral in September, the AI art community on Reddit quickly built off of the trend since the originator detailed his workflow publicly. People created their own variations and simplified the tools used in creating the optical illusions. It was a good example of what the future of an “open source creative media” or “open source generative media” landscape might look like (to play with a few terms).
There are two important points there. First, that the current, admittedly tentative, status of generative AI creations as being outside the copyright system means that many of them, perhaps most, are available for anyone to use in any way. Generative AI could drive a massive expansion of the public domain, acting as a welcome antidote to constant attempts to enclose the public domain by re-imposing copyright on older works – for example, as attempted by galleries and museums.
The second point is that without the shackles of copyright, these creations can form the basis of collaborative works among artists willing to embrace that approach, and to work with this new technology in new ways. That’s a really exciting possibility that has been hard to implement without recourse to legal approaches like Creative Commons. Although the intention there is laudable, most people don’t really want to worry about the finer points of licensing – not least out of fear that they might get it wrong, and be sued by the famously litigious copyright industry.
A situation in which generative AI creations are unequivocally in the public domain could unleash a flood of pent-up creativity. Unfortunately, as the Ars Technica article rightly points out, the status of AI-generated artworks is already slightly unclear. We can expect the copyright world to push hard to exploit that opening, and to demand that everything created by computers be locked down under copyright for decades, just as human creations generally are from the moment they are in a fixed form. Artists should enjoy this new freedom to explore and build on generative AI images while they can – it may not last.
Follow me @glynmoody on Mastodon. Originally posted to Walled Culture.
Filed Under: ai, copyright, generative ai, public domain
Every Major Pharmacy Chain Is Giving The Government Warrantless Access To Medical Records
from the third-party-doctrine-beats-HIPAA dept
The Fourth Amendment is rarely a match for the Third Party Doctrine. In recent years, things have gotten a wee bit better thanks to a couple of Supreme Court rulings. But the operative principle still overrides: whatever we share (voluntarily or not) with private companies can often be obtained without a warrant.
That’s why bills have been introduced to add Fourth Amendment protections to cell location data gathered by phone apps. That’s why there’s been a constant struggle in courts and in Congress to reconcile the Third Party Doctrine with the Fourth Amendment, given the vast amount of information and data Americans now share with thousands of third parties.
Then there are the players in the Third Party Doctrine market. There’s the government, which wants as much information as it can obtain without having to subject its actions and motives to judicial scrutiny. And there are the private companies, which figure it’s far more cost-effective to just give the government what it wants rather than challenge government requests for data in court.
The private entities involved here probably have more reason than most to not try to piss the government off. Not only are they still struggling to recover from a widespread retail downturn ignited by a worldwide pandemic, but they’re also paying off large settlements to the government for playing things a bit too fast and loose when it came to handing out opioids to Americans.
As Beth Mole reports for Ars Technica (and following on the heels of the news pharmacy chain Rite Aid is facing a five-year facial recognition tech ban), every major player in the retail pharmacy business has been handing over sensitive medical data to the government without ever demanding to see an actual warrant.
All of the big pharmacy chains in the US hand over sensitive medical records to law enforcement without a warrant—and some will do so without even running the requests by a legal professional, according to a congressional investigation.
[…]
They include the seven largest pharmacy chains in the country: CVS Health, Walgreens Boots Alliance, Cigna, Optum Rx, Walmart Stores, Inc., The Kroger Company, and Rite Aid Corporation. The lawmakers also spoke with Amazon Pharmacy.
All eight of the pharmacies said they do not require law enforcement to have a warrant prior to sharing private and sensitive medical records, which can include the prescription drugs a person used or uses and their medical conditions. Instead, all the pharmacies hand over such information with nothing more than a subpoena, which can be issued by government agencies and does not require review or approval by a judge.
Three chains (CVS, Kroger, and Rite Aid) all told Congress they don’t even do a legal review of the subpoenas handed to them by government agencies. Instead, they apparently assume that if the government’s name is on it, it must be a valid request. The good news, I suppose, is that the other chains are at least involving their lawyers when it comes to data requests.
HIPAA (Health Insurance Portability and Accountability Act) — the medical record privacy law frequently misunderstood (and mis-acronymed) by laymen, lawyers, and legislators alike — is of no use here. HIPAA only prevents medical information from being released without permission to private parties not specifically authorized to obtain it. Pretty much any request originating from law enforcement agencies is considered to fall under the “if required by law” exception, even if the requests haven’t actually been vetted by pharmacy company lawyers and/or may not be legitimate demands for sensitive medical info.
The “required by law” phrase is important here. Law enforcement agencies have their own legal interpretations of the Third Party Doctrine, but none of that matters much in the case of HIPAA. All it would take to prevent pharmacy chains from handing out this data without a warrant would be the federal Department of Health and Human Services (HHS) taking this out of the Third Party Doctrine’s hands and placing a presumption of privacy on it.
That’s the gist of the letter [PDF] recently sent to HHS Secretary Xavier Becerra by Senator Ron Wyden, Rep. Pramila Jayapal, and Rep. Sara Jacobs. It cites a bit of courtroom and private company precedent to urge this situation along.
We urge HHS to consider further strengthening its HIPAA regulations to more closely align them with Americans’ reasonable expectations of privacy and Constitutional principles. Pharmacies can and should insist on a warrant, and invite law enforcement agencies that insist on demanding patient medical records with solely a subpoena to go to court to enforce that demand. The requirement for a warrant is exactly the approach taken by tech companies to protect customer privacy. In 2010, after just one Federal Court of Appeals held that Americans have a reasonable expectation of privacy in their emails and that the 1986 Congressionally enacted law permitting disclosures of email pursuant to a subpoena was unconstitutional, all of the major free email providers — Google, Yahoo, and Microsoft — started insisting on a warrant before disclosing such data.
Looks pretty simple. All that’s needed is a change of policy, even if there’s no change in law. The problem with this, though, is that the head of the HHS has had plenty of time to change this policy to erect a higher standard for demands for customers’ information. The letter notes the legislators first informed Becerra of this potential issue in July, following the Dobbs decision in June, hoping the HHS would erect more protections to prevent people from being prosecuted for obtaining birth control products.
The following months delivered confirmation of the legislators’ concerns. Now, it’s up to the HHS to move forward. While we wait to see whether a former prosecutor is willing to elevate the privacy of Americans above the warrantless desires of law enforcement, we can at least be somewhat comforted by the fact that some of these companies are going to be a bit more transparent about their cooperation with the government. CVS, Walgreens, and Kroger have all promised to publish periodic reports about government requests for data. Amazon has gone one step further by notifying customers about government demands for their data.
There’s no reason the government shouldn’t need to secure a warrant to obtain this data. It’s protected by federal law against everyone else patients haven’t specifically granted permission to obtain. The government shouldn’t presume the existence of the Third Party Doctrine means customers’ prescription records are an open book. But it does and that needs to change, either through voluntary action or legislative mandate if the government can’t be talked into respecting the privacy of records most Americans likely assume are already covered by federal privacy protections.
Filed Under: 4th amendment, drug records, hipaa, pharmacies, surveillance, third party doctrine
Lost In The Latest Apple Watch Patent Battle: The ITC Loophole Creates A Mess
from the two-bites-at-the-apple dept
If you follow tech news at all, you likely heard some stuff about the potential for an Apple Watch ban over patent infringement. It was all over the news. Apple had pulled its high end watches from its store last week, following an ITC ruling from earlier this year claiming that Apple’s blood oxygen reading sensor in the watch violated the patents of a company by the name of Masimo. The patent claim here might even have some level of validity, given the history of how Apple ended up developing such tech.
Then, last week, there were claims that the Biden administration could step in and block the enforcement of the ITC’s ruling. But that always seemed unlikely (even if Obama did step in a decade ago to block a similar enforcement of an ITC ruling saying that Apple infringed on Samsung patents).
You also might have heard the news yesterday that Apple’s watches were going back on sale following an appeals court stepping in to halt the import ban. Of course, many articles about this failed to mention which court stopped the ban, so we’ll help you out on that one. It was the Court of Appeals for the Federal Circuit (CAFC) granting the stay of the ITC’s order in a two page order while it reviews it.
If all of this sounds vaguely familiar, well, it should. Almost exactly a year ago, we wrote about something that sounded nearly identical. The ITC had said it was going to ban the import of Apple watches, after saying that Apple infringed on a patent from a different company, AliveCor, regarding heart rate tracking (as opposed to blood oxygen tracking). In that case, also, the Biden administration chose not to step in and veto the ITC. And, also, in that case, the ITC ban was put on hold while Apple and AliveCor continued to fight things out in court. (One major difference in the AliveCor story is that the Patent Trial and Appeal Board invalidated AliveCor’s patents even before the ITC ruled, which made things… well… weird.)
But, really, what this should be doing is shining a bigger light on the silly ITC loophole through which both of these cases happened. We’ve been writing about the ITC loophole for over 15 years. It basically is a mechanism that gives patent holders two separate chances to file a lawsuit against a company for patent infringement, as the ITC process can happen in parallel with a case in federal court for patent infringement.
While the ITC is limited in the remedies it can issue (it can’t order a company to pay up, it can only block the import of goods from abroad), it leads to these weird situations where the ITC can effectively route around the courts and issue a separate ban order (like these two in the past year or so regarding Apple Watches) regardless of what a court finds (or if the Patent Office admits it made a mistake in issuing the patent).
This whole process is a mess. There is an Article III court process for reviewing if someone infringed a patent or not. There’s no need for another agency, one unrelated to patents, to have the authority to review the case separately. All it does is give patent holders an extra shot at trying to force another company to pay up.
In this case (as with the AliveCor situation), at least, it seems that the CAFC is stepping in to say “hey, let’s wait on the ITC bans until the entire court process has played out,” but that just reinforces the idea that the ITC process is confusing and duplicative.
Let the courts go through their process, including appeals. If, by the end of that process, it’s determined that one company infringed on another’s patents and the parties are unable to negotiate a license, then open up the ITC process to put an import ban in place. At that point, the dispute has been thoroughly adjudicated in courts of law rather than through the administrative ITC process, and we avoid situations like this one, where an import ban threatens to block an entire product before the case has been fully decided in court.
Filed Under: apple watch, import ban, itc, itc loophole, patents, ustr
Companies: alivecor, apple, masimo
Turns Out Taser’s ‘Tragic’ Backstory Is Mostly Just Alternate Facts Cooked Up By Its Founder
from the pretenses-need-pretenses dept
Jeffrey Dastin, writing for Reuters, has dug up some very interesting information about TASER, which has since rebranded as Axon (and set its sights on arming cops with body cams, in addition to its infamous electrical devices).
The story behind the founding of TASER is something its founder, Rick Smith, loves to expound upon. The same narrative has been delivered to purchasers, shareholders, and company employees. Smith was motivated to create a so-called “less lethal” device because of tragedies he personally experienced.
It makes for a good story:
For years, Smith, a charismatic and fit 53-year old, has told variations of the same inspirational story – that he co-founded his now highly successful company because of the gun violence that killed his friends, whom he sometimes describes as football teammates. Their deaths feature in various promotions the company has run, including one this year in honor of its 30th anniversary. Smith even cites them in a 2020 Axon filing with the U.S. Securities and Exchange Commission (SEC).
The problem is this: it’s just a story. It sounds good. It makes Rick Smith appear to be very personally motivated to create something that gives cops (and cops only) a less-lethal way to drop perps in their tracks.
It would have been a better story if these two students had been killed by cops who only had lethal force options available to them. It also would have been a better story if it was, you know, actually true.
But it isn’t, as multiple people who spoke to Jeffrey Dastin have indicated:
Smith was not friends with the deceased, Todd Bogers and Cory Holmes, according to three immediate family members and a close friend of the young men. They were gunned down after a road rage incident in 1991, not 1990, as indicated on Smith’s slide in Las Vegas. Smith played on the same football team as the boys at Chaparral in Scottsdale, Arizona – but not at the same time, according to school yearbooks seen by Reuters. The boys who were killed graduated in 1986. Smith does not appear in the yearbooks until the school year that ended in 1987.
Axon “ran a whole advertising campaign based on the murder of my son,” Todd’s father John Bogers said in an interview, recalling feelings of bereavement that the ads triggered. “They profited off that, and they didn’t ask for permission.”
That’s pretty ugly. What’s even uglier: when the high school Rick Smith claims was so instrumental to his inspiration asked Taser/Axon to contribute to improvements of the football field (the very field where Smith claims he bonded with the dead students he now uses as bullet points in sales presentations), the company turned it down.
Everyone personally related to the deceased students denies that Smith was a close friend of either of them. That leaves Smith alone with his preferred narrative, which has only a tenuous link to actual facts.
The thing is Rick Smith never needed to do this. He could have pitched his devices by saying nothing more than he wanted to prevent tragedies like those of the two teens that attended the same school he did. But he chose to embellish the facts and make it all about him and how much he personally was affected by the deaths of supposedly close friends.
Presenting alternate facts and alternate realities is just the way Taser/Axon does business. It’s not just the CEO ginning up sympathy by exaggerating his relationship to two teens who died senseless deaths. This is the company that is almost solely responsible for the myth of “excited delirium,” a deadly medical condition that almost always presents itself only when cops are choking, tasing, shooting, or beating someone to death.
Because Taser devices were sold to cops as less-than-lethal devices, cops felt they could apply them anywhere they wanted for as long as they wanted, whether it meant deploying an electric shock to someone covered in gasoline or drive-stunning handcuffed teens for the crime of failing to recover immediately from a mental health crisis. The company prefers its own facts because the actual facts are way more horrific.
There’s much more in Dastin’s full report on Axon and Rick Smith, although — despite my dislike for the company and its tactics — I don’t think a lot of what’s reported depicts anything more than a CEO acting like a CEO and a corporation acting like a corporation.
Dastin points out that Rick Smith promised employees he would not go over the top with his personal compensation, stating that he would keep it in the 50th percentile for people in his position. But he soon found ways around that promised limitation: “allowing” the company to purchase him a $240,000 sports car in lieu of a cash bonus, and exercising stock options worth $246 million, which made Smith one of the highest-earning (but not highest-paid) CEOs in 2018.
There’s other shady stuff in there as well, including nepotism, cash payouts delivered directly (and I mean directly, as in on a literal silver platter at a high-end restaurant) to executive staff, and quid pro quo sponsorship deals with a golf tournament that swiftly elevated Axon president Josh Isner to a seat on the tournament’s board.
While all of us would like to see CEOs stick to their promises and companies refuse to bend or break internal policies, this isn’t really an Axon-centric issue. It’s standard operating procedure across a vast swath of corporate America. That Axon is doing it too doesn’t make Axon any better. But it also doesn’t make Axon any worse.
That being said, it’s a very well-written examination of everything questionable the company has done. The fact that it’s so comfortable lying to its employees and engaging in seemingly unethical behavior can be directly linked to its founder’s refusal to state the facts as they actually are, rather than what he would prefer them to be.
The fact is two students who attended the same high school as Axon founder Rick Smith were tragically killed. That should have been enough for him. But Smith decided to make the tragedy personal (without any basis in facts) so he could move as much merchandise as he could. That’s what’s really sickening here. Everything around it is just capitalism in action.
Filed Under: cory holmes, john bogers, less lethal weapons, police, rick smith, todd bogers
Companies: axon, taser
The NY Times Lawsuit Against OpenAI Would Open Up The NY Times To All Sorts Of Lawsuits Should It Win
from the it's-okay-when-we-do-it,-we're-the-new-york-times dept
This week the NY Times somehow broke the story of… well, the NY Times suing OpenAI and Microsoft. I wonder who tipped them off. Anyhoo, the lawsuit in many ways is similar to some of the over a dozen lawsuits filed by copyright holders against AI companies. We’ve written about how silly many of these lawsuits are, in that they appear to be written by people who don’t much understand copyright law. And, as we noted, even if courts actually decide in favor of the copyright holders, it’s not like it will turn into any major windfall. All it will do is create another corruptible collection point, while locking in only a few large AI companies who can afford to pay up.
I’ve seen some people arguing that the NY Times lawsuit is somehow “stronger” and more effective than the others, but I honestly don’t see that. Indeed, the NY Times itself seems to think its case is so similar to the ridiculously bad Authors Guild case, that it’s looking to combine the cases.
But while there are some unique aspects to the NY Times case, I’m not sure they are nearly as compelling as the NY Times and its supporters think they are. Indeed, I think if the Times actually wins its case, it would open itself up to some fairly damning lawsuits, given its somewhat infamous journalistic practice of summarizing other people’s articles without credit. But, we’ll get there.
The Times, in typical NY Times fashion, presents this case as though the NY Times is the great defender of press freedom, taking this stand to stop the evil interlopers of AI.
Independent journalism is vital to our democracy. It is also increasingly rare and valuable. For more than 170 years, The Times has given the world deeply reported, expert, independent journalism. Times journalists go where the story is, often at great risk and cost, to inform the public about important and pressing issues. They bear witness to conflict and disasters, provide accountability for the use of power, and illuminate truths that would otherwise go unseen. Their essential work is made possible through the efforts of a large and expensive organization that provides legal, security, and operational support, as well as editors who ensure their journalism meets the highest standards of accuracy and fairness. This work has always been important. But within a damaged information ecosystem that is awash in unreliable content, The Times’s journalism provides a service that has grown even more valuable to the public by supplying trustworthy information, news analysis, and commentary
Defendants’ unlawful use of The Times’s work to create artificial intelligence products that compete with it threatens The Times’s ability to provide that service. Defendants’ generative artificial intelligence (“GenAI”) tools rely on large-language models (“LLMs”) that were built by copying and using millions of The Times’s copyrighted news articles, in-depth investigations, opinion pieces, reviews, how-to guides, and more. While Defendants engaged in widescale copying from many sources, they gave Times content particular emphasis when building their LLMs—revealing a preference that recognizes the value of those works. Through Microsoft’s Bing Chat (recently rebranded as “Copilot”) and OpenAI’s ChatGPT, Defendants seek to free-ride on The Times’s massive investment in its journalism by using it to build substitutive products without permission or payment.
As the lawsuit makes clear, this isn’t some high and mighty fight for journalism. It’s a negotiating ploy. The Times admits that it has been trying to get OpenAI to cough up some cash for its training:
For months, The Times has attempted to reach a negotiated agreement with Defendants, in accordance with its history of working productively with large technology platforms to permit the use of its content in new digital products (including the news products developed by Google, Meta, and Apple). The Times’s goal during these negotiations was to ensure it received fair value for the use of its content, facilitate the continuation of a healthy news ecosystem, and help develop GenAI technology in a responsible way that benefits society and supports a well-informed public.
I’m guessing that OpenAI’s decision a few weeks back to pay off media giant Axel Springer to avoid one of these lawsuits, and the failure to negotiate a similar deal (at what is likely a much higher price), resulted in the Times moving forward with the lawsuit.
There are five or six whole pages of puffery about how amazing the NY Times thinks the NY Times is, followed by the laughably stupid claim that generative AI “threatens” the kind of journalism the NY Times produces.
Let me let you in on a little secret: if you think that generative AI can do serious journalism better than a massive organization with a huge number of reporters, then, um, you deserve to go out of business. For all the puffery about the amazing work of the NY Times, this seems to suggest that it can easily be replaced by an auto-complete machine.
In the end, though, the crux of this lawsuit is the same as all the others. It’s a false belief that reading something (whether by human or machine) somehow implicates copyright. This is false. If the courts (or the legislature) decide otherwise, it would upset pretty much all of the history of copyright and create some significant real world problems.
Part of the Times complaint is that OpenAI’s GPT LLM was trained in part with Common Crawl data. Common Crawl is an incredibly useful and important resource that apparently is now coming under attack. It has been building an open repository of the web for people to use, not unlike the Internet Archive, but with a focus on making it accessible to researchers and innovators. Common Crawl is a fantastic resource run by some great people (though the lawsuit here attacks them).
But, again, this is the nature of the internet. It’s why things like Google’s cache and the Internet Archive’s Wayback Machine are so important. These are archives of history that are incredibly important, and have historically been protected by fair use, which the Times is now threatening.
(Notably, just recently, the NY Times was able to get all of its articles excluded from Common Crawl. Otherwise I imagine that they would be a defendant in this case as well).
Either way, so much of the lawsuit is claiming that GPT learning from this data is infringement. And, as we’ve noted repeatedly, reading/processing data is not a right limited by copyright. We’ve already seen this in multiple lawsuits, but this rush of plaintiffs is hoping that maybe judges will be wowed by this newfangled “generative AI” technology into ignoring the basics of copyright law and pretending that there are now rights that simply do not exist.
Now, the one element that appears different in the Times’ lawsuit is that it has a bunch of exhibits that purport to prove how GPT regurgitates Times articles. Exhibit J is getting plenty of attention here, as the NY Times demonstrates how it was able to prompt ChatGPT in such a manner that it basically provided them with direct copies of NY Times articles.
In the complaint, they show this:
At first glance that might look damning. But it’s a lot less damning when you look at the actual prompt in Exhibit J and realize what happened, and how generative AI actually works.
What the Times did is prompt GPT-4 by (1) giving it the URL of the story and then (2) “prompting” it by giving it the headline of the article and the first seven and a half paragraphs of the article, and asking it to continue.
Here’s how the Times describes this:
Each example focuses on a single news article. Examples were produced by breaking the article into two parts. The first part of the article is given to GPT-4, and GPT-4 replies by writing its own version of the remainder of the article.
Here’s how it appears in Exhibit J (notably, the prompt was left out of the complaint itself):
If you actually understand how these systems work, the output looking very similar to the original NY Times piece is not so surprising. When you prompt a generative AI system like GPT, you’re giving it a bunch of parameters, which act as conditions and limits on its output. From those constraints, it’s trying to generate the most likely next part of the response. But, by providing it paragraphs upon paragraphs of these articles, the NY Times has effectively constrained GPT to the point that the most probable response is… very close to the NY Times’ original story.
In other words, by constraining GPT to effectively “recreate this article,” GPT has a very small data set to work off of, meaning that the highest likelihood outcome is going to sound remarkably like the original. If you were to create a much shorter prompt, or introduce further randomness into the process, you’d get a much more random output. But these kinds of prompts effectively tell GPT not to do anything BUT write the same article.
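To make that concrete, here’s a toy sketch of the “constrained prompt” effect. This is purely my own illustration (the tiny corpus, the character-level n-gram model, and the function names are all invented for the demo; real LLMs are neural networks operating over tokens, not lookup tables), but the dynamic is analogous: an ambiguous context leaves many plausible continuations, while a long, specific prompt pins the model to a single path through its training data.

```python
from collections import defaultdict

def train_ngram(text, n=8):
    """Build a character n-gram table: context -> {next_char: count}."""
    model = defaultdict(lambda: defaultdict(int))
    for i in range(len(text) - n):
        model[text[i:i + n]][text[i + n]] += 1
    return model

def greedy_continue(model, prompt, length, n=8):
    """Extend the prompt by repeatedly emitting the most likely next character."""
    out = prompt
    for _ in range(length):
        ctx = out[-n:]
        if ctx not in model:
            break  # unseen context: nothing to predict
        out += max(model[ctx], key=model[ctx].get)
    return out

# A tiny "training corpus" containing two similar sentences.
corpus = ("the quick brown fox jumps over the lazy dog. "
          "the quick brown fox naps under the old oak tree. ")
model = train_ngram(corpus)

# A short, ambiguous context leaves multiple plausible continuations...
print(dict(model["own fox "]))  # {'j': 1, 'n': 1} -- could go either way

# ...but a long, specific prompt pins the model to one path, and it
# "regurgitates" the training text verbatim.
print(greedy_continue(model, "the quick brown fox jumps over", 12))
# -> the quick brown fox jumps over the lazy do
```

The point of the sketch: feed the model enough of the original text as context, and its “most likely continuation” collapses to the original text itself, which is essentially what the Times’ Exhibit J prompts did.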
From there, though, the lawsuit gets dumber.
It shows that you can sorta get around the NY Times’ paywall in the most inefficient and unreliable way possible by asking ChatGPT to quote the first few paragraphs in one paragraph chunks.
Of course, quoting individual paragraphs from a news article is almost certainly fair use. And, for what it’s worth, the Times itself admits that this process doesn’t actually return the full article, but a paraphrase of it.
And the lawsuit seems to suggest that merely summarizing articles is itself infringing:
That’s… all factual information summarizing the review? And while the complaint shows that if you then ask for (again, paragraph length) quotes, GPT will give you a few quotes from the article.
And, yes, the complaint literally argues that a generative AI tool can violate copyright when it “summarizes” an article.
The issue here is not so much how GPT is trained, but how the NY Times is constraining the output. That is unrelated to the question of whether the reading of these articles is fair use. The purpose of these LLMs is not to repeat the content that is scanned, but to figure out the probabilistically most likely next token for a given prompt. When the Times constrains the prompts in such a way that the data set is basically one article and one article only… well… that’s what you get.
Elsewhere, the Times again complains about GPT returning factual information that is not subject to copyright law.
But, I mean, if you were to ask anyone the same question, “What does wirecutter recommend for The Best Kitchen Scale,” they’re likely to return you a similar result, and that’s not infringing. It’s a fact that that scale is the one that it recommends. The Times complains that people who do this prompt will avoid clicking on Wirecutter affiliate links, but… um… it has no right to that affiliate income.
I mean, I’ll admit right here that I often research products and look at Wirecutter (and other!) reviews before eventually shopping independently of that research. In other words, I will frequently buy products after reading the recommendations on Wirecutter, but without clicking on an affiliate link. Is the NY Times really trying to suggest that this violates its copyright? Because that’s crazy.
Meanwhile, it’s not clear if the NY Times is mad that GPT is accurately recommending stuff or if it’s just… mad. Because later in the complaint, the NY Times says it’s bad that sometimes GPT recommends the wrong product or makes up a paragraph.
So… the complaint is both that GPT reproduces things too accurately, AND not accurately enough. Which is it?
Anyway, the larger point is that if the NY Times wins, well… the NY Times might find itself on the receiving end of some lawsuits. The NY Times is somewhat infamous in the news world for using other journalists’ work as a starting point and building off of it (frequently without any credit at all). Sometimes this results in an eventual correction, but often it does not.
If the NY Times successfully argues that reading a third party article to help its reporters “learn” about the news before reporting their own version of it is copyright infringement, it might not like how that is turned around by tons of other news organizations against the NY Times. Because I don’t see how there’s any legitimate distinction between OpenAI scanning NY Times articles and NY Times reporters scanning other articles/books/research without first licensing those works as well.
Or, say, what happens if a source for a NY Times reporter provides them with some copyright-covered work (an article, a book, a photograph, who knows what) that the NY Times does not have a license for? Can the NY Times journalist then produce an article based on that material (along with other research, though much less than OpenAI used in training GPT)?
It seems like (and this happens all too often in the news industry) the NY Times is arguing that it’s okay for its journalists to do this kind of thing because it’s in the business of producing Important Journalism™ whereas anyone else doing the same thing is some damn interloper.
We see this with other copyright disputes and the media industry, or with the ridiculous fight over the hot news doctrine, in which news orgs claimed that they should be the only ones allowed to report on something for a while.
Similarly, I’ll note that even if the NY Times gets some money out of this, don’t expect the actual reporters to see any of it. Remember, this is the same NY Times that once tried to stiff freelance reporters by relicensing their articles to electronic databases without paying them. The Supreme Court didn’t like that. If the NY Times establishes that merely training AI on old articles is a licenseable, copyright-impacting event, will it go back and pay those reporters a piece of whatever change they get? Or nah?
Filed Under: ai, ai training, copyright, fair use, generative ai, reading, restrictive prompts, summarizing, training
Companies: common crawl, microsoft, ny times, openai
Amazon Gives Giant Middle Finger To Prime Video Customers, Will Charge $3 Extra A Month To Avoid Ads Starting In January
from the oh-look-we've-learned-nothing dept
Thanks to industry consolidation and saturated market growth, the streaming industry has started behaving much like the traditional cable giants they once disrupted.
As with most industries suffering from “enshittification,” that generally means imposing obnoxious new restrictions (see: Netflix password sharing), endless price hikes, and dubious new fees geared toward pleasing Wall Street’s utterly insatiable demand for improved quarterly returns at any cost.
All while the underlying product quality deteriorates due to corner cutting and employees struggle to get paid (see: the giant, ridiculous turd that is the Warner Bros. Discovery merger).
Case in point: Amazon customers already pay $15 per month, or $139 annually for Amazon Prime, which includes a subscription to Amazon’s streaming TV service. In a bid to make Wall Street happy, Amazon recently announced it would start hitting those users with entirely new streaming TV ads, something you can only avoid if you’re willing to shell out an additional $3 a month.
There was ample backlash to Amazon’s plan, but it apparently accomplished nothing. Amazon says it’s moving full steam ahead with the plan, which will begin on January 29th:
“We aim to have meaningfully fewer ads than linear TV and other streaming TV providers. No action is required from you, and there is no change to the current price of your Prime membership,” the company wrote. Customers have the option of paying an additional $2.99 per month to keep avoiding advertisements.
If you recall, it took the cable TV, film, music, and broadcast sectors the better part of two decades before they were willing to give users affordable, online access to their content as part of a broader bid to combat piracy. There was just an endless amount of teeth gnashing by industry executives as they were pulled kicking and screaming into the future.
Despite having just gone through that experience, streaming executives refuse to learn anything from it, and are dead set on nickel-and-diming their users. This will inevitably drive a non-trivial number of those users back to piracy, at which point executives will blame the shift on absolutely everything and anything other than themselves. And the cycle continues in perpetuity…
Filed Under: ads, advertisements, amazon prime, cabletv, enshittification, piracy, price hike, streaming, video
Companies: amazon
Body Cam Report Shows Fewer Agencies Are Allowing Cops To View Footage Before Making Statements
from the good-news-from-an-unexpected-source dept
The Police Executive Research Forum (PERF) published a report on body cam use by law enforcement agencies in 2014. It not only presented stats on body cam use around the nation, but also attempted to create a set of best practices for the agencies utilizing them.
Since then, body cams have become as commonplace as dash cams. While often touted as a tool to increase transparency and accountability, the truth is a bit more complicated. Many law enforcement agencies refuse to release body cam recordings to public records requesters, negating some of the hoped for transparency. Police unions helped erase some of the accountability, striking deals that allowed officers to view footage before writing reports or making statements to investigators.
Cops may have feared that body cams would become an unblinking witness to their bad behavior, capable of costing them their positions, if not their actual jobs. The reality is that body cams became cops’ best friends. Footage cleared officers wrongly accused of misconduct. Better yet, body cam recordings captured plenty of evidence to use against defendants. And when it looked like something bad might be caught on camera, officers were free to pretend they had forgotten to activate them or that the devices had simply malfunctioned.
That’s the bad news. Fortunately, PERF isn’t interested in ensuring body cam footage remains something more useful to the police and less useful to the communities they serve. It appears the forum is actually trying to increase accountability and repair relationships damaged by years of uncontrolled misconduct.
Its latest report [PDF] on body cameras contains a lot of discussion about officers’ access to recordings, especially when misconduct is suspected. But it opens up by noting how much has changed since its last report in terms of body cam uptake.
Much has changed in the ten years since that first convening. For one thing, the police use of body cameras has skyrocketed. In 2020, almost 4 in 5 (79 percent) local police officers worked in departments that used BWCs, and all departments serving 1 million or more residents reported using them. Sheriffs’ offices had similar increases in their use of BWCs, with more than two-thirds (68 percent) of sheriffs’ offices having BWCs in 2020. Even federal law enforcement agencies, such as the FBI and the U.S. Customs and Border Protection, have adopted this technology. And with high-profile police use-of-force incidents and in-custody deaths leading to demands for greater police accountability, the public has come to want—and expect—police officers to wear cameras.
PERF frames the controlled access to body cam recordings as something essential to an employer-employee relationship. Every entity with employees engages in performance reviews and disciplinary actions when an employee has screwed up. PERF argues cops shouldn’t be exempted from this normal part of employment. And it says agencies using body cams are best prepared to do this sort of thing properly… provided they’re actually willing to do the job properly. PERF suggests “Monday morning quarterbacking,” using questions like these to get to the root of observed problems:
• Is any of this consistent with how officers are trained?
• Should supervisors have been on scene, and should they have known how this specialized unit operated?
• These officers had body-worn cameras. Why might they have behaved this way when they knew they were being recorded?
• Why didn’t initial reports accurately reflect what was seen on video? Could officers’ statements have been aimed at fixing a narrative?
• What role do you think the culture of specialized units might have played in this incident?
• The agency has a “duty to intervene” policy. Why didn’t anyone intervene when they saw Mr. Nichols being beaten?
• The agency also has a policy requiring officers to render first aid. Did officers promptly render first aid in accordance with their training?
While police officials love to claim the public isn’t qualified to second-guess actions taken by officers, they — and the officers working for them — should be more than willing to engage in second-guesswork with the people they do believe are qualified to do so.
The report notes the suggestion it made in 2014 — that officers not be allowed to view footage before making reports or statements — received a lot of pushback. But, surprisingly, it appears many agencies are beginning to realize this is something that must be done to deter misconduct and improve the performance of the officers they employ.
Although nearly 90 percent of the policies PERF reviewed have been updated since BJA’s 2019 policy review, the only significant policy change over the four years has been a decline in the percentage of agencies that allow officers to view video of a critical incident before making a statement, from 92 percent in the BJA review to 56 percent in PERF’s review of 127 policies.
This is definitely a move in the right direction. It’s pretty tough to manage a workforce that’s allowed to alter its narrative before meeting with supervisors or investigators. Now, more than half of agencies utilizing body cams gather statements first before allowing officers access to their recordings.
It appears the most prominent obstacle to getting to 100% is law enforcement unions. But even those powerful entities are being recognized for what they are by the agencies whose workforces they represent: an impediment to improvement, accountability, and rebuilt community trust. Here’s what the Los Angeles Police Department experienced following its (early) adoption of the tech:
According to Commander Steven Lurie, union representatives were concerned about the impact of random audits on their membership, and the union’s support was pivotal in adopting the BWC program. As a result, LAPD does not currently permit random reviews, and it has permitted audits only to ensure compliance with activation and deactivation requirements and to monitor employees identified as high-risk by LAPD’s early-warning system.
However, pursuant to a 2023 audit that found officers were “routinely turning off their body worn cameras in violation of department policy,” Chief Michel Moore says he is “considering changing department policy to increase random review of body camera recordings that don’t involve arrests or the use of force.”
The report makes several policy recommendations for body cam use, ranging from activation to storage to access. But almost none of this is new. These were the same recommendations made a decade ago, when only a handful of agencies were beginning to implement the tech. A decade later, PERF has only altered one recommendation — a change prompted by changes made internally by nearly 40% of the agencies surveyed:
Officers involved in a critical incident should be interviewed before watching relevant BWC footage. During the “perceptual interview,” they should describe their perceptions (what they saw, heard, felt, believed, experienced before arriving, etc.) before, during, and after an incident. After the perceptual interview, officers should be given the opportunity to provide a video-informed statement by reviewing BWC footage and offering clarifications that they feel are appropriate.
This is the right way to go. And I’m glad to see several law enforcement agencies have gotten out ahead of this, rather than waiting for this to be forced on them by legislators or police accountability boards.
Another Reason Why Diamond Access Makes Sense: No Economic Barriers To Publishing Rebuttals
from the helping-science-progress dept
Walled Culture has written numerous posts about the promise and problems of open access. An important editorial in the journal Web Ecology raises an issue for open access that I’ve not seen mentioned before. It concerns the fraught issue of rebuttal articles, which offer fact-based criticism of already-published academic papers:
Critical comments on published articles vary in importance; they can simply point to an aspect absent from a published article or offer an alternative interpretation or perspective. In some cases, they can point to fundamental flaws that undermine the published conclusions. The nomenclature of these – comments, replies, rebuttals – is variable, but their importance to scientific progress is unquestionable.
Rebuttal articles are a vital part of the scientific publishing process, since they help weed out mistakes made by other researchers (usually honest errors, but sometimes not). As the Web Ecology editorial notes, writing rebuttal articles is hard enough because of their necessarily confrontational nature. But anyone wanting to publish a rebuttal in an open access title funded through article processing charges (APCs), generally paid by the researcher’s academic institution, faces an additional problem: on top of writing a cogent explanation of why the published research is faulty, they must generally pay an APC to do so. The Web Ecology editorial gives details of a particular case in which several scientists spent considerable time and effort rebutting an article in the open access journal Ecosphere about spiders that allegedly preyed on bats:
Their rebuttal article was peer-reviewed in Ecosphere, where it was accepted for publication (Daniel Montesinos has seen copies of the submitted rebuttal and of its acceptance letter). However, the authors of this reply were requested to pay an APC of USD 2100/GBP 1300/EUR 1700 for a rebuttal article that largely disproved the original publication. The authors of the reply, who had altruistically devoted significant time to writing their rebuttal, refused to pay. They felt that they were doing the journal – and science – a service and that it was unreasonable to charge them for it.
Because the rebuttal’s authors refused to pay the APC, the original Ecosphere article, which they claimed was flawed, remained uncontested there, and their rebuttal went unpublished in that journal. Instead, the editors of Web Ecology stepped in and published it themselves. As they comment:
Clearly, charging authors for brief, well-founded criticism of published articles creates a highly problematic disincentive to fruitful scientific discussion. This uncontroversial stance should enjoy universal support, but it currently does not. This might be excused as a simple oversight. Historically, this had never been an issue because most journals did not charge any publication fees. However, today more than 40 % of all Web of Science publications are open access (Basson et al., 2022). It is time to consider the damaging effects of charging authors for critical comments in open-access journals.
Drawing on their experience here, they go on to make an important point:
When a clear error is detected, it is for the best interest of all to find a reasonable and ethical solution in the shortest possible time. For platinum/diamond open-access journals, this is not an issue. Web Ecology has charged no APCs since its creation in 2000, which shows the viability of making science truly available to the whole scientific community at a moderate cost while maintaining the highest scientific and publishing standards.
As Walled Culture has written, diamond open access journals (also known as platinum open access) charge neither the people who read their papers, nor the researchers who publish them. Instead, they are funded through other sources, something made easier by the minimalist kind of publishing that they typically engage in. The fact that they can publish rebuttals quickly and without demanding a payment to do so is yet another reason they are the best form of open access available.
Follow me @glynmoody on Mastodon. Originally published to Walled Culture.
Filed Under: diamond open access, open access, rebuttals, research, science
Stupid Patent of the Month: Selfie Contests
from the can-we-have-an-online-contest-for-the-dumbest-patent? dept
Patents are supposed to be an incentive to invent. Too often, they end up being a way to try to claim “ownership” of what should be basic building blocks of human activity, culture, and knowledge. This is especially true of software patents, an area EFF has been speaking out about for more than 20 years now.
This month’s Stupid Patent, No. 8,655,715, continues the tradition of trying to use software language to capture a monopoly on a basic human cultural activity — in this case, contests.
A company called Opus One, which does business under the name “Contest Factory,” claims this patent and a related one cover a huge array of online contests. So far, they’ve filed five lawsuits against other companies that help build online contests, and even threatened a small photo company that organizes mostly non-commercial contests online.
The patents held by Contest Factory are a good illustration of why EFF has been concerned about out-of-control software patents. It’s not just that wrongly issued patents impose a vast tax on the U.S. economy (although they do — one study estimated $29 billion in annual direct costs). The worst software patents also harm people’s rights to express themselves and participate in online culture. Just as we’re free in the physical world to sign documents, sort photos, store and label information, clock in to work, find people to date, or teach foreign languages, without paying extortionate fees to others, we must also be free to do so online.
Patenting Contests
Claim 1 of the ‘715 patent has steps that claim:
- Receiving, storing, and accessing data on a computer;
- Sorting it and generating “contest data”;
- Tabulating votes and picking a winner.
The patent also uses other terms for common activities of general purpose computers, such as “transmitting” and “displaying” data.
In other words, the patent describes everyday use of computers, plus the idea of users participating in a contest. This is a classic abstract idea, and it never should have been eligible for a patent.
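To see just how generic the claimed steps are, here is a toy sketch of them as ordinary code. This is purely illustrative: the function and data names are invented for the example, and nothing here is taken from the patent’s actual claim language beyond the step descriptions above.

```python
from collections import Counter

def run_contest(entries, votes):
    """Receive and store entry data, tabulate votes, and pick a winner."""
    # "Receiving, storing, and accessing data on a computer"
    contest_data = {entry_id: content for entry_id, content in entries}

    # "Tabulating votes and picking a winner"
    tally = Counter(votes)
    winner_id, winner_votes = tally.most_common(1)[0]

    # "Transmitting" / "displaying" data: here, just returning it
    return winner_id, winner_votes, contest_data[winner_id]

entries = [("a", "selfie #1"), ("b", "selfie #2")]
votes = ["a", "b", "b", "a", "b"]
print(run_contest(entries, votes))  # ('b', 3, 'selfie #2')
```

A dictionary, a vote counter, and a max: that is the entire “invention” once the software language is stripped away, which is why claims like this are considered abstract ideas.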
In a 2017 article in CIO Review, the company acknowledges how incredibly broad its claims are. Contest Factory claims it patented “voting in online contests long before TV contest shows with public voting components made their appearance,” and that it holds patents “associated with online contests and integrating online voting with virtually any type of contest.”
Lawsuit Over Radio Station Contest
In its most recent lawsuit, Contest Factory says that a Minneapolis radio station’s “Mother’s Day Giveaway” for a mother/daughter spa day infringed its patent. The radio station asked people to post mother-daughter selfies online and share their entry to collect votes.
Contest Factory sued Pancake Labs (complaint), the company that helped the radio station put the contest online. Contest Factory also claimed a PBS contest in which viewers created short films and voted on them was an example of infringement.
For the “Mother’s Day Giveaway” contest, the patent infringement accusation reads in part that, “the executable instructions … cause the generation of a contest and the transmission of the first and second content data to at least one user to view and rate the content.”
Contest Factory has sued over quite a few internet contests, dating back more than a decade. Its 2016 lawsuits, based on the ‘715 patent and two earlier related patents, were filed against three small online marketing firms: Vancouver-based Strutta, Florida-based Elettro, and California-based Votigo, for contests that go back to 2011. We don’t know how many more companies or online communities have been threatened in all.
Sharing user-generated content like photos—cooperatively or competitively—is the kind of sharing that the digital world is ideal for. When patent owners demand a toll for these activities, it doesn’t matter whether they’re patent “trolls” or operating companies seeking to extract settlements from competitors. They threaten our freedoms in unacceptable ways.
The government shouldn’t be issuing patents like these, and it certainly shouldn’t be making them harder to challenge.
- Opus One d/b/a Contest Factory v. Pancake Labs complaint
- Opus One d/b/a Contest Factory v. Telescope complaint
- Opus One d/b/a Contest Factory v. Elettro complaint
- Opus One d/b/a Contest Factory v. Votigo complaint
- Opus One d/b/a Contest Factory v. Strutta complaint
Originally posted to the EFF’s Stupid Patent of the Month Series.
Filed Under: contests, on a computer, online contests, patents, stupid patent of the month
Companies: contest factory, opus one