Monday, October 30, 2023

New White House executive order on AI seeks to address security risks

 Instead of preventing AI harms before deployment—for example, by making tech companies’ data practices better—the White House is using a “whack-a-mole” approach, tackling problems that have already emerged, she adds.  

The highly anticipated executive order on artificial intelligence comes two days before the UK’s AI Safety Summit and attempts to position the US as a global leader on AI policy. 

It will likely have implications outside the US, adds Bradford. It will set the tone for the UK summit and will likely embolden the European Union to finalize its AI Act, as the executive order sends a clear message that the US agrees with many of the EU’s policy goals.

“The executive order is probably the best we can expect from the US government at this time,” says Bradford.   

VIDEO


cyberscoop.com

White House executive order on AI seeks to address security risks

mbracken
7 - 9 minutes

The White House announced a long-awaited executive order on Monday that attempts to mitigate the security risks of artificial intelligence while harnessing the potential benefits of the technology. 

Coming nearly a year after the release of ChatGPT — the viral chatbot that captured public attention and kicked off the current wave of AI frenzy — Monday’s executive order aims to walk a fine line between over-regulating a new and potentially groundbreaking technology and addressing its risks.

The order directs leading AI labs to notify the U.S. government of training runs that produce models with potential national security risks, instructs the National Institutes of Standards and Technology to develop frameworks for how to adversarially test AI models, and establishes an initiative to harness AI to automatically find and fix software vulnerabilities, among other measures. 

Addressing questions of privacy, fairness and existential risks associated with AI models, Monday’s order is a sweeping attempt to lay the groundwork for a regulatory regime at a time when policymakers around the world are scrambling to write rules for AI. A White House fact sheet describes the order as containing “the most sweeping actions ever taken to protect Americans from the potential risks of AI systems.”

Experts welcomed the order on Monday but cautioned that its potential impacts will depend on how it is implemented and the ability to fund various initiatives. Key provisions of the order, such as a call for addressing the privacy risks of AI models, will require Congress to act on federal privacy legislation, a legislative priority that remains stalled. 

Sen. Mark Warner, D-Va., said in a statement that while he is “impressed by the breadth” of the order, “much of these just scratch the surface — particularly in areas like health care and competition policy.”

“While this is a good step forward, we need additional legislative measures, and I will continue to work diligently to ensure that we prioritize security, combat bias and harmful misuse, and responsibly roll out technologies,” Warner said.

More broadly, the executive order represents a shift in how Washington approaches technology regulation and is informed in part by the failure to regulate social media platforms. Having failed to address the impact of social media platforms on everything from elections to teen mental health, policymakers in Washington are keen to not be caught flat-footed again in writing rules for AI. 

“This proactive approach is radically different from how the government has regulated new technologies in the past, and for good reason,” said Chris Wysopal, the CTO and co-founder of Veracode. “The same ‘wait and see’ strategy that the government took to regulate the internet and social media is not going to work here.”

This proactive approach, however, is one that some industry groups and free-market advocates caution could stifle innovation at an early stage of AI innovation.

“The administration is adopting an everything-and-the-kitchen-sink approach to AI policy that is, at once, extremely ambitious and potentially overzealous,” said Adam Thierer, a senior fellow at the free-market think tank R Street. “The order represents a potential sea change in the nation’s approach to digital technology markets as federal policymakers appear ready to shun the open innovation model that made American firms global leaders in almost all computing and digital technology sectors.”

Monday’s order takes a series of steps to address some of the most severe potential risks of AI, including its threat to critical infrastructure and its potential use as an aid to create novel biological weapons, in the design of nuclear weapons or the creation of malicious software.

To address growing concerns that AI could be used to supercharge disinformation used to influence elections — especially in next year’s presidential election — Monday’s order will require the Department of Commerce to develop guidance for “content authentication and watermarking” to show clearly labeled marks for AI-generated content.

The administration’s initiative to build cybersecurity tools to automatically find and fix software flaws builds on an ongoing competition at the Defense Advanced Projects Research Agency, and experts on Monday welcomed the focus on trying to harness AI to deliver broad improvements in computer security.  

The goal is to raise the barrier to entry in using these tools to either create malware or assist in cyber operations. “It feels like the early days of antivirus,” said David Brumley, a cybersecurity professor at Carnegie Mellon University and the CEO of the cybersecurity firm ForAllSecure. “I know it’s malicious when I see it and I can prevent that same malicious thing from occurring, but it’s hard to proactively prevent someone from creating more malware.”

Brumley cautioned that the agencies that Monday’s order relies on to implement new safety initiatives may lack the capacity to carry them out. The order, for example, calls on NIST to develop standards for performing safety tests of AI systems and directs the Department of Homeland Security to apply those standards to the critical infrastructure sectors it oversees. 

NIST will likely have to engage with outside experts to develop these standards, as it currently lacks the right know-how. “They’re relying on very traditional government agencies like NIST that have no expertise in this,” Brumley said.

DHS’ Homeland Threat Assessment recently called out AI as one of the more pertinent threats to critical infrastructure, warning that China and other adversaries are likely to use AI to develop industrial-specific malware.

“Malicious cyber actors have begun testing the capabilities of AI-developed malware and AI-assisted software development — technologies that have the potential to enable larger scale, faster, efficient, and more evasive cyber attacks — against targets, including pipelines, railways, and other U.S. critical infrastructure,” the DHS report reads.

The federal government is beginning to address these threats, as with the National Security Agency’s announcement last month of an AI Security Center that will oversee the development and use of AI. Monday’s order contains additional initiatives to address these more narrow security concerns, including the creation of an AI Safety and Security Board housed within DHS. What authority will be given to the board and its similarity to other review bodies, such as the Cyber Safety Review Board, remain to be seen.

The order also calls on the National Security Council and White House chief of staff to develop a national security memorandum that lays out how the military and intelligence community will use AI “safely, ethically, and effectively” in missions, as well as direct actions to counter adversary use of AI.


Policy

Three things to know about the White House’s executive order on AI

Experts say its emphasis on content labeling, watermarking, and transparency represents important steps forward.

10 - 12 minutes

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what's coming next. You can read more from the series here.

The US has set out its most sweeping set of AI rules and guidelines yet in an executive order issued by President Joe Biden today. The order will require more transparency from AI companies about how their models work and will establish a raft of new standards, most notably for labeling AI-generated content. 

The goal of the order, according to the White House, is to improve “AI safety and security.” It also includes a requirement that developers share safety test results for new AI models with the US government if the tests show that the technology could pose a risk to national security. This is a surprising move that invokes the Defense Production Act, typically used during times of national emergency.

The executive order advances the voluntary requirements for AI policy that the White House set back in August, though it lacks specifics on how the rules will be enforced. Executive orders are also vulnerable to being overturned at any time by a future president, and they lack the legitimacy of congressional legislation on AI, which looks unlikely in the short term.  

“The Congress is deeply polarized and even dysfunctional to the extent that it is very unlikely to produce any meaningful AI legislation in the near future,” says Anu Bradford, a law professor at Columbia University who specializes in digital regulation.

Nevertheless, AI experts have hailed the order as an important step forward, especially thanks to its focus on watermarking and standards set by the National Institute of Standards and Technology (NIST). However, others argue that it does not go far enough to protect people against immediate harms inflicted by AI.

Here are the three most important things you need to know about the executive order and the impact it could have. 

What are the new rules around labeling AI-generated content? 

The White House’s executive order requires the Department of Commerce to develop guidance for labeling AI-generated content. AI companies will use this guidance to develop labeling and watermarking tools that the White House hopes federal agencies will adopt. “Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world,” according to a fact sheet that the White House shared over the weekend. 

The hope is that labeling the origins of text, audio, and visual content will make it easier for us to know what’s been created using AI online. These sorts of tools are widely proposed as a solution to AI-enabled problems such as deepfakes and disinformation, and in a voluntary pledge with the White House announced in August, leading AI companies such as Google and Open AI pledged to develop such technologies

The trouble is that technologies such as watermarks are still very much works in progress. There currently are no fully reliable ways to label text or investigate whether a piece of content was machine generated. AI detection tools are still easy to fool

The executive order also falls short of requiring industry players or government agencies to use these technologies.

On a call with reporters on Sunday, a White House spokesperson responded to a question from MIT Technology Review about whether any requirements are anticipated for the future, saying, “I can imagine, honestly, a version of a call like this in some number of years from now and there'll be a cryptographic signature attached to it that you know you’re actually speaking to [the White House press team] and not an AI version.” This executive order intends to “facilitate technological development that needs to take place before we can get to that point.”

The White House says it plans to push forward the development and use of these technologies with the Coalition for Content Provenance and Authenticity, called the C2PA initiative. As we’ve previously reported, the initiative and its affiliated open-source community has been growing rapidly in recent months as companies rush to label AI-generated content. The collective includes some major companies like Adobe, Intel, and Microsoft and has devised a new internet protocol that uses cryptographic techniques to encode information about the origins of a piece of content.

The coalition does not have a formal relationship with the White House, and it’s unclear what that collaboration would look like. In response to questions, Mounir Ibrahim, the cochair of the governmental affairs team, said, “C2PA has been in regular contact with various offices at the NSC [National Security Council] and White House for some time.”

The emphasis on developing watermarking is good, says Emily Bender, a professor of linguistics at the University of Washington. She says she also hopes content labeling systems can be developed for text; current watermarking technologies work best on images and audio. “[The executive order] of course wouldn’t be a requirement to watermark, but even an existence proof of reasonable systems for doing so would be an important step,” Bender says.

Will this executive order have teeth? Is it enforceable? 

While Biden’s executive order goes beyond previous US government attempts to regulate AI, it places far more emphasis on establishing best practices and standards than on how, or even whether, the new directives will be enforced. 

The order calls on the National Institute of Standards and Technology to set standards for extensive “red team” testing—meaning tests meant to break the models in order to expose vulnerabilities—before models are launched. NIST has been somewhat effective at documenting how accurate or biased AI systems such as facial recognition are already. In 2019, a NIST study of over 200 facial recognition systems revealed widespread racial bias in the technology.

However, the executive order does not require that AI companies adhere to NIST standards or testing methods. “Many aspects of the EO still rely on voluntary cooperation by tech companies,” says Bradford, the law professor at Columbia.

The executive order requires all companies developing new AI models whose computational size exceeds a certain threshold to notify the federal government when training the system and then share the results of safety tests in accordance with the Defense Production Act. This law has traditionally been used to intervene in commercial production at times of war or national emergencies such as the covid-19 pandemic, so this is an unusual way to push through regulations. A White House spokesperson says this mandate will be enforceable and will apply to all future commercial AI models in the US, but will likely not apply to AI models that have already been launched. The threshold is set at a point where all major AI models that could pose risks “to national security, national economic security, or national public health and safety” are likely to fall under the order, according to the White House’s fact  sheet. 

The executive order also calls for federal agencies to develop rules and guidelines for different applications, such as supporting workers’ rights, protecting consumers, ensuring fair competition, and administering government services. These more specific guidelines prioritize privacy and bias protections.

“Throughout, at least, there is the empowering of other agencies, who may be able to address these issues seriously,” says Margaret Mitchell, researcher and chief ethics scientist at AI startup Hugging Face. “Albeit with a much harder and more exhausting battle for some of the people most negatively affected by AI, in order to actually have their rights taken seriously.”

What has the reaction to the order been so far? 

Major tech companies have largely welcomed the executive order. 

Brad Smith, the vice chair and president of Microsoft, hailed it as “another critical step forward in the governance of AI technology.” Google’s president of global affairs, Kent Walker, said the company looks “forward to engaging constructively with government agencies to maximize AI’s potential—including by making government services better, faster, and more secure.”

“It’s great to see the White House investing in AI’s growth by creating a framework for responsible AI practices,” said Adobe’s general counsel and chief trust officer, Dana Rao. 

The White House’s approach remains friendly to Silicon Valley, emphasizing innovation and competition rather than limitation and restriction. The strategy is in line with the policy priorities for AI regulation set forth by Senate Majority Leader Chuck Schumer, and it further crystallizes the lighter touch of the American approach to AI regulation. 

However, some AI researchers say that sort of approach is cause for concern. “The biggest concern to me in this is it ignores a lot of work on how to train and develop models to minimize foreseeable harms,” says Mitchell.

Democrat's initial fears about New GOP speakership were well-founded

 

www.axios.com

Israel aid maneuver fuels Democratic fears about new House Speaker Mike Johnson

Andrew Solender
4 - 5 minutes

House Speaker Mike Johnson. Photo: Ronda Churchill/Bloomberg via Getty Images.

Democrats are taking House Speaker Mike Johnson's (R-La.) efforts to offset aid to Israel with funds from one of their signature pieces of legislation as an early sign that their initial fears about his speakership were well-founded.

Why it matters: The emergency Israel funding will need bipartisan support to become law — as will bills to support Ukraine and Taiwan, shore up border security and avert a government shutdown.

Driving the news: The supplemental appropriations package would offset $14.3 billion in military assistance to Israel by rescinding an equal amount in IRS funding from the Inflation Reduction Act.

  • The rescission adds to Democratic frustration over Johnson's decision to try to pass Israel aid as a standalone bill, rather than tying it to Ukraine and Taiwan aid and border security funding.
  • Senate Majority Leader Chuck Schumer (D-N.Y.) told reporters it would make the bill "much harder to pass" in the Senate.

What they're saying: Democratic lawmakers and aides told Axios they see the offset as a sign that Johnson plans to take a firmly partisan approach to governing as he steps into the speakership.

  • "Obviously, we always want to give people the benefit of the doubt when they step into leadership positions, but this is totally ridiculous," said Rep. Jared Moskowitz (D-Fla.).
  • One senior House Democrat called the rescission "very problematic" and said it "doesn't seem like this is a good start for the new speaker."
  • "Instead of introducing a clean aid package … the new Republican Speaker has chosen to put a poison pill" in the bill, said Rep. Ritchie Torres (D-N.Y.). "The politicizing of Israel in a time of war is nothing short of disgraceful."

A spokesperson for Johnson did not respond to a request for comment.

Zoom in: Raising Democrats' ire even further is the fact that the rescission would cut against Republicans' stated goal of saving as much money as they spend by dampening tax collection enforcement and thus diminishing federal revenue. . .

The story continues

Kari Lake has plans ----

Now, Lake is using her Arizona campaign experience to catapult herself to Washington, D.C., as part of Republicans’ efforts to seize control of the upper chamber, which is narrowly held by Democrats. “The Senate is so critical right now. I mean, we don't know what's going to happen in this next election,” Lake said. 

“There's a chance maybe we won’t hold Congress. Now, I believe we're gonna get President Trump in the White House. But God forbid that doesn't happen, the Senate is what is going to be holding our country together by a thread. So we have to make sure that the Senate is firmly in the hands of Republicans."  headtopics.com  

The big picture: Lake is courting more establishment Republican support this time around, fielding meetings with allies of Senate Minority Leader Mitch McConnell and representatives from the Senate Leadership Fund and National Republican Senatorial Committee, Politico reports.

Top stories 

 

24 hours ago · I'm running for Senate because our country can't keep going in the direction Biden, Gallego & Sinema have us headed I want to make sure we are relentless in ...
21 hours ago · Republicans are at an advantage in the 2024 Senate election. Read here about which candidates are most at risk.
kari lake from news.yahoo.com
19 hours ago · Republicans are at an advantage in the 2024 Senate election. Read here about which candidates are most at risk.
People also search

 

EXCLUSIVE — Republican candidate Kari Lake is facing one of the most competitive Senate races in the country as she prepares for a possible three-way contest in Arizona against Rep. Ruben Gallego (D-AZ) and incumbent Sen. Kyrsten Sinema (I-AZ). But the staunch conservative isn’t backing down, saying many on the other side of the aisle may end up backing her bid instead. 

Kari Lake says ‘disaffected’ Democrats and independents are key to her Arizona Senate bid

dcexaminer
4 - 6 minutes

 / Source: dcexaminer

 

. . .The Senate election in Arizona is expected to be one of the most competitive races of the 2024 cycle. The election took on new significance late last year after Sen. Kyrsten Sinema (I-AZ) announced she would be leaving the Democratic Party to instead identify as an independent, opening the door for a three-way race in a vital swing state.

Gallego has already announced his bid to challenge Sinema, which could threaten to split the Democratic and independent votes should the incumbent choose to run for reelection.“I think we have a really great opportunity,” Lake said. headtopics.com 

Cami Mondeaux is a congressional reporter. She started with the Washington Examiner as a copy editor, later joining the breaking news team and eventually settling on the Congress beat. A Utah native, Cami graduated from Westminster College in Salt Lake City in 2021 and covered state government as a breaking news reporter for KSL News Radio.

RELATED 

Kari Lake's lawsuit over metro Phoenix's electronic tabulation systems has been tossed out | AP News

3 - 4 minutes

FILE - Republican Kari Lake waves to supporters as she announces her plans to run for the Arizona U.S. Senate seat during a rally, Tuesday, Oct. 10, 2023, in Scottsdale, Ariz. On Monday, Oct. 16, a federal appeals court tossed out a lawsuit brought by former Arizona gubernatorial candidate Lake that was previously dismissed, challenging use of electronic voting machines and sought to ban them in last year’s midterm elections. (AP Photo/Ross D. Franklin, File)

FILE - Republican Kari Lake waves to supporters as she announces her plans to run for the Arizona U.S. Senate seat during a rally, Tuesday, Oct. 10, 2023, in Scottsdale, Ariz. On Monday, Oct. 16, a federal appeals court tossed out a lawsuit brought by former Arizona gubernatorial candidate Lake that was previously dismissed, challenging use of electronic voting machines and sought to ban them in last year’s midterm elections. (AP Photo/Ross D. Franklin, File)

PHOENIX (AP) — A federal appeals court tossed out a lawsuit brought by former Arizona gubernatorial candidate Kari Lake that was previously dismissed, challenging the use of electronic tabulation systems and that sought to ban them in last year’s midterm elections.

Lake and failed Arizona Secretary of State candidate Mark Finchem, both Republicans, filed a lawsuit in April 2022 that alleged the ballot tabulation systems were not trustworthy.

The former Phoenix TV anchor wound up losing her race by more than 17,000 votes while Finchem lost by over 120,000 votes.

In the ruling Monday, the 9th U.S. Circuit Court of Appeals said their claims didn’t show “a plausible inference that their individual votes in future elections will be adversely affected by the use of electronic tabulation, particularly given the robust safeguards in Arizona law, the use of paper ballots, and the post-tabulation retention of those ballots.”

Messages left for lawyers for Lake and Finchem seeking comment on the appeal court’s ruling weren’t returned Tuesday.

Still pending is a ruling in another lawsuit that Lake filed this year demanding that Arizona’s most populous county release images of 1.3 million ballot envelopes signed by voters under the state’s public records law.

Lake is among the most vocal of last year’s Republican candidates promoting former President Donald Trump’s election lies, which she made the centerpiece of her campaign.

While most other election deniers around the country conceded after losing their races in November, Lake did not. She is campaigning for U.S. Senate and is regarded as a contender to be Trump’s running mate in his 2024 campaign.

___

This story has been updated to correct that Lake’s lawsuit challenged the use of electronic tabulation systems, not voting machines.

BEA News: Gross Domestic Product by State and Personal Income by State, 3rd Quarter 2025

  BEA News: Gross Domestic Product by State and Personal Income by S...