Good afternoon. Thank you, Roberta and Sarath. As is customary, I’d like to note that my views are my own as Chair of the Securities and Exchange Commission, and I am not speaking on behalf of my fellow Commissioners or the staff.

Scarlett Johansson played a virtual assistant, Samantha, in the 2013 movie Her. The movie follows the love affair between Samantha and Theodore, a human played by Joaquin Phoenix.[1]

Late in the movie, Theodore is shaken when he gets an error message— “Operating System Not Found.” Upon Samantha’s return, he asks if she’s interacting with others. Yes, she responds, with 8,316 others. Plaintively, Theodore asks: You only love me, right? Samantha says she’s in love with 641 others.

Shortly thereafter, she goes offline for good. I’ll leave it to you if you’ll be watching Her tomorrow with your Valentine.

There has been much recent buzz about AI, including in several Super Bowl ads on Sunday.[2] The bulk has been about generative AI models, particularly large language models. Artificial intelligence, though, is much broader and isn’t new. You might remember Alan Turing from The Imitation Game movie and the cracking of the Enigma code. In 1950, he wrote a seminal paper, opening with, “I propose to consider the question, ‘Can machines think?’”[3]

We’ve already seen a lot of adoption of AI. Text prediction in our phones and emails has been commonplace for years. The Postal Service has been using it to predict addresses. It’s being used for natural language processing, translation software, recommender systems, radiology, robotics, and your virtual assistant.

It’s being used in the law, and right here at YLS. By a show of hands, how many of you have been using AI to summarize your readings or research? To draft a cover letter for a job application? To write something to a professor? To the faculty in the room, how do you feel about this show of hands?

SEC and Finance

I’d like to wish you all at Yale Law School a happy bicentennial. The SEC was established 110 years later in 1934. That year also saw YLS faculty member William O. Douglas come to the SEC. He later became our third Chair in 1937. Speaking to an audience of lawyers in 1934, he said: “Service to the client has been the slogan of our profession. And it has been observed so religiously that service to the public has been sadly neglected.”[4] When Douglas left for the Supreme Court in 1939, another YLS faculty member, Jerome Frank, for whom your legal services clinic is named, became our fourth Chair.[5]

The SEC oversees the $110 trillion capital markets. The essence of this is captured in our three-part mission to protect investors, facilitate capital formation, and maintain fair, orderly, and efficient markets.

Finance is about the pricing and allocation of money and risk throughout the economy. This happens through banks and nonbanks alike. In essence, finance sits in the middle like the neck of an hourglass whose grains of sand are money and risk.

AI: Opportunities and Challenges

AI is about using math, data, and computational power to find patterns and make predictions.
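
To make that concrete, here is a minimal sketch, in Python with invented numbers, of the core loop: use data and computation to find a pattern, then use the pattern to predict.

```python
import numpy as np

# Invented illustration: past observations of an input (x) and an outcome (y).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# "Find the pattern": a least-squares fit of y ≈ slope * x + intercept.
slope, intercept = np.polyfit(x, y, deg=1)

# "Make a prediction" for an input the model hasn't seen.
x_new = 6.0
print(f"predicted y at x={x_new}: {slope * x_new + intercept:.2f}")
```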

It opens up tremendous opportunities for humanity. As machines take on pattern recognition, particularly when done at scale, this can create great efficiencies across the economy. 

In finance, there are potential benefits of greater financial inclusion and enhanced user experience.

It is already being used for call centers, account openings, compliance programs, trading algorithms, sentiment analysis, and more. It has fueled a rapid change in the field of robo-advisers and brokerage apps.

AI also raises a host of issues that aren’t new but are accentuated by it. First, AI models’ decisions and outcomes are often unexplainable. Second, AI may make biased decisions, because the outcomes of its algorithms may be based on data reflecting historical biases. Third, just because these models make predictions doesn’t mean the predictions are accurate. If you’ve used one to draft a paper or find citations, beware, because it can hallucinate.

Beyond these general challenges, I’ll turn to issues about AI, finance, and the law.

Macro: System-wide risk

That brings me back to Her. Imagine it wasn’t Scarlett Johansson but some base model or data source on which 8,316 financial institutions were relying. That’s what we may face in finance.

We’ve seen in our economy how one or a small number of tech platforms can come to dominate a field. There’s one leading search engine, one leading retail platform, and three leading cloud providers.

I think, due to the economies of scale and network effects at play, we’re bound to see the same develop with AI.[6]

In fact, we’ve already seen affiliations between the three largest cloud providers and the leading generative AI companies.[7]

Thousands of financial entities are looking to build downstream applications relying on what is likely to be but a handful of base models upstream.

Such a development would promote both herding and network interconnectedness.  Individual actors may make similar decisions as they get a similar signal from a base model or rely on a data aggregator. Such network interconnectedness and monocultures are the classic problems that lead to systemic risk.[8] I know Roberta has written about monocultures.  
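
To illustrate the mechanism, here is a stylized simulation in Python. The parameters are invented (8,316 is a nod to Her, not an estimate), but it shows how firms relying on one shared base model land on the same side of the market far more often than firms relying on independent models:

```python
import numpy as np

rng = np.random.default_rng(0)
n_firms, n_days = 8_316, 250  # hypothetical counts, for illustration only

# The true daily signal that every firm's model is trying to estimate.
truth = rng.normal(0.0, 1.0, n_days)

def decisions(shared_model: bool) -> np.ndarray:
    """Each firm buys (+1) or sells (-1) based on its model's estimate."""
    if shared_model:
        # Monoculture: one upstream model's error is inherited by every firm.
        shared_error = rng.normal(0.0, 1.0, n_days)
        estimates = truth + shared_error + rng.normal(0.0, 0.1, (n_firms, n_days))
    else:
        # Diversity: each firm's model errs independently.
        estimates = truth + rng.normal(0.0, 1.0, (n_firms, n_days))
    return np.sign(estimates)

for shared in (False, True):
    d = decisions(shared)
    # Herding measure: how lopsided the market is, averaged across days.
    one_sidedness = np.abs(d.mean(axis=0)).mean()
    print(f"shared base model={shared}: average one-sidedness = {one_sidedness:.2f}")
```

When the upstream model errs, everyone errs together; that correlation is the monoculture problem.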

Thus, AI may play a central role in the after-action reports of a future financial crisis— and we won’t have Tom Cruise in Minority Report[9] to prevent it from happening.

While current model risk management guidance—generally written prior to this new wave of data analytics—will need to be updated, it won’t be sufficient.

The challenges to financial stability that AI may pose in the future will require new thinking on system-wide or macro-prudential policy interventions.

Regulators and market participants will need to think about the dependencies and interconnectedness created when potentially 8,316 brokenhearted financial institutions rely on a single AI model or data aggregator.

Micro: Deception, AI Washing, Hallucinations, and Conflicts

Deception and Manipulation

Two years before the 1984 movie, Beverly Hills Cop, the actual Beverly Hills cops arrested a robot in what may have been the first robot arrest ever.[10]

This brings us back to Turing’s question, “Can machines think?” What does that mean for securities law, particularly the laws related to fraud and manipulation?

Though parts of our securities laws have standards of strict liability,[11] such as conducting an unregistered offering, many of the key anti-fraud sections of the 1933, 1934, and 1940 acts require some form of intent or at least negligence. Did somebody knowingly or recklessly do something? Were they negligent?

Fraud is fraud, and bad actors have a new tool, AI, to exploit the public.[12] So what happens when you combine AI, finance, and the law of fraud?

Kara Stein, YLS class of ‘91 and former SEC commissioner, cowrote a paper about this.[13] She and her coauthor spoke to three categories of harm: programmable, predictable, and unpredictable.

The first, programmable harm, is straightforward—if you use an algorithm and are optimizing it to manipulate or defraud the public, that is fraud.

The second category, predictable harm, is also reasonably straightforward. Have you had a reckless or knowing disregard of the foreseeable risks of your actions, in this case, deploying a particular AI model? Did you act reasonably?

Under the securities laws, there are many things you can’t do. This includes front-running, meaning if you get a customer order, you aren’t supposed to trade in front of your customer. You aren’t supposed to spoof, in other words, place a fake order. You aren’t supposed to lie to the public. Investment advisers and broker-dealers aren’t supposed to place their interests ahead of their customers’ interests or give unsuitable or conflicted investment advice or recommendations.

That means you need to make sure your robot, I mean AI model, doesn’t do these things.

Investor protection requires the humans who deploy a model to put in place appropriate guardrails. Did those guardrails take into account current laws and regulations, such as those pertaining to front-running, spoofing, fraud, and providing advice or recommendations? Did they test the model before deployment, and how? Did they continue to test and monitor it? What is their governance plan—did they update the various guardrails for changing regulations, market conditions, and disclosures?

Do the guardrails take into account that AI models are hard to explain, from time to time hallucinate, and may strategically deceive users?[14]
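
To make the guardrail idea concrete, here is a minimal sketch in Python. Every name, data structure, and threshold is hypothetical; a real pre-trade compliance system would be far richer:

```python
from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    side: str      # "buy" or "sell"
    quantity: int
    source: str    # e.g., the model that proposed the order

# Hypothetical state the guardrail consults.
pending_customer_orders = {("ACME", "buy")}   # customer orders awaiting execution
recent_cancel_rate = {"ai_model_v3": 0.95}    # fraction of a source's orders later canceled

def guardrail_check(order: Order) -> list[str]:
    """Return reasons to block a model-proposed order; an empty list means it passes."""
    violations = []
    # Front-running guardrail: don't trade ahead of a pending customer order.
    if (order.symbol, order.side) in pending_customer_orders:
        violations.append("would trade ahead of a pending customer order")
    # Spoofing guardrail: a very high cancel rate suggests non-bona-fide orders.
    if recent_cancel_rate.get(order.source, 0.0) > 0.9:
        violations.append("order-cancel pattern consistent with spoofing")
    return violations

proposed = Order(symbol="ACME", side="buy", quantity=1_000, source="ai_model_v3")
problems = guardrail_check(proposed)
print("blocked:" if problems else "passed", problems)
```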

Now, some might ask what happens if the algorithm is learning and changing on its own. What if you were deploying HAL from 2001: A Space Odyssey[15] or Arnold Schwarzenegger’s Terminator?[16] If you knowingly deploy something that is self-learning, changing, and adapting, you still have important responsibilities to put guardrails in place in that scenario as well.

This now brings us to Kara’s third category. In essence, how does one hold liable the persons who deploy AI models that create truly unpredictable harm?

Some of that will play out in the courts. Right now, though, the opportunities for deception or manipulation most likely fall in the programmable and predictable harm categories rather than being truly unpredictable.

A famous early movie executive, Joseph Kennedy,[17] who later became the first SEC Chair, may have said it best. In his first speech, he said: “The Commission will make war without quarter on any who sell securities by fraud or misrepresentation.”[18]

AI Washing

Turning to AI washing, one might think about Everything Everywhere All at Once, in which the starring family owned a laundromat. While there has been online debate about whether AI was used to make the movie, the writer-director denies it.[19] When I think of AI washing, I think more about The Music Man, in which traveling salesman “Professor” Harold Hill goes to River City, Iowa, and cons the town into purchasing musical instruments for their children.[20]

President Franklin Roosevelt and Congress established the SEC as a merit-neutral agency. Investors get to decide what to invest in as long as there is full, fair, and truthful disclosure. Later, those two former YLS faculty members, Douglas and Frank, advised Congress on laws related to investment management, which also included disclosure.[21]

We’ve seen time and again that when new technologies come along, they can create buzz from investors as well as false claims from the Professor Hills of the day. If a company is raising money from the public, though, it needs to be truthful about its use of AI and associated risk.[22]

In the movie M3GAN, a robotics company has an AI-powered toy robot and presents it to investors and executives as bonding with a little girl. The company does NOT tell them that the scientist behind the robot is aware that the AI isn’t complete.[23]

As AI disclosures by SEC registrants increase,[24] the basics of good securities lawyering still apply. Claims about prospects should have a reasonable basis,[25] and investors should be told that basis. When disclosing material risks about AI—and a company may face multiple risks, including operational, legal, and competitive—investors benefit from disclosures particularized to the company, not from boilerplate language.

Companies should ask themselves some basic questions, such as: “If we are discussing AI in earnings calls or having extensive discussions with the board, is it potentially material?”[26]

These disclosure considerations may require companies to define for investors what they mean when referring to AI. For instance, how and where is it being used in the company? Is it being developed by the issuer or supplied by others?

Investment advisers or broker-dealers also should not mislead the public by saying they are using an AI model when they are not, nor claim to be using an AI model in a particular way when they are not. Such AI washing, whether by companies raising money or by financial intermediaries such as investment advisers and broker-dealers, may violate the securities laws.

So, if you are AI washing, as “Professor” Hill sang, “Ya Got Trouble.”

Hallucinations

Now let me turn to Keanu Reeves as Neo in The Matrix.[27] You may recall he was living in an AI-induced hallucination.

In the real world, AI models themselves also can hallucinate but don’t necessarily have Morpheus there to save them. Some lawyers using AI to write briefs have discovered that AI hallucinated case citations that looked real but were not.[28]

If an AI model can hallucinate a bad case citation, couldn’t an AI model used by a broker or investment adviser hallucinate an unsuitable or conflicted investment recommendation?

Investment advisers and broker-dealers are required not to place their interests ahead of investors’ interests.[29] Thus, investment advisers and brokers aren’t supposed to give investment advice or recommendations based on inaccurate or incomplete information.

You don’t want your broker or adviser recommending investments they hallucinated while on mushrooms. So, when brokers or advisers use an AI model, they must ensure that any recommendations or advice the model provides aren’t based on hallucinations or inaccurate information.
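
As a sketch of what that could look like in practice, a firm might cross-check a model’s output against verified reference data before any recommendation reaches an investor. The data source, field names, and tolerance below are all hypothetical:

```python
# Hypothetical verified reference data (in practice, an exchange or vendor feed).
VERIFIED_SECURITIES = {
    "ACME": {"exists": True, "last_price": 42.50},
}

def vet_recommendation(rec: dict) -> bool:
    """Cross-check a model's recommendation against verified data before relaying it."""
    record = VERIFIED_SECURITIES.get(rec["symbol"])
    if record is None or not record["exists"]:
        return False  # the model may have hallucinated the security itself
    # Reject claims that contradict verified data beyond a small tolerance.
    if abs(rec["claimed_price"] - record["last_price"]) > 0.05 * record["last_price"]:
        return False
    return True

# A recommendation citing a security that doesn't exist fails the check.
print(vet_recommendation({"symbol": "ZORG", "claimed_price": 10.0}))   # False
print(vet_recommendation({"symbol": "ACME", "claimed_price": 42.40}))  # True
```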

Conflicts

The streaming apps long ago figured out I’m a rom-com guy.

Today’s AI-based models provide an increasing ability to make predictions about each of us as individuals. Already, we receive messages from AI recommender systems that are considering how we might as individuals respond to their prompts, products, and pricing.

We all know some forms of these predictive data analytics well: the flashing button on your screen, the push notification, the colors, the sounds, the well-engineered subtleties of modern digital life.

But what if finance platforms figured out something else as subtle as some of our color preferences? My mom used to dress my identical twin brother, Rob, in red, and me, Gary, in green. Today, I might not react as favorably to green prompts.

You get to research whether Rob and I are more like Lindsay Lohan as Hallie and Annie in The Parent Trap[30] or James and Oliver Phelps, who played the Weasley twins in Harry Potter.[31]

If the optimization function in the AI system is taking the interest of the platform into consideration as well as the interest of the customer, this can lead to conflicts of interest. In finance, when brokers or advisers act on those conflicts and optimize to place their interests ahead of their investors’ interests, investors may suffer financial harm.
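
A toy sketch shows how this works; the products, numbers, and weighting here are invented. Once the objective puts enough weight on the platform’s revenue, the recommendation flips away from the customer’s best option:

```python
# Two hypothetical products: (expected benefit to the customer, revenue to the firm).
products = {
    "low_fee_index_fund": (8.0, 1.0),
    "high_fee_house_fund": (5.0, 6.0),
}

def recommend(platform_weight: float) -> str:
    """Pick the product maximizing a blend of customer and platform interests."""
    def score(name: str) -> float:
        customer_benefit, firm_revenue = products[name]
        return (1 - platform_weight) * customer_benefit + platform_weight * firm_revenue
    return max(products, key=score)

for w in (0.0, 0.25, 0.5):
    print(f"platform weight {w:.2f}: recommend {recommend(w)}")
```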

As to the Rob-red, Gary-green example, are firms communicating with me in a color other than green because it’ll be good for my investment decisions, or because it might benefit the firm’s revenues, profits, or other interests?

That’s why the SEC proposed a rule last year regarding how best to address such potential conflicts across the range of investor interactions.[32] The SEC’s Deputy Director of the Division of Investment Management Sarah ten Siethoff, YLS class of ’00, is one of the key leads on this rule.

Conclusion

I had some fun with movies in this speech, but also quoted Kennedy and Douglas, individuals critical to creating, shaping, and interpreting the securities laws—laws that Congress established to protect the investing and issuing public.

I hope all of you in this audience, whether you go on to advise clients or to serve in leadership roles, take these leaders’ words to heart.

When I asked the SEC’s Deputy Chief of the Crypto Assets and Cyber Unit Jorge Tenreiro, YLS class of ’06, about this speech, he suggested I start with a reference to the 2014 movie Ex Machina.[33] The CEO of a search engine company administers a Turing Test between his robot played by Alicia Vikander and an unsuspecting programmer. The robot passes the test but … well, it’s a bit dark.

I chose to start, rather, with Scarlett Johansson and Her. In part, that’s because I’m a bit of a rom-com guy, but there’s more to it. The story of Samantha and Theodore showed both the great potential of AI for humanity and some of its inherent risks. It also had a happy ending: Theodore reconnects with Amy, a real human, played by Amy Adams.

Similarly, our role at the SEC is to allow issuers and investors to benefit from the great potential of AI while ensuring that we guard against the inherent risks I’ve discussed today.


[1] See Freya Keeping, GameRant, “Her, Ending Explained” (Oct. 22, 2023), available at https://gamerant.com/her-ending-explained/.

[2] See Gael Cooper, CNET, “AI Scores in Super Bowl Commercials: You Can Watch Them Here” (Feb. 12, 2024), available at https://www.cnet.com/tech/ai-scores-in-super-bowl-commercials-you-can-watch-them-here/.

[3] See A.M. Turing, “Computing Machinery and Intelligence” (Oct. 1950), available at https://phil415.pbworks.com/f/TuringComputing.pdf.  

[4] See William O. Douglas, “Address delivered to Duke Bar Association” (April 22, 1934), available at https://www.sec.gov/news/speech/1934/042234douglas.pdf.

[5] See “Historical Profile: Jerome N. Frank” (Feb. 8, 2024), available at https://law.yale.edu/yls-today/news/historical-profile-jerome-n-frank.

[6] See Gary Gensler, “‘Isaac Newton to AI’: Remarks before the National Press Club” (July 17, 2023), available at https://www.sec.gov/news/speech/gensler-isaac-newton-ai-remarks-07-17-2023.

[7] Microsoft has partnered with OpenAI; Alphabet’s Google has its own Bard, now rebranded as Gemini; and Amazon has partnered with Anthropic. Also see Leah Nylen, Bloomberg, “Alphabet, Amazon, Microsoft Face FTC Inquiry on AI Partners” (January 25, 2024), available at https://www.bloomberg.com/news/articles/2024-01-25/alphabet-amazon-anthropic-microsoft-openai-get-ftc-inquiry-lrthp0es.

[8] See Gary Gensler and Lily Bailey, “Deep Learning and Financial Stability” (Nov. 13, 2020), available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3723132.

[9] See IMDb “Minority Report Plot,” available at https://www.imdb.com/title/tt0181689/plotsummary/.

[10] See Matt Novak, Gizmodo, “Was This The First Robot Ever Arrested?” (Feb. 18, 2014), available at https://gizmodo.com/was-this-the-first-robot-ever-arrested-1524686968. More recent robot arrests have happened as well. For instance, see Arjun Kharpal, CNBC, “Robot with $100 bitcoin buys drugs, gets arrested” (April 22, 2015), available at https://www.cnbc.com/2015/04/21/robot-with-100-bitcoin-buys-drugs-gets-arrested.html; see also “A Robot Was Just ‘Arrested’ by Russian Police” (Sept. 20, 2016), available at https://www.sciencealert.com/a-robot-was-just-arrested-by-russian-police.

[11] Some in Congress have proposed imposing strict liability on the use of AI models. See proposed legislation Financial Artificial Intelligence Risk Reduction Act (FAIRR Act), S. 3554 118th Cong. (2023). Among other things, Section 7 of the FAIRR Act would amend the Securities Exchange Act of 1934, 15 U.S.C. § 78a et seq, to add a new Section 42 essentially imposing strict liability on “[a]ny person who, directly or indirectly, deploys or causes to be deployed, an artificial intelligence model” for “all acts, practices or conduct engaged in by such model, and any outcome resulting from the use of such model” unless the person took reasonable steps to prevent such acts.

[12] See SEC Office of Investor Education and Advocacy, North American Securities Administrators Association, and Financial Industry Regulatory Authority, “Artificial Intelligence (AI) and Investment Fraud” (January 25, 2024), available at FINRA.org.

[13] See Robin Feldman and Kara Stein, “AI Governance in the Financial Industry,” 27 Stan. J.L. Bus. & Fin. 94 (2022), available at https://repository.uchastings.edu/faculty_scholarship/1867. Also see Gina-Gail Fletcher, “The Future of AI Accountability in the Financial Markets,” 24 Vanderbilt Journal of Entertainment and Technology Law 289 (2022), available at https://scholarship.law.vanderbilt.edu/jetlaw/vol24/iss2/3/.

[14] See Jeremy Scheurer, Mikita Balesni, and Marius Hobbhahn, “Technical Report: Large Language Models can Strategically Deceive their Users when Put Under Pressure” (November 27, 2023), available at https://doi.org/10.48550/arXiv.2311.07590.

[15] See New York Times, “The Story of a Voice: HAL in ‘2001’ Wasn’t So Eerily Calm” (March 30, 2018), available at https://www.nytimes.com/2018/03/30/movies/hal-2001-a-space-odyssey-voice-douglas-rain.html.

[16] See IMDb “The Terminator,” available at https://www.imdb.com/title/tt0088247/.

[17] See IMDb, “Joseph P. Kennedy,” available at https://www.imdb.com/name/nm0448132/.

[18] See “Address of Hon. Joseph P. Kennedy, Chairman of Securities and Exchange Commission, at National Press Club” (July 25, 1934), available at https://www.sec.gov/news/speech/1934/072534kennedy.pdf.

[19] See Margeaux Sippell, “Daniel Scheinert Wants to Set the Record Straight About AI and Everything Everywhere All At Once” (Aug. 22, 2023), available at https://www.moviemaker.com/daniel-scheinert-everything-everywhere-ai/.  

[20] See IMDb “The Music Man” available at https://www.imdb.com/title/tt0056262/.  

[21] See Securities and Exchange Commission, “Report on Investment Counsel, Investment Management, Investment Supervisory, and Investment Advisory Services” (1939), available at https://babel.hathitrust.org/cgi/pt?id=mdp.35112101732404&seq=5.

[22] See Compl. in SEC v. Tadrus, No. 23 Civ. 5708 (FB) (Dkt. No. 1); see also Mina Tadrus; Tadrus Capital, SEC Litigation Rel. No. 25798 (Aug. 2, 2023), available at https://www.sec.gov/litigation/litreleases/lr-25798.

[23] See Owen Gleiberman, Variety, “’M3GAN’ Review: A Robot-Doll Sci-Fi Horror Movie That’s Creepy, Preposterous and Diverting,” (January 4, 2023), available at 'M3GAN' Review: Creepy, Preposterous and Diverting (variety.com).   

[24] See Matthew Bultman, “AI Disclosures to SEC Jump as Agency Warns of Misleading Claims,” Bloomberg Law (February 8, 2024), available at bloomberglaw.com.

[25] See 17 CFR 229.10 of Regulation S-K.

[26] See Holly Gregory, “AI and the Role of the Board of Directors,” Harvard Law School Forum on Corporate Governance (October 7, 2023), available at harvard.edu.

[27] See IMDb, “The Matrix” available at https://www.imdb.com/title/tt0133093/.

[28] See Sara Merken, “New York lawyers sanctioned for using fake ChatGPT cases in legal brief,” Reuters (June 26, 2023), available at https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/.

[29] This is the same standard that the SEC applies under Regulation Best Interest (Reg BI) to brokers when they make recommendations to retail investors or to advisers—under the SEC’s interpretation of fiduciary duty under the Investment Advisers Act of 1940—when they provide investment advice.

[30] See IMDb, “The Parent Trap,” available at https://www.imdb.com/title/tt0120783/.

[31] See Carola Dager, GameRant, “Harry Potter: What Are the Differences Between Fred and George Weasley?” (Nov. 11, 2023), available at https://gamerant.com/harry-potter-fred-george-weasley-differences/.

[32] Conflicts of Interest Associated With the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers, 88 Fed. Reg. 53,960 (Aug. 9, 2023) (to be codified at 17 C.F.R. pt. 240 and 275).

[33] See IMDb “Ex Machina,” available at https://www.imdb.com/title/tt0470752/.
