Good morning. In today’s newsletter: the potential consequences of A.I. for the financial system; the trucking company Yellow files for bankruptcy; the head of the New York Fed says interest rates may have peaked; and what to watch for this week.
A financial regulator issues a warning on A.I.
Gary Gensler, the chairman of the S.E.C., has been studying the potential consequences of artificial intelligence for years. The recent proliferation of generative A.I. tools like ChatGPT has demonstrated that the technology is set to transform business and society.
Gensler outlined some of his biggest concerns in an interview with DealBook’s Ephrat Livni.
A.I. could be the next big systemic risk to the financial system. In 2020, Gensler co-wrote a paper on deep learning and financial stability. It concluded that, given how network and platform effects have benefited tech giants in the past, just a few A.I. companies will build the foundational models underpinning the tools that many businesses come to rely on.
Gensler expects that the United States will most likely end up with two or three foundational A.I. models. That concentration will deepen interconnections across the economic system and make a financial crash more likely: when one model or data set becomes central, it encourages “herding” behavior, with everyone relying on the same information and responding similarly.
“This technology will be the center of future crises, future financial crises,” Gensler said. “It has to do with this powerful set of economics around scale and networks.”
A.I. models may put companies’ interests ahead of investors’. The meme stock frenzy driven by social media and the rise of retail trading on apps highlighted the power of nudges and predictive algorithms. But are companies that use A.I. to study investor behavior or recommend trades prioritizing user interests when they act on that information?
The S.E.C. last month proposed a rule that would require platforms to eliminate conflicts of interest in their technology.
Who is responsible if generative A.I. gives faulty financial advice? “Investment advisers under the law have a fiduciary duty, a duty of care, and a duty of loyalty to their clients,” Gensler said. “And whether you’re using an algorithm, you have that same duty of care.”
Precisely who is legally liable for A.I. is a matter of debate among policymakers. But Gensler says it’s fair to ask companies to build mechanisms that are safe, and to insist that anyone who deploys a chatbot is not delegating responsibility to the tech. “There are humans that build the models that set up parameters,” he said.