Interviewer Lex Fridman, an AI researcher at MIT, asked Altman for his thoughts on the recently released and widely circulated open letter demanding an AI pause. In response, the OpenAI founder shared some of his critiques. “An earlier version of the letter claimed OpenAI is training GPT-5 right now. We are not, and won’t for some time,” Altman noted. “So in that sense, [the letter] was sort of silly.” . . . Even in Thursday’s MIT interview, not everything the controversial entrepreneur said rang true.
Asked if OpenAI will continue to be transparent going forward, Altman said “we certainly plan to continue doing that.” Except the question itself is a misleading softball. OpenAI, which was once a truly open-source, non-profit organization, has become an increasingly closed-off, for-profit corporation. GPT-4, especially, is a black box. The company has not released any information about the training data its most recent chatbot was fine-tuned on. Nor has it shared any details about GPT-4's architecture, construction, or other inner workings.
OpenAI's Sam Altman Says There's No ChatGPT-5 to Worry About...Yet
"Sam Altman has squashed rumors that OpenAI is already working on ChatGPT-5, just a month after the company’s release of its GPT-4. Currently, there is no GPT-5 in training, Altman said while speaking virtually at an event at the Massachusetts Institute of Technology.
(Warning! Microsoft Wants ChatGPT to Control Robots Next)
TAKEAWAY: ". . .Regardless of where you stand on the call for a six-month AI moratorium, though, Altman’s answer to the open letter is, ultimately, something of a non-answer."
RELATED CONTENT
OpenAI’s CEO confirms the company isn’t training GPT-5 and “won’t for some time”
"In a discussion about threats posed by AI systems, Sam Altman, OpenAI’s CEO and co-founder, has confirmed that the company is not currently training GPT-5, the presumed successor to its AI language model GPT-4, released this March.
Speaking at an event at MIT, Altman was asked about a recent open letter circulated among the tech world that requested that labs like OpenAI pause development of AI systems “more powerful than GPT-4.” The letter highlighted concerns about the safety of future systems but has been criticized by many in the industry, including a number of signatories. Experts disagree about the nature of the threat posed by AI (is it existential or more mundane?) as well as how the industry might go about “pausing” development in the first place. . . You can watch a video of the exchange below:
GPT hype and the fallacy of version numbers
Altman’s comments are interesting — though not necessarily because of what they reveal about OpenAI’s future plans. Instead, they highlight a significant challenge in the debate about AI safety: the difficulty of measuring and tracking progress. Altman may say that OpenAI is not currently training GPT-5, but that’s not a particularly meaningful statement.
Some of the confusion can be attributed to what I call the fallacy of version numbers: the idea that numbered tech updates reflect definite and linear improvements in capability. It’s a misconception that’s been nurtured in the world of consumer tech for years, where numbers assigned to new phones or operating systems aspire to the rigor of version control but are really just marketing tools. “Well of course the iPhone 35 is better than the iPhone 34,” goes the logic of this system. “The number is bigger ipso facto the phone is better.”
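As a side note to make that point concrete, here is a minimal sketch (our illustration, not anything from the quoted article) using Python and the third-party packaging library: comparing version numbers yields an ordering over labels, and under semantic versioning at most a compatibility signal, never a measure of capability.

```python
# Minimal sketch: a version comparison orders labels; it measures nothing.
# Requires the third-party "packaging" library (pip install packaging).
from packaging.version import Version

old, new = Version("1.9.9"), Version("2.0.0")

print(new > old)              # True: "2.0.0" sorts after "1.9.9"
print(new.major > old.major)  # True: under semver, a major bump signals
                              # that breaking changes are permitted
# Neither comparison says anything about how capable either release is;
# "a bigger number" is an ordering claim, not a quality claim.
```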
Because of the overlap between the worlds of consumer tech and artificial intelligence, this same logic is now often applied to systems like OpenAI’s language models. This is true not only of the sort of hucksters who post hyperbolic 🤯 Twitter threads 🤯 predicting that superintelligent AI will be here in a matter of years because the numbers keep getting bigger but also of more informed and sophisticated commentators. As a lot of claims made about AI superintelligence are essentially unfalsifiable, these individuals rely on similar rhetoric to get their point across. They draw vague graphs with axes labeled “progress” and “time,” plot a line going up and to the right, and present this uncritically as evidence.
This is not to dismiss fears about AI safety or ignore the fact that these systems are rapidly improving and not fully under our control. But it is to say that there are good arguments and bad arguments, and just because we’ve given a number to something — be that a new phone or the concept of intelligence — doesn’t mean we have the full measure of it.
Instead, I think the focus in these discussions should be on capabilities: on demonstrations of what these systems can and can’t do and predictions of how this may change over time.
That’s why Altman’s confirmation that OpenAI is not currently developing GPT-5 won’t be of any consolation to people worried about AI safety. The company is still expanding the potential of GPT-4 (by connecting it to the internet, for example), and others in the industry are building similarly ambitious tools, letting AI systems act on behalf of users. There’s also all sorts of work that is no doubt being done to optimize GPT-4, and OpenAI may release GPT-4.5 (as it did GPT-3.5) first — another way that version numbers can mislead.
Even if the world’s governments were somehow able to enforce a ban on new AI developments, it’s clear that society has its hands full with the systems currently available..." READ MORE