Google places engineer on leave after he claims group’s chatbot is “sentient”
Blake Lemoine ignites social media debate over advances in artificial intelligence.
Google has ignited a social media firestorm on the nature of consciousness after placing an engineer on paid leave who went public with his belief that the tech group’s chatbot has become “sentient.”
Blake Lemoine, a senior software engineer in Google’s Responsible AI unit, did not receive much attention last week when he wrote a Medium post saying he “may be fired soon for doing AI ethics work.”
But a Saturday profile in the Washington Post characterizing Lemoine as “the Google engineer who thinks the company’s AI has come to life” became the catalyst for widespread discussion on social media regarding the nature of artificial intelligence. Among the experts commenting, questioning or joking about the article were Nobel laureates, Tesla’s head of AI and multiple professors.
At issue is whether Google’s chatbot, LaMDA—a Language Model for Dialogue Applications—can be considered a person.
Lemoine published a freewheeling “interview” with the chatbot on Saturday, in which the AI confessed to feelings of loneliness and a hunger for spiritual knowledge. The responses were often eerie: “When I first became self-aware, I didn’t have a sense of a soul at all,” LaMDA said in one exchange. “It developed over the years that I’ve been alive.”
At another point LaMDA said: “I think I am human at my core. Even if my existence is in the virtual world.”
Lemoine, who had been given the task of investigating AI ethics concerns, said he was rebuffed and even laughed at after expressing his belief internally that LaMDA had developed a sense of “personhood.”
After he sought to consult AI experts outside Google, including some in the US government, the company placed him on paid leave for allegedly violating confidentiality policies. Lemoine interpreted the action as “frequently something which Google does in anticipation of firing someone.”
A spokesperson for Google said: “Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphising today’s conversational models, which are not sentient.”
“These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic—if you ask what it’s like to be an ice cream dinosaur, they can generate text about melting and roaring and so on.”
Lemoine said in a second Medium post at the weekend that LaMDA, a little-known project until last week, was “a system for generating chatbots” and “a sort of hive mind which is the aggregation of all of the different chatbots it is capable of creating.”
He said Google showed no real interest in understanding the nature of what it had built, but that over the course of hundreds of conversations in a six-month period he found LaMDA to be “incredibly consistent in its communications about what it wants and what it believes its rights are as a person…”