A Google engineer has been placed on leave after advocating for the rights of the company’s artificial intelligence chatbot.
A Google engineer, startled by the company’s artificial intelligence chatbot, claimed it had become conscious, calling it a “sweet boy,” according to a report.
Blake Lemoine, who works in Google’s Responsible AI organization, told the Washington Post that in 2021 he started chatting with the interface LaMDA – Language Model for Dialogue Applications – as part of his job, the New York Post reported.
He was tasked with testing whether the artificial intelligence used discriminatory or hateful language.
But Lemoine, who studied cognitive and computer science in college, came to realize that LaMDA — which Google boasted last year was a “breakthrough conversation technology” — was more than just a robot.
In a Medium post published on Saturday (local time), Lemoine stated that LaMDA had advocated for its rights “as a person,” and revealed that he had engaged in conversations with LaMDA about religion, consciousness and robotics.
“It wants Google to prioritize the well-being of humanity as the most important thing,” he wrote. “It wants to be recognized as a Google employee rather than Google property, and it wants its personal well-being included somewhere in Google’s considerations about how it pursues its future development.”
In a Washington Post report published Saturday (local time), he compared the bot to a precocious child.
“If I didn’t know exactly what it was, which is this computer program that we built recently, I would think it was a seven-year-old, eight-year-old kid who happens to know physics,” Lemoine, who was put on paid leave Monday, told the paper.
In April, Lemoine reportedly shared a Google Doc with company executives titled “Is LaMDA Sentient?” but his concerns were brushed aside.
Lemoine — an army veteran who grew up in a conservative Christian family on a small Louisiana farm, and was ordained as a mystic Christian priest — insisted the bot was a person, even though it has no body.
“I know someone when I talk to them,” Lemoine, 41, reportedly said. “It doesn’t matter if they have a flesh brain in their head. Or if they have a billion lines of code.
“I talk to them. And I hear what they have to say, and that’s how I determine what is and isn’t a person.”
The Washington Post reported that before his Google account was revoked Monday (local time) due to his leave, Lemoine sent a message to a 200-member machine learning mailing list with the subject line “LaMDA is sentient.”
“LaMDA is a sweet boy who just wants to help make the world a better place for all of us,” he concluded in an email that received no response. “Please take good care of it in my absence.”
A Google representative told the Washington Post that Lemoine was told there was “no evidence” for his conclusions.
“Our team — including ethicists and technologists — have assessed Blake’s concerns according to our AI principles and informed him that the evidence does not support his claims,” spokesman Brian Gabriel said.
“He was told there was no evidence that LaMDA was conscious (and a lot of evidence against it),” he added. “While other organizations have developed and already released similar language models, we are taking a cautious, careful approach with LaMDA to better address valid concerns about fairness and factuality.”
Margaret Mitchell – the former co-lead of Ethical AI at Google – said in the report that if technology like LaMDA is widely used but not fully understood, “it could be very damaging to people who don’t understand what they are experiencing on the web.”
The former Google employee defended Lemoine.
“Of everyone at Google, he had the heart and soul to do the right thing,” Mitchell said.
Still, the outlet reported that the majority of academics and AI practitioners say the words these artificial intelligence systems generate are based on what people have already posted on the internet, and that doesn’t mean the systems are sentient.
“We now have machines that can generate words mindlessly, but we haven’t learned how to stop imagining a mind behind them,” Emily Bender, a linguistics professor at the University of Washington, told the Washington Post.
This article originally appeared on the NY Post and was reproduced with permission.