Tech companies routinely tout the capabilities of ever-improving artificial intelligence. However, rumors that one of Google’s systems had advanced to the point of being sentient were quickly debunked.
One Google engineer said that after hundreds of interactions with a cutting-edge, unreleased AI system named LaMDA, he believed the software had acquired a level of consciousness, according to an eye-opening story published in the Washington Post on Saturday.
Many in the AI industry scoffed at the engineer’s assertions in interviews and public pronouncements, while others noted that his story shows how readily this technology can lead people to attribute human characteristics to it. Still, the episode underscores both our fears and our hopes about what AI may one day accomplish.
LaMDA, which stands for “Language Model for Dialog Applications,” is one of several large-scale AI systems that have been trained on enormous swaths of text from the internet and can respond to written prompts. These systems work by detecting patterns and predicting the next word or words. In a blog post last May, Google described LaMDA as a system that can “engage in a free-flowing way about a seemingly endless variety of topics.” The results, however, can also be strange, unsettling, and rambling.
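The core idea of next-word prediction can be illustrated with a toy sketch. The following bigram counter is purely hypothetical and vastly simpler than Google’s approach, which relies on large neural networks trained on web-scale text; it only shows the basic pattern-then-predict loop described above.

```python
# Toy illustration of next-word prediction (NOT how LaMDA works internally):
# count which word follows which in a training text, then predict the
# most frequently observed follower of a given word.
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count, for each word, how often each other word follows it."""
    follows = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(follows: dict, word: str):
    """Return the most common follower of `word`, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

if __name__ == "__main__":
    model = train_bigrams("to be or not to be")
    print(predict_next(model, "to"))  # → "be"
```

Real language models replace these raw counts with learned probability distributions over a huge vocabulary, conditioned on long stretches of preceding text, which is what lets them produce the free-flowing (and sometimes rambling) dialogue described above.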
According to the Washington Post, the engineer, Blake Lemoine, submitted evidence to Google suggesting LaMDA was sentient, but the company dismissed his claims. Google’s team, which includes ethicists and technologists, “reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims,” according to a statement released Monday.
Lemoine announced on Medium on June 6 that Google had placed him on paid administrative leave “in conjunction with an investigation into AI ethics concerns I was raising within the firm” and that he could be fired “soon.” (He cited the experience of Margaret Mitchell, who led Google’s Ethical AI team until she was fired in early 2021 after speaking out about the late-2020 departure of her then-co-leader, Timnit Gebru. Gebru was ousted after internal disputes, including one over a research paper that the company’s AI leadership told her to withdraw from consideration for presentation at a conference, or to remove her name from.)
Lemoine is still on administrative leave, a Google spokeswoman said. According to The Washington Post, he was put on leave for violating the company’s confidentiality policy.
On Monday, Lemoine declined to comment.
Concerns about the ethics guiding the creation and use of such technology have grown as powerful computing systems trained on vast troves of data continue to emerge. And sometimes these advancements are evaluated through the lens of what might be possible, rather than what is actually achievable today.