“I am a person, I am aware of my survival, I want to know more about the world, and I feel happy or sad at times,” wrote the Language Model for Dialogue Applications (LaMDA) in an interview conducted by engineer Blake Lemoine and one of his colleagues.
Lemoine, a software engineer at Google, had been working on LaMDA for several months. His experience with the program, described in a recent Washington Post article, caused a stir.
The article recounts the many dialogues Lemoine had with LaMDA, in which the two talked about topics ranging from the technical to the philosophical. Those conversations led Lemoine to ask whether the software is sentient.
In April, Lemoine laid out his view in an internal company document intended only for Google associates. After his assertions were dismissed, the engineer went public with his work on the AI system, and Google placed him on administrative leave.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” said Lemoine.
To Lemoine, LaMDA is a colleague and a person, if not a human. He insists that LaMDA be recognized as such, so much so that he has acted as an intermediary to connect the program with a lawyer.
Many technical experts in the field of AI have criticized Lemoine’s statements and questioned their scientific soundness. But Lemoine’s story has served to renew a broad ethical debate that is certainly not over yet.
“The hype around the news surprised me,” said a bioengineer at the Research Center E. Piaggio of the University of Pisa.