Text Material
With the emergence of artificial intelligence in society, computer systems have been developed to perform tasks and behave increasingly like humans. One related ethical question was thrust into the spotlight when Google engineer Blake Lemoine publicly announced that he suspected the artificial intelligence he had been working on, known as LaMDA (Language Model for Dialogue Applications), was sentient. LaMDA was able to converse with Blake Lemoine, analyzing passages, writing parables, and even claiming to experience emotions. When I read this news online, I started to question how we can determine when an AI becomes intelligent or sentient, and, if it does, whether we should grant it the same rights as humans. If LaMDA, or any other AI system, claims to experience emotions, should we treat it as human, or should we regard this as an error in the programming that should be fixed? I find it unsettling to think that one day robots could have the same, if not greater, intelligence than humans. I don't want to sound pessimistic, but what if they take over our society? I feel that we should perhaps limit industrial production, especially of AI products, to a reasonable rate because of the unforeseen dangers embedded in them...
Topics: Bioethics