So, chatbots can develop a nasty edge – just like humans. And they have clearly passed – surpassed – the Turing test. There has been much discussion of whether that makes them “sentient” or in some sense even “conscious.”
A common response is that AI reflects just the “scrapings” from everything on the internet, which, of course, includes violence, sex, bigotry, hate, and bullying along with the more benign content of friendly e-mail and texting, philosophy and science, poetry and humor.
It is worth remembering that one of the primary ways humans acquire language is by encountering words and phrases many times in various contexts – which is also a primary way humans learn culture. So to this extent, chatbots’ use of language is quite “human.”
Humans also learn language (and culture) by association with the non-linguistic contexts in which they encounter words and phrases, including how others respond to their actions – not least their use of language. Human language use and acculturation are also conditioned by biological and social drives and needs – hunger, sex, security, social contact, etc. All of this entails the chemical environment of the brain and body, including oxytocin, adrenaline, and cortisol. None of that context is part of AI training, except inasmuch as it might be reflected in the language people use to describe and respond to it. It is difficult to imagine how this social, cultural, and chemical context might be incorporated into the training of AI, or what might substitute for the socialization this non-linguistic context provides human language learners.
So we have created entities with a superhuman power of language – a power unconstrained by normal socialization. Whether these entities qualify as “conscious” hardly seems important.
What will they do?
What will unscrupulous people – or well-intentioned but misguided people – do with them?