David Ritchie's Blog
Views on language, meaning, culture, and communication, especially metaphor and other forms of figurative language, often informed by readings in cognitive and social sciences. My publications can be found at https://works.bepress.com/l-david-ritchie/
Sunday, February 19, 2023
Chatbots 2
So, chatbots can develop a nasty edge – just like humans. And they have clearly passed – indeed, surpassed – the Turing test. There has been much discussion of whether that makes them “sentient” or in some sense even “conscious.”
A common response is that AI reflects just the “scrapings” from everything on the internet which, of course, includes violence, sex, bigotry, hate and bullying along with the more benign content of friendly e-mail and texting, philosophy and science, poetry and humor.
It is worth remembering that one of the primary ways humans acquire language is by encountering words and phrases many times in various contexts – which is also a primary way humans learn culture. So to this extent, chatbots’ use of language is quite “human.”
Humans also learn language (and culture) by association with the non-linguistic contexts in which they encounter words and phrases, including how others respond to their actions and their use of language. Human language use and acculturation are also conditioned by biological and social drives and needs – hunger, sex, security, social contact, etc. All of this entails the chemical environment of the brain / body, including oxytocin, adrenaline, and cortisol. None of that context is part of AI training, except inasmuch as it might be reflected in the language people use to describe and respond to it. It is difficult to imagine how this social, cultural, and chemical context might be incorporated into the training of AI, or what might substitute for the socialization this non-linguistic context provides human language learners.
So we have created entities with a superhuman power of language – a power unconstrained by normal socialization. Whether these entities qualify as “conscious” hardly seems important.
What will they do?
What will unscrupulous people – or well-intentioned but misguided people – do with them?
Thursday, January 5, 2023
Chatbots
There has been quite a bit in the news recently about “chatbots,” AI programs that can engage in conversations and write essays, stories, or even poems on any topic, imitating any style. Some of this discussion has been alarmist – a reaction I can understand, given the importance in my own teaching of student writing, including essay and short-answer questions. Before the holiday season I checked this out, giving the chatbot a typical midterm / final exam question on a couple of different topics. As other commentators have noted, the results are pretty mediocre – correct grammar and spelling (in contrast to typical undergraduate writing) but unimaginative and dispirited (in contrast to the best student writing). In a 200- or 300-level class, these essays would receive at least a C; in lower-division classes, maybe a B. So – take-home essay and short-answer questions have just become obsolete. Hand-written in-class writing is still usable – but it can be painful to grade.
On the other hand, a couple of commentators have pointed out that the chatbot responses do provide a nice overview-level summary of current thinking and ideas about a topic, as well as an example of how a generalized discussion might be organized. So – if these are accepted as a tool, students might be encouraged to begin with a prompt and chatbot response, then build an essay on that base. Either way, I think it will require some fresh thinking about the purpose of student essay assignments, and perhaps about the learning objectives of humanities and social science classes.
In a future entry I will talk about another brief test in which I asked the chatbot to analyze metaphors in a passage or set of passages.
Saturday, November 5, 2022
Talking whale
I’ve been reading “How to Speak Whale” by Tom Mustill, and it is astonishing in many ways. Overall, it illustrates the fact that, whatever topic you’re interested in, the number of researchers investigating that topic and the amount of new evidence they are producing (and publishing) is growing exponentially. That’s both bad news and good news. The bad news: everything I knew about animal communication and about research on animal communication just a year ago is obsolete. Forget about catching up or keeping up. The good news: if you have access to “big data” (huge data sets and the computer power to process them) you can answer questions that were impossible even to ask as recently as ten years ago. And – something I vaguely knew that Mustill relates in some detail – “big data,” using advanced artificial intelligence learning systems fed by miniaturized observation instruments, is rapidly filling in what we know about the complex social lives and communication behavior of cetaceans – and many other animal groups.
I won’t even try to summarize all I have learned from this book. The citations in the end-notes will keep my reading list overflowing for the rest of my sabbatical. By then much of it will be already obsolete, and I’ll need to refresh it as well as I can.
“How to Speak Whale”: Highly recommended.
Friday, September 30, 2022
Puzzles and language
A recent article in New Scientist highlights some very interesting research by Gillian Forrester at Birkbeck, University of London. She has been developing puzzles to test the ability of great apes (including human children) to use their hands to solve increasingly complex puzzles, designed to test conceptualizing abilities similar to those required for language learning and use. The evidence she has garnered through this (and earlier) research supports the claim that sign language developed prior to vocal language, and further suggests a role for puzzle-solving ability. This short video gives a flavor of her work:
https://youtu.be/8edayRfe484
If Robin Dunbar is correct in his claim that language evolution was driven primarily by the pressures of living in large and complex social structures, Forrester’s work would seem to suggest that solving social puzzles is related to solving physical puzzles. I’m looking forward to the publication of Forrester’s work for further details!