We probably all know someone who absorbs information readily, whether from textbooks or travel guides, and spews it out ad infinitum when a key word or phrase triggers their inner expert. We unceremoniously refer to them as 'bullshitters'; they are not stupid, but we begrudgingly call them 'annoying' more often than 'intelligent'. They possess uncanny language-processing capabilities, honed information-retrieval skills that store and retrieve nouns and (preferably) intransitive verbs from which to reconstruct intelligible sentences. They do not seem to possess any real interest in the inner workings of the things they so fluently describe. One frequently credits them with knowing a lot, which seems to diminish the very concept of knowledge, although there is often real information worth knowing in what they have to say.
The GPT part of ChatGPT stands for "generative pretrained transformer," a chatbot with which you can converse as if with a human. It has the advantage of politely allowing redirection or of being stopped, but as a friend of mine has observed, it is a bullshitter. The media cautions repeatedly that AI is prone to errors and that one needs to validate its statements. All you really need to know in that regard is that it is a large language model (LLM), i.e., a bullshitter. That is how AI should be treated — with ample caution and due respect. In my own experience in using it, I have found that on occasion it will make an erroneous statement; if I correct it, it may make a different incorrect statement; upon correcting it again, it will revert to the first wrong response, apologizing all the way. "I apologize for any confusion caused by the earlier information. Thank you for your understanding." You don't get that from your everyday bullshitter. But on another occasion I had the following interaction:
——-
Me (to ChatGPT): I’m quoting you here:
[a sequence of four quoted ChatGPT statements in its previous response, followed by four of my rebuttals based on what I knew to be true on that particular topic]
Can you demonstrate that I am incorrect in these criticisms? I am sincere in my efforts and I want valid data and issues.
ChatGPT
Our systems have detected unusual activity coming from your system. Please try again later.
——-
Often ChatGPT is referred to as a 'content generator', and if you desire applicable verbiage around a specific noun or a more extensive but previously understood concept, it does a great job of generating exactly that.
But is that intelligence?
I don’t think so. Some 45 years ago I attempted to design a program that could decipher what it observed. I noted two areas of intelligence that needed to be addressed. One involved knowledge of the models that predict the observed behavior of known objects. People, in fact all life forms, are really good at that. But there is that other area… the witnessing of previously unidentified, and therefore unmodeled, behavior, which requires developing a model of an entity to associate with that behavior. Some people are good at it, but it is not a preprogrammed capability; it requires continuous recalibration. Psychometricians identify two types of intelligence: crystallized and general. Experts may demonstrate extreme crystallized intelligence over their subject of expertise. It is 'learned' information, though some people are more adept at rapid or thorough learning than others. General intelligence is the ability to create models that may become areas of expertise.
So far I see little evidence of the latter form of intelligence with an A in front of it.