ChatGPT isn’t AI – how is that possible?
AI, or Artificial Intelligence, has been a big deal in several phases. The 1960s saw the rise of the first round, which ended with the conclusion that the problem was harder than we thought and we didn’t yet have the tools or processing power to tackle it. But we did end up with some excellent ideas, which led to Expert Systems and Neural Networks.
Every decade or so since, there has been a resurgence, with the current round of Machine Learning and correlation engines being heavily used for a wide range of purposes, mostly to try to sell more stuff to more people.
The Turing Test was originally devised to determine whether a machine had become intelligent, but it has been passed by a chat bot posing as a 13 year old Ukrainian boy, Eugene Goostman. So it is not evidence of actual or artificial intelligence, but it is definitely evidence that the parameters of that test are insufficient given our ability to mimic the form of human conversational interaction.
ChatGPT has caused a stir because its responses are human-like. The form is correct, but is it really intelligent?
The answer is a resounding NO. It is a great tool, and the concepts will add greatly to future interface design, but it is a large language model trained on a massive text dataset that builds answers statistically. It has no inherent understanding of the domain, and no intuition. It does have a great understanding of sentence structure and grammar.
The first article, by Rodney Brooks, gives a great conceptual overview of the area, and the second article, by Stephen Wolfram, explains exactly how ChatGPT formulates its answers, one word at a time. So it is very clever, and you can expect more cleverness like this; but it is not Artificially Intelligent.
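To make the “one word at a time, statistically” idea concrete, here is a minimal sketch in Python. It is a toy bigram model, not ChatGPT’s actual architecture (which uses a neural network over a vastly larger corpus); the tiny corpus and the greedy most-frequent-next-word rule are illustrative assumptions only. The point it demonstrates is that fluent-looking text can emerge purely from word-adjacency statistics, with no understanding of meaning at all.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus, purely for illustration.
corpus = (
    "the cat sat on the mat "
    "the cat sat on the rug "
    "the dog chased the cat"
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Return the continuation seen most often after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

# Generate text one word at a time, purely from statistics.
words = ["the"]
for _ in range(4):
    words.append(next_word(words[-1]))
print(" ".join(words))  # prints "the cat sat on the"
```

The output is grammatical and plausible, yet the program “knows” nothing about cats or mats; it only knows which word most often came next. Large language models are enormously more sophisticated versions of this same statistical move.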
Rodney Brooks shows that one of the dangers of ChatGPT is that the answers are delivered with such confidence that even when they are completely wrong, you feel like they are true and are inclined to treat them as true.
A second big danger with ChatGPT, again according to Rodney Brooks, is mistaking performance for competence. If we see a person performing an action or task with a certain level of capability, we extrapolate to other things we can expect them to be able to do. We move from evaluating performance to expecting competence, and this is based on our life experiences. This does not apply to Machine Learning systems. A level of performance cannot be extrapolated to competence, either in this domain or in anything else we would normally be able to assume based on our understanding of how our own intelligence and competence works. Our instincts work against us. I think this conclusion is also correct.
Successful Endeavours specialise in Electronics Design and Embedded Software Development, focusing on products that are intended to be Made In Australia. Ray Keefe has developed market leading electronics products in Australia for more than 30 years.
This post is Copyright © 2023 Successful Endeavours Pty Ltd