Wednesday, March 13, 2024

Are AI Chatbots intelligent?

"A chatbot as a mouth on the screen of a mobile phone"

Will chatbots based on large language models lead to advanced general artificial intelligence?

Or are we heading towards another AI winter?

In his latest post, What if Generative AI turned out to be a Dud?, Gary Marcus questions the very high levels of investment in the latest developments and suggests that the possible return on investment has been over-hyped, quoting an article from Fortune in support.

Perhaps we should go back to first principles, look at the foundations of human intelligence, and see how ChatGPT relates to them by considering the flow of cultural information.

Information is clearly held in the human brain in some kind of network, and much of what any human brain knows comes from the sharing of cultural information over many generations, using a natural language. This sharing involves mapping information stored in the brain's network into a linear string of tokens for transmission to another brain, where the process is reversed and the information is stored in a network in the recipient's brain. Over the generations the transmission format has evolved to speed the exchange of information, including the use of printed text. Because of the way information is shared (including through the education system) the text records include a vast amount of repetitive shared information.

ChatGPT uses a big-data large language model to statistically identify repeated patterns in a vast collection of shared cultural information. In view of the way cultural information has grown over thousands of years, it is hardly surprising that ChatGPT finds significant repetitive patterns in the natural language texts it examines, some of which can be used to make predictions, and some of which unfortunately turn out to be hallucinations. What must not be forgotten is that ChatGPT is analysing (at great expense) a highly evolved linear transmission language, and not the way the human brain processes and stores the information in a network.
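
To make the point concrete, here is a minimal sketch in Python, purely illustrative and nothing like ChatGPT's actual scale or architecture, of how simply counting repeated patterns in a linear stream of tokens is enough to make next-token predictions. The tiny corpus and the bigram counting are my own hypothetical example, not anything taken from a real system.

```python
from collections import Counter, defaultdict

# Toy corpus: a linear string of tokens, the only thing the model ever sees.
corpus = (
    "the cat sat on the mat . the cat chased the dog . the cat ran away ."
).split()

# Count how often each token follows each other token (bigram statistics).
counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    counts[current][nxt] += 1

def predict(word):
    """Return the continuation most often seen after `word`, or None."""
    following = counts[word]
    return following.most_common(1)[0][0] if following else None

print(predict("the"))   # -> 'cat' (the most repeated pattern after 'the')
print(predict("sat"))   # -> 'on'
```

The prediction comes entirely from repetition in the linear transmission format; there is no representation of what a cat or a mat is, which is the distinction the paragraph above is drawing.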

The problem, as every mathematician should know, is that a predictive statistical model does not necessarily reflect the real-world system that generated the raw data. The classic example dates back over 2000 years, when the Greeks used epicycles as a mathematical model to describe the retrograde motion of the planets in the zodiac. The model was accurate enough for them to build a clockwork computer to predict where the planets could be seen and when eclipses could be expected. Of course the epicycle model tells us nothing about how the planets actually move in space, and the Copernican heliocentric model is now generally accepted.
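
As a hedged illustration of that point, the Python sketch below assumes simplified circular, coplanar orbits with rounded radii and periods (my own toy numbers, not a serious ephemeris). Subtracting Earth's circle from Mars's circle gives exactly the prediction a deferent-plus-epicycle construction makes, and it reproduces retrograde motion while saying nothing true about what actually orbits what.

```python
import numpy as np

t = np.linspace(0.0, 2.0, 1000)                  # time in years

def circular_orbit(radius, period, t):
    """Position on a circular orbit of given radius (AU) and period (years)."""
    angle = 2 * np.pi * t / period
    return radius * np.cos(angle), radius * np.sin(angle)

ex, ey = circular_orbit(1.00, 1.00, t)           # Earth around the Sun
mx, my = circular_orbit(1.52, 1.88, t)           # Mars around the Sun

# Geocentric prediction: Mars as seen from Earth. The deferent carries
# Mars's circle and the epicycle carries (minus) Earth's circle, so the
# combination is identical to subtracting the two heliocentric positions.
gx, gy = mx - ex, my - ey
longitude = np.unwrap(np.arctan2(gy, gx))        # apparent longitude of Mars

# Retrograde motion shows up as intervals where the apparent longitude
# decreases, even though both bodies move steadily forward on their circles.
retrograde = np.diff(longitude) < 0
print(f"fraction of sampled time Mars appears retrograde: {retrograde.mean():.2f}")
```

A model can therefore predict the observations well while being built on entirely the wrong picture of the underlying system, which is the analogy being drawn with large language models.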

If one considers the ChatGPT large language model, it is apparent that it fails to adequately "understand" the information it is processing and has difficulty explaining its answers in a way that humans find transparent. This could well be because (by analogy with epicycles) the large language model is a poor model of how the brain's network "understands" the information it is processing. It took nearly 2000 years for scientists to abandon the epicycle model. How long will it take the modern AI hypers to realise that, if we want human-compatible general AI, we need to start by modelling how the human brain's network "understands" the information it holds? Is a new AI winter due?

At this point I will suggest that the CODIL model at least points in a promising direction, and seems to fall within the constraints of evolution. However, I would not want to claim that it provides all the answers; I merely want to encourage debate about the way forward.
