Is 5 less than 3? Don’t know, cannot relate 5 and 3.
Is 1 less than 4? Yes, by transitive ordering.
If the answer is “No”, AI is far from being intelligent.
If the answer is “Yes”, can AI create an algorithm which can solve the “Hamiltonian Path” problem without using heuristics, and whose time complexity is not NP-hard?
If this post does not make sense, please read my presentations.
I would like to know how the generating algorithm is kept within the range of the topic: whether there is a way to generate genuinely grammatically correct sentences, and a way to “discuss” the topics indicated by the user’s prompt.
Does the language model apply different models for these different functionalities?
If we presume the hypothesis of the talk, that LLM prompts/replies can now or soon be considered natural language VMs (NLVMs), I believe one primary concern for software engineers and business stakeholders alike is the formal verification of the executable outputs of those VMs.
Two questions follow from this:
If a primary concern is to determine whether an NLVM is doing exactly what we think it will do, how do we apply tools like SAT solvers, dependent type/category theory, linear resource management, etc. to NLVMs?
Anecdotally, I’ve found that GPT models have a very hard time “understanding” type systems and, for that matter, proper abstraction boundaries. Does this mean we need non-NLVM systems as intermediates between NLVM inputs and outputs? Where do those demarcations lie?
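One way to picture such an intermediate, purely as a hedged sketch of my own (the talk does not prescribe this), is to treat the NLVM's reply as untrusted data and have a conventional solver accept or reject it before anything executes. The example below assumes the z3-solver Python package; the constraint set and the helper name verify_assignment are hypothetical.

```python
# Sketch: use an SMT solver as a non-NLVM intermediate that checks an
# LLM-proposed answer against hard constraints before we act on it.
# Assumes: pip install z3-solver. The constraints are purely illustrative.
from z3 import Int, Solver, sat

def verify_assignment(proposal: dict) -> bool:
    """Return True only if the proposed values satisfy the formal spec."""
    x, y = Int("x"), Int("y")
    s = Solver()
    # The formal spec the prompt asked the model to satisfy (hypothetical).
    s.add(x + y == 10, x > y, y > 0)
    # Pin the variables to the NLVM's proposed values.
    s.add(x == proposal["x"], y == proposal["y"])
    return s.check() == sat

# Suppose the model replied "x = 7, y = 3" in natural language and we parsed it:
print(verify_assignment({"x": 7, "y": 3}))  # True: accepted
print(verify_assignment({"x": 4, "y": 6}))  # False: violates x > y, rejected
```

The demarcation in this picture is that the NLVM only proposes; anything executable still has to pass a checker whose semantics we can reason about formally.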
Very interesting presentation. Love the Copilot references. If I had had that support in my high school days (in the 1960s, LOL), this dyspraxic would have been earning those top salaries used in the presentation example, but that is what AI is probably going to deny my grandchildren as they grow up! Oh well, they will have to find a new field of endeavour!
I had a small chat with ChatGPT. 11dp7if.pdf (1.4 MB)
It refused to, or could not, apply a simple transitive relation. Just read the uploaded file.
It is a bit meager for something with a measured IQ of 155, isn’t it?
I am wondering what they were measuring.
In other words, how can one associate any sort of intelligence with something that cannot deduce a conclusion from two simple facts?
So much for the “End of Programming” era. I think my chat with ChatGPT is a good example of “Artificial Stupidity”.
Thanks for reading.
And here is what it knows about “transitivity” and “transitive relations”. Nothing wrong with what it says.
But intelligence definitely needs a bit more than just being able to read and tokenize; for example, understanding, which presumes being able to apply what one has understood.
Bottom line: ChatGPT is a sophisticated, natural-language-based user interface to an extensive database of knowledge (or, better said, a conglomerate of static written text). A good feature for accessing Wikipedia, for example. But very, very far from being intelligent, let alone having an IQ of 155. 1g0g263.pdf (346.6 KB)
I received this reply in another forum.
" @Enis i tried this question with ChatGPT 3.5 and it failed as you indicated. If you follow with the suggestion “can we apply the transitive property here” , i get the correct response for what it’s worth.
With chatgpt4 (subject of this article), it got this right from the beginning:
"Yes, based on the information provided:
D is older than A.
D is older than C.
C is older than B.
A is older than B.
Since D is older than C (statement 2), and C is older than B (statement 3), it follows that D is older than B."
"
Is it worth starting an experiment to teach GPT-4 how to solve the “DNA Sequencing” problem?