Large Language Models and the End of Programming

Title: Large Language Models and the End of Programming
Date: May 9, 2023
Duration: 1 hour

SPEAKER
Matt Welsh
CEO, Fixie.ai

MODERATOR
Oana Olteanu
Partner, Venture

Language Models (free video for ACM members)
Introduction to Transformers for NLP: With the Hugging Face Library and Models to Solve Problems (free book for ACM members)
ChatGPT & Its Practical Use Cases (free course for ACM members)
Create a language model with Language Understanding (free course for ACM members)
ChatGPT and Generative AI: The Big Picture (free course for ACM members)

3 Likes

Very interesting topic, but I won’t participate, as the questionnaire to register for the event asks for too much sensitive and personal information.

1 Like

If I specify the following facts:

  • 1 is less than 3
  • 1 is less than 5
  • 3 is less than 4
  • 5 is less than 4

Can AI create code that generates the answers:

  • Is 1 less than 3? Yes, by fact.
  • Is 4 less than 3? No, contradicts with fact.
  • Is 5 less than 4? Yes, by fact.
  • Is 5 less than 3? Don’t know, cannot relate 5 and 3.
  • Is 1 less than 4? Yes, by transitive ordering.

If the answer is “No”, AI is far from being intelligent.

If the answer is “Yes”, can AI create an algorithm that solves the “Hamiltonian Path” problem without using heuristics and in polynomial time?
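For reference, here is a minimal hand-written Python sketch of the program being asked for: it computes the transitive closure of the stated facts and classifies each query as a direct fact, a transitive consequence, a contradiction, or unknown. (This is my own illustration of the target behaviour, not something an AI produced.)

```python
from itertools import product

def transitive_closure(facts):
    """All pairs (a, b) derivable from facts of the form "a is less than b"."""
    closure = set(facts)
    changed = True
    while changed:  # repeat until no new pair can be derived
        changed = False
        for (a, b), (c, d) in product(closure, repeat=2):
            if b == c and (a, d) not in closure:  # a < b and b < d  =>  a < d
                closure.add((a, d))
                changed = True
    return closure

def is_less(a, b, facts):
    closure = transitive_closure(facts)
    if (a, b) in facts:
        return "Yes, by fact."
    if (a, b) in closure:
        return "Yes, by transitive ordering."
    if (b, a) in closure:
        return "No, contradicts with fact."
    return f"Don't know, cannot relate {a} and {b}."

facts = {(1, 3), (1, 5), (3, 4), (5, 4)}
for a, b in [(1, 3), (4, 3), (5, 4), (5, 3), (1, 4)]:
    print(f"Is {a} less than {b}? {is_less(a, b, facts)}")
```

Running this prints exactly the five answers listed above, including “Don’t know” for 5 and 3, which the facts leave unordered.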

If this post does not make sense, please read my presentations:

(PDF) Topology and Structure of Directed Acyclic Graphs - Relational Sorting

and (PDF) Topology and Structure of Directed Graphs - DNA Sequencing

I would like to know how the generating algorithm is kept within the range of the topic: whether there is a way to generate truly grammatically correct sentences, and a way to “discuss” the topics indicated by the user’s prompt.
Does the language model apply different models for these different functionalities?

If we presume the hypothesis of the talk – that LLM prompts/replies can now or soon be considered natural language VMs [NLVMs] – I believe one primary concern for both software engineers and business stakeholders alike is the formal verification of the executable outputs of those VMs.

Two questions from this:

  • If a primary concern is to determine whether an NLVM is doing exactly what we think it will do, how do we apply tools like SAT solvers, dependent type/category theory, linear resource management, etc. to NLVMs? (A toy sketch of the SAT/SMT direction follows after this list.)
  • Anecdotally, I’ve found that GPT models have a very hard time “understanding” type systems and, for that matter, proper abstraction boundaries. Does this mean we need non-NLVM systems as intermediates between NLVM inputs and outputs? Where do those demarcations lie?
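As one minimal illustration of the SAT/SMT direction, here is a sketch using the Z3 solver. The “emitted” expression and the specification are hypothetical stand-ins for an NLVM’s output and its intended contract, not real NLVM artifacts:

```python
from z3 import Int, Solver, sat

# Hypothetical scenario: an NLVM emitted an expression claimed to compute 2*x.
x = Int("x")
emitted = x + x   # stand-in for the NLVM's output
spec = 2 * x      # the formal specification it is supposed to meet

# Ask the solver for any x where the output diverges from the spec.
# "unsat" means no counterexample exists, i.e. the output matches the spec.
s = Solver()
s.add(emitted != spec)
print("verified" if s.check() != sat else f"counterexample: {s.model()}")
```

The solver step is the easy half; the open question raised above is how to extract a formal spec from a natural-language prompt in the first place.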
1 Like

Where will the recording of this presentation be available, and when?

1 Like

This questionnaire accepts garbage input with no hesitation or second thoughts… :)

Very interesting presentation. Love the Copilot references. If I had had that support in my high school days (in the 1960s, LOL), this dyspraxic would have been earning those top salaries used in the presentation example. But that AI is probably going to deny my grandchildren those salaries as they grow up! Oh well, they will have to find a new field of endeavour!

Because my question has remained unanswered so far, I assume the “End of Programming” era is not here yet.

1 Like

I had a small chat with ChatGPT.
11dp7if.pdf (1.4 MB)
It refused to, or simply cannot, apply a simple transitive relation. Just read the uploaded file.
That is a bit meager for something with a measured IQ of 155, isn’t it?
I am wondering what they were measuring.
In other words, how can one associate any sort of intelligence with something that cannot deduce a conclusion from two simple facts?
So much for the “End of Programming” era. I think my chat with ChatGPT is a good example of “Artificial Stupidity”.
Thanks for reading.

And here is what it knows about “transitivity” and “transitive relations”. Nothing is wrong with what it says.
But intelligence definitely requires more than being able to read and tokenize: for example, understanding, which presumes being able to apply what one has understood.
Bottom line: ChatGPT is a sophisticated, natural-language-based user interface to an extensive database of knowledge (or, better put, a conglomerate of static written text). A good feature for accessing Wikipedia, for example. But very, very far from being intelligent, let alone having an IQ of 155.
1g0g263.pdf (346.6 KB)

I received this reply in another forum:

“@Enis I tried this question with ChatGPT 3.5 and it failed as you indicated. If you follow up with the suggestion ‘can we apply the transitive property here’, I get the correct response, for what it’s worth.

With ChatGPT 4 (the subject of this article), it got it right from the beginning:

‘Yes, based on the information provided:

  1. D is older than A.
  2. D is older than C.
  3. C is older than B.
  4. A is older than B.

Since D is older than C (statement 2), and C is older than B (statement 3), it follows that D is older than B.’”

Is it worth starting an experiment to teach GPT-4 how to solve the “DNA Sequencing” problem?