LLMs: A New Way to Teach Programming

Title: LLMs: A New Way to Teach Programming
Date: June 21, 2023
Duration: 1 HR

SPEAKER
Daniel Zingaro
Associate Teaching Professor, University of Toronto

SPEAKER
Leo Porter
Associate Professor of Computer Science and Engineering, UC San Diego

MODERATOR
Michelle Craig
Professor, Department of Computer Science, University of Toronto

Registration Link

Create a Language Model with Language Understanding (free course for ACM members)
Hands-on GitHub Actions: Implement CI/CD with GitHub Action Workflows for Your Applications (free book for ACM members)
Beginning Git and GitHub: A Comprehensive Guide to Version Control, Project Management, and Teamwork for the New Developer (free book for ACM members)
ChatGPT & Its Practical Use Cases (free course for ACM members)
ChatGPT and Generative AI: The Big Picture (free course for ACM members)

I’m very curious how learning with the aid of an LLM can help neophyte developers become confident in confirming that a program is dependable, and in troubleshooting and resolving the defects that arise.

Is there a change in the level of abstraction being introduced, and is the learner left vulnerable when that abstraction leaks and the mysteries below show through?

I just saw this commentary from Amy J. Ko on Medium (Bits and Behavior): "Large language models will change programming… a little." I share the concerns that Ko expresses.

Many years ago, we spoke of expert systems and the domain knowledge they needed. That was before LLMs and the switch to statistical, neural-network AI training. So, if there is to be an LLM that qualifies as an expert instructor of programming, how is that arranged? And how much attention is paid to managing abstractions rather than merely interpreting mechanics? That management, along with understanding what a program is for and how it accomplishes that purpose, is the meat and potatoes of software development.

One question that arises when introducing LLMs as programming aids is how plagiarism is defined for academic work. Assessing whether code is plagiarized can be done in many ways. Interestingly, when ChatGPT is used for coding assignments, the answers it produces show little variance across queries about the same assignment question. So the question is: do LLMs make checking for plagiarism obsolete?
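
To make that concrete, here is a minimal sketch of one common approach: comparing the token streams of two submissions with Python's standard library. The file names and the 0.9 threshold are placeholders I made up, not anything from the talk.

    import difflib
    import io
    import tokenize

    def tokens(source):
        """Token strings, ignoring comments, blank lines, and indentation."""
        skip = {tokenize.COMMENT, tokenize.NL, tokenize.NEWLINE,
                tokenize.INDENT, tokenize.DEDENT}
        return [t.string
                for t in tokenize.generate_tokens(io.StringIO(source).readline)
                if t.type not in skip]

    def similarity(a, b):
        """Ratio in [0, 1]; higher means the token streams are more alike."""
        return difflib.SequenceMatcher(None, tokens(a), tokens(b)).ratio()

    # Hypothetical submissions; the 0.9 cutoff is arbitrary.
    with open("student_a.py") as fa, open("student_b.py") as fb:
        if similarity(fa.read(), fb.read()) > 0.9:
            print("flag this pair for manual review")

Of course, if ChatGPT hands everyone nearly the same answer, a high similarity score no longer tells you whether students copied from each other or from the model, which is exactly why the question above is tricky.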

Does the students’ potential use of LLMs and ChatGPT reduce the thought process and turn the function or code into a black box? I recall my students experiencing that “Ah ha!” moment when they understood why an if statement worked or why a logical condition yielded a specific result. Having them depend on an autonomous application to write code for them erodes their much-needed foundational understanding. This is like claiming that since computers do much of the complex math, there is no reason to learn calculus or differential equations: just input the data, parameters, and constraints, then stand back for the results, with no need to know how the answers are derived.
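
For concreteness (my own toy example, not something from the post), that kind of moment might be a student seeing why a compound condition takes the branch it does:

    grade = 72
    attended_review = False
    # The "Ah ha!": both sides of the "and" must be true, so a passing
    # grade alone does not reach the first branch.
    if grade >= 60 and attended_review:
        print("requirement met")
    else:
        print("requirement not met")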

Randall, this showed up today (and I guess very long URLs are used because they can be):

This has me wondering not about a student’s conceptual model of the AI, but about what sort of conceptual model of computing and programming students will come away with if there is no prospect of the kind of insight you observe when something clicks for the student. I already wonder about the quality of instruction I see in various MOOCs (some of it is superb, amid much noise).

Has the recording of this talk been put online? Does anyone have the URL for the recording?

I’d like to forward the talk on to other interested people I know. 🙂

You can watch the video through the registration link on this page.
