I’m very curious how learning with the aid of an LLM can encourage neophyte developers to become confident in confirming that a program is dependable, and also able to troubleshoot and resolve defects that arise.
Is there a change in the level of abstraction that is introduced, and are there vulnerabilities the learner is subject to when the abstraction leaks and the mysteries below show through?
Many years ago, we spoke of expert systems and the need for domain knowledge on their part. That was before LLMs and the switch to statistical, neural-network AI training. So, if there is to be an LLM that qualifies as an expert instructor of programming, how is that arranged? And how much attention is paid to the management of abstractions versus mechanical interpretation, which, along with understanding what a program is for and how it accomplishes that purpose, is the meat and potatoes of software development?
One question that arises when introducing programming aid with LLMs is how plagiarism is defined for academic work. Assessing whether code is plagiarized can be done in a lot of ways. Interestingly, when using ChatGPT for coding assignments, the answers it provides show little variance across queries about the same assignment question. So the question is: do LLMs make checking for plagiarism obsolete?
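For concreteness, here is a minimal sketch of one such check, assuming Python submissions: strip comments and whitespace, then compare what remains. This is my illustration, not any course’s actual detector; the helper names (`normalize`, `similarity`) are hypothetical, and real tools such as MOSS fingerprint token streams and are far more robust to renaming.

```python
# Naive similarity sketch (illustrative only): drop comments/whitespace,
# then compare the remaining token text of two submissions.
import difflib
import io
import tokenize

def normalize(source: str) -> str:
    """Keep only code tokens, so formatting and comments don't matter."""
    kept = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type not in (tokenize.COMMENT, tokenize.NL, tokenize.NEWLINE,
                            tokenize.INDENT, tokenize.DEDENT):
            kept.append(tok.string)
    return " ".join(kept)

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1]; 1.0 means the normalized sources are identical."""
    return difflib.SequenceMatcher(None, normalize(a), normalize(b)).ratio()

sub_a = "def area(r):\n    # circle\n    return 3.14159 * r * r\n"
sub_b = "def area(radius):\n    return 3.14159 * radius * radius\n"
print(f"similarity: {similarity(sub_a, sub_b):.2f}")  # high, but below 1.0
```

Note that merely renaming `r` to `radius` already lowers the score here, which is why this naive approach fails in practice, and why near-identical LLM output for the same prompt cuts both ways: it looks like copying even when two students worked independently.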
Does a student’s potential use of an LLM such as ChatGPT reduce their thought processes and turn the function or code into a black box? I recall my students experiencing that “Aha!” moment when they understood why the if statement worked or why a logical condition yielded a specific result. Having them depend on an autonomous application to write code for them erodes their much-needed foundational understanding. This is like stating that since computers do much of the complex math, there is no reason to learn calculus or differential equations: just input the data, parameters, and constraints, then stand back for the results, with no need to know how the answers are derived.
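To make that concrete, here is the sort of small condition where the click tends to happen, in Python (my illustration, not the commenter’s classroom example):

```python
day = "Saturday"

# The version that works: each side of 'or' is a complete comparison.
if day == "Saturday" or day == "Sunday":
    print("weekend")  # prints

# The classic beginner version: reads naturally, but is always true,
# because the non-empty string "Sunday" is truthy on its own.
if day == "Saturday" or "Sunday":
    print("always weekend?!")  # prints for ANY value of day
```

Tracing by hand why the second condition is always true is precisely the kind of reasoning a generated, pasted-in answer never forces a student to do.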
Randall, this showed up today (and I guess very-long-URLs are used because they can be):
This has me wondering not about a student’s conceptual model of the AI but about what sort of conceptual model of computing and programming students are going to come away with if there are no prospects for the kind of insight you observe as something clicks for the student. I already wonder about the quality of instruction that I see in various MOOCs (some is superb, amid much noise).