I’ll share my experience and the experience of my kids so far.
Aside from blindly copying and pasting a response, in which case the learner wasn’t interested in learning and probably would have plagiarized from somewhere else anyway, I have found LLMs to be incredible, endlessly patient teachers that I’m never afraid to ask a question of.
My kids, who are in their tween and teenage years, are incredibly skeptical and dismissive of AI. They regard AI art as taking creative initiative away from artists and treat LLMs much the way we treated Google growing up, if they use them at all. It’s a tool that can be helpful for answering questions and is part of the landscape of their knowledge building.
That knowledge acquisition includes school, YouTube and other short videos, their peers (online and off), Internet searches, and asking AI. Generally, I regard asking AI as one of the least problematic sources of info in that environment.
While I tend to be optimistic as a default, I truly do think that the ability to become less ignorant by asking questions is a net positive for humanity.
The only thing I truly lean on AI for right now is as an editor, helping me turn my detailed bullet points into decently crafted prose, and for generating clear and concise transcripts and takeaways from long meetings. To me that doesn’t seem like the downfall of human knowledge.
> [Writing] will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.
Thank you. Luddite tendencies run as deep within humans as the desire to kill. They are a manifestation of the death-drive and of all self-destructive tendencies within humans.
It's also the tendency towards the "precautionary principle", AKA Nietzschean "last man"-style thinking, applied to the world infinitely.
We should root this kind of thinking out aggressively, at least from the academy.
Update: I meant to compare calculators to something like a slide rule for logarithms. I'm not from the US and I tend to forget that some people use calculators to take 20% of 500.
Where I live these are not allowed in the classroom until 7th grade or so, i.e. when the kids have learned the skills and can then employ calculators mindfully.
I will occasionally do long multiplication in my mind's eye just to make sure I can, lol. Anything more complicated than that, most people will not be doing anyway. University students, however, almost universally do need to write sometimes. Similarly if I had decided to do something maths heavy at uni I would be expected to be able to do some pretty complex maths without a calculator first, even if I don't need to do that all the time. It's pretty standard that higher education requires a level of intellectual rigour that is totally unnecessary for day-to-day life. In the case of ChatGPT, it's allowing people to completely bypass that process even in those settings, meaning you NEVER learn to do it, not just that you don't do it day to day.
>Similarly if I had decided to do something maths heavy at uni I would be expected to be able to do some pretty complex maths without a calculator first, even if I don't need to do that all the time
I got an engineering degree and don't remember ever being required to do math without a calculator. Of course, some things are easier if you don't need to bust out a calculator for everything.
In order to get into the engineering degree, though, you would have had to pass maths exams in school that required a proficiency in calculator-free maths that the average person does not have and will never need. I did not do an engineering degree, but I did do a higher-level maths class in school. I failed 2/3 of the exams, but that still means I have learnt to do calculator-free maths to a higher level than the vast majority of the population has ever even considered. And they don't need to; it probably doesn't impact their lives at all. That doesn't mean there aren't some people who really should learn that stuff. There's a reason I abandoned the idea of Physics or similar disciplines as a university degree choice: I did not have the maths foundation to build on. Any arts and humanities degree, if not any degree, needs a foundation of writing skills in the same way.
You don't allow students to use calculators for operations they haven't personally mastered. If you don't learn how to add two numbers on your own, the rest of your learning is in serious jeopardy.
This is the author's lament. These students are skipping over personal mastery.
With a calculator, the end result is still the same: a (typically numerical) answer of some kind. Writing one's own essay vs. getting an LLM to regurgitate it results in vastly different outcomes.
Not a super big fan, honestly. I'm a bit horrified when I see high school seniors who are smart, and have been through the entire HS math sequence... dig around in their backpack for a calculator to find 5 times 1.5 or 20% of 11.
I'm glad that we have calculators and computing devices, but I'm not glad that they have made teens with basic numeracy into an endangered species. Many tools we use expand our understanding, but the calculator causes our arithmetic skills to atrophy.
From my experience, the more advanced math you learn, the worse you become at arithmetic. I knew a lot of math majors in college, and all of them used calculators all the time.
But if the symbolic manipulation is done by hand, and the numbers are just plugged in at the end to get the final result, along with a sanity check that the answer is realistic? Well, I think that is fair enough.
And spreadsheets are also useful when you need to add up a bunch of things or multiply them.
Calculators are reliable and predictable, so losing skill at that kind of calculation is a safe, compartmentalized offloading. We offload an extremely clearly defined set of tasks, and it gets executed cheaply, immediately, and perfectly.
LLMs are different.
A closer analogy would be something like computer algebra systems, especially for integration. We can offload differentiation cheaply, immediately, and perfectly, but integration will frequently come back with an "unable to evaluate" result. I genuinely wonder whether workers whose jobs require integrals are better or worse at them as a result of growing up with CAS tools. People on the periphery (a biologist, for example) are undoubtedly better off, since they get answers they couldn't get before, but people on the interior (maybe a physicist) might be worse at some things they wish they could do better, relative to those who came up without those tools.
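The asymmetry above has a concrete reason: differentiation is purely rule-driven over the expression tree (sum rule, product rule, chain rule), while integration has no comparable set of local rules, which is why a CAS sometimes hands an integral back unevaluated. A minimal sketch of the rule-driven side, using a made-up tuple grammar for expressions (the grammar and the `diff` helper are illustrative, not any real CAS API):

```python
# Toy symbolic differentiator: expressions are nested tuples.
# ('x',) is the variable, ('const', c) a constant, plus
# ('add', a, b), ('mul', a, b), and ('sin', a).

def diff(e):
    """Differentiate expression e with respect to x, rule by rule."""
    op = e[0]
    if op == 'x':
        return ('const', 1)
    if op == 'const':
        return ('const', 0)
    if op == 'add':                      # (a + b)' = a' + b'
        return ('add', diff(e[1]), diff(e[2]))
    if op == 'mul':                      # product rule: (ab)' = a'b + ab'
        return ('add', ('mul', diff(e[1]), e[2]),
                       ('mul', e[1], diff(e[2])))
    if op == 'sin':                      # chain rule: sin(u)' = cos(u) * u'
        return ('mul', ('cos', e[1]), diff(e[1]))
    raise ValueError(f"unknown operator: {op}")

# d/dx [x * sin(x)] is built mechanically, one local rule per node:
print(diff(('mul', ('x',), ('sin', ('x',)))))
```

Every node is handled by one local rule, so the algorithm always terminates with an answer (possibly unsimplified). There is no analogous recursion for antiderivatives, which is exactly the gap the comment describes.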
I think this is overestimating the impact of LLMs.
Fact is, even if they are capable of fully replicating and even replacing actual human thought, at best they regurgitate what has come before. They are, effectively, a tutor (as another commentator pointed out).
A human still needs to consume their output and act on it intelligently. We already do this, except with other tools/mechanisms (i.e. other humans). Nothing really changes here...
I personally still don't see the actual value of LLMs being realized vs their cost to build anytime soon. I'll be shocked if any of this AI investment pays off beyond some minor curiosities - in ten years we're going to look back at this period in the same way we look at cryptocurrency now - a waste of resources.
> A human still needs to consume their output and act on it intelligently. We already do this, except with other tools/mechanisms (i.e. other humans). Nothing really changes here...
What changes is the educational history of those humans. It's like how the world is getting obese: there are areas where, empirically, we don't choose our long-term good over our short-term good. Apparently homework is one of those things, according to teachers like the one in TFA. Instead of doing their own homework, they're having their "tutor" do their homework.
Hopefully the impact of this will be like the impact of calculators, but I also fear that the impact will be like having tutors do your homework and take your tests until you hit a certain grade and suddenly the tools you're reliant on don't work, but you don't have practice doing things any other way.
I appreciate your faith in humanity. However, you would be surprised at the lengths people will go to avoid thinking for themselves. Ex: a person I sit next to in class types every single group discussion question into ChatGPT. When the teacher calls on him, he reads the answer word for word. When the teacher follows up with another question, you hear "erh, uhm, I don't know" and he fumbles out an answer. Especially in the context of learning, people who have self-control and use AI deliberately will benefit. But those who use AI as a crutch to keep up with everyone else are ill prepared. The difference now is that shoddy work/understanding from AI is passable enough that somebody who doesn't put in the effort to understand can get a degree like everybody else.
I'd suggest this is a sign that most "education" or "work" is basically pointless busy work with no recognizable value.
Perpetuating a broken system isn't an argument about the threat of AI. It's just highlighting a system that needs revitalization (and AI/LLMs are not that tool).