
ROBBIE LYMAN

Thoughts on learning

I’ll go ahead and begin by claiming that generative AI has plateaued, if not already peaked. I might come back to justify that claim later on, but the purpose of this post is to share some musings on generative AI as a tool for accessing expertise.

If you trace the arc of other recent flash-in-the-pan technologies, their plateau or peak seems to coincide with the moment they reach deep into the public ken. Like, by the time the conversation about NFTs was really reaching a fever pitch, that was when the whole thing had truly sunk. I’m observing the grift of AI, not to be too mean about it, reaching the academy. It’s possible to use AI in some contexts as just a buzzword, providing cover in your title or abstract for your actual talk topic, the way “topological data analysis” worked for a while within mathematics. Something interesting about AI in this regard, though, is that its buzzword power is pretty interdisciplinary. Will it replace mathematicians? Artists? How does it impact my research in the social sciences? I’m amused.

I’ll concede that generative AI does appear to provide something that NFTs never did, namely a practical use case for a nonzero number of ordinary people. Indeed, it’s been interesting to compare the themes I hear in conversations between users of generative AI who stand in a “have not” relationship to subject-matter expertise and those who have it.

For those in the “have not” camp, LLMs and generative AI are a win for the accessibility of knowledge and skill, lowering the barrier to entry and getting you to “good enough” faster. I hear this a lot with coding, where people working with unfamiliar libraries or languages (or a general lack of coding knowledge) find that using LLMs as a guide speeds them up. There’s also an interesting camp, which I’ll (mildly uncharitably) throw in with this one: people reaching for the ChatGPT API as the value-add of their project. These folks have the coding chops to work with the ChatGPT API, but maybe not the subject-matter expertise to implement their dream project without a silver bullet. A sketch of what I mean follows below.
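To give a sense of how thin that layer can be, here is a minimal sketch of the sort of wrapper I mean, using OpenAI’s Python client. The model name, the system prompt, and the coach function are placeholders of my own invention, not anyone’s actual project.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in your environment

def coach(question: str) -> str:
    """Forward a user's question to the model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are a friendly, expert coach."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(coach("How do I make my kick and bass sit together in a mix?"))
```

The coding part really is about this thin; whatever value such a project ends up having has to come from somewhere other than these fifteen lines.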

For folks with the expertise, LLMs are often framed as not worth it. For code, maybe you spend more time reviewing the generated code than you would have spent writing it by hand; for math, maybe the arguments have a veneer of correctness but no real substance. I’ve heard even more interesting and subtle complaints, like the “Copilot pause,” that beat where you stop typing and wait to see what the model will suggest, taking you out of a flow state.

This semester I’m teaching for the first time since the widespread availability of tools like ChatGPT. We talked briefly about it on one of the first days of my graduate class, and I was surprised to hear that most of my students find it useful and worth turning to. As a teacher, I don’t really mind students using it. Certainly as an undergraduate I remember constantly using Google searches aimed at Math StackExchange as a way of asking for help without asking a human for help. I have (pretty negative) feelings about the energy, water, and compute usage, the unethical scraping of copyrighted data for training, and above all the minimal human understanding we have of the processes behind LLMs. But like, in some ways, these are criticisms one could have aimed at search engines or the internet when both were younger.

There are, though, two things that actively bum me out. Ultimately these are personal or cultural problems, not even really problems with generative AI or LLMs.

The first is that I believe pretty strongly that expertise and broad knowledge are worth developing, and what I’m not seeing from our interactions with generative AI is an understanding of how to use it to actually learn. Learning sucks, right? It’s messy, and you often go through deep chasms of misunderstanding before emerging with a nugget of wisdom that never shares out as well as you hope. I’d love to see more careful review of AI-generated knowledge. So like, if you use it in a coding project, one small way to start would be to type out the code it gives you by hand rather than copying and pasting it. Even that small step, in my experience, engages my faculties for understanding in different ways and helps me actually learn in the process.

The other thing I’m seeing AI used for that makes me kind of sad is as a replacement for turning to an expert. Back in 2022 an acquaintance of mine wanted me to help him embed ChatGPT in a VST, the idea being, I guess, that you could use it as a personal music-production coach. (I refused.) And like, what a waste, right? We clearly have a cultural feeling that values expertise enough that we don’t want to bother people, but it’s funny that this ends up with us wanting to know things and yet not asking the people who do. When I teach, I hold office hours for my class twice weekly. As a rule, students don’t come. (Part of that is on me: I need to do a better job of communicating what office hours are for.) Like, if you want to know something, I’m positive that there are people you can ask who will tell you.

If you don’t care about developing your own expertise, but you want a thing done, my strong feeling is that you should pay someone to do it.