I think anyone with an eye for what effort really looks like can probably tell the difference between ChatGPT's writing and a human being's. The AI tool can be used as a cheating device: with enough practice and precisely the right prompts, it can generate (I refrain from saying it can really write) a decent essay. That said, it can just as easily help students edit papers they have honestly written themselves, saving time and reducing stress for many, especially those with heavier workloads. Don't get me wrong, the art and practice of manually editing (among other tasks) is still important to understand and be able to do, but once someone has already learned how it works, the manual effort is sometimes unnecessary. So Katherine Schulten poses a valid question in the title of her New York Times opinion piece, "How Should Schools Respond to ChatGPT?" She also poses several other important questions that not only educators but their students should be taking into account.
- Have you experimented with ChatGPT, whether in school or on your own? What did you think? How promising or useful do you think it is? Why?
I’ve experimented with ChatGPT, and it really can be an extremely useful tool when it’s used with the right intent. Like I said before, it can be used to skip learning things, or it can be used as an aid to things already learned. Teachers, for example, could have ChatGPT generate practice work for students who are doing well on certain topics while they help struggling students understand what’s being taught. Or, when the way a teacher phrases something doesn’t click for some students, those students could ask the AI for an alternative explanation and begin to understand the material a bit better. No class is going to have a bunch of students who all learn exactly the same way or at the same pace. ChatGPT could potentially help with that.
- Why do you think many educators are worried about it? The New York City school system, for instance, has blocked access to the program for fear of “negative impacts on student learning, and concerns regarding the safety and accuracy of content.” Do you agree? What “negative impacts” can you imagine? What, if anything, worries you about this tool?
Educators seem to be more concerned about cheating than about ChatGPT itself; the bot is just the means through which that cheating might occur. Sure, some healthy boundaries are entirely necessary to deter academic dishonesty, and they’re also necessary to ensure there is a legitimate measure of whether or not students are actually learning the content being taught (which means math being done manually, or in-class review of their, there, and they’re). So I don’t think blocking ChatGPT is the best idea, but it’s not the worst either. A better solution might be to create in-class, interactive, tech-free lessons that keep students engaged when they are first learning content. Having no boundaries on ChatGPT might create a lazy work ethic and keep students from learning the content (though not from learning anything at all, if I’m honest; users would still learn how to use AI).
- This article argues that ChatGPT’s potential as an educational tool outweighs its risks. How do you feel? Should teachers “thoughtfully embrace” this technology? If so, what could that look like? For example, how would you imagine using the chatbot on an upcoming assignment in a way that supports your learning?
Like I’ve said before, technology like this isn’t inherently bad. As far as I’m concerned, God created everything to be good, but things ended up distorted (in other words, bad) because of sin, or how we as human beings handled them. Nothing is inherently bad; it’s just distorted when it’s handled in the wrong ways. The case is similar with ChatGPT: when we approach its use thoughtfully and intentionally, it can be a great help and support to how we teach or learn.
- Some educators say the threat of widespread student cheating means the end of classroom practices such as assigning homework, take-home tests and essays. Do you agree? Or, do you think those activities can be reimagined to incorporate the use of chatbots? If so, would that be a good thing? Why or why not?
I would somewhat agree, but frankly any cheating will reveal itself in assignments and tests completed in class without any technology. This is why a healthy balance between using technology and going without it is important to modern pedagogy. Allowing the use of chatbots lets students not only learn but also satisfy the (often subconscious) perceived need for the technology, and in theory it makes well-designed, tech-free in-class activities more interesting and engaging.