The Association for Academic Leaders Advisory Council met last Friday, and AI was on their minds, too. Several members had given the site a try, and the results had impressed, dismayed, or even encouraged them. Several mentioned that the site seemed to have merit as a “jumping-off point” for work to be developed further by the author. A former AP teacher proclaimed an essay “worth at least a 4.” An English teacher followed up after the conversation: while we’d discussed how a student could generate an AI essay and then “back into” an outline and a rough draft or two to appear to have gone through the writing process, she believed that most students who commit academic dishonesty are looking for shortcuts, and that didn’t seem like much of a shortcut to her.

The consensus was that teachers and Academic Leaders would need to explore, understand, and address ChatGPT, as was the case with tools already available in areas such as math, world languages, and art (Wolfram Alpha, AI;R Math, DALL-E, and more). During the conversation, I shared that I’d asked the AI, “How can a teacher tell if an essay is written by AI instead of by the student submitting the work?” (Find ChatGPT’s answer at the end of this post.)

The limits of the AI become evident pretty quickly. The results are hardly inclusive: I asked for novel suggestions, and the AI generated a list of books by white authors (and only one woman). Evidence abounds that AI algorithms are biased, can be racist, and “deepen racial and economic disparities.” While the question seems to be how to make AI “better,” schools would do well to ensure that students understand these profound inequities are baked into much, if not all, of the AI they’ll encounter. And of course, there’s the fact that the user’s data “may be reviewed by our AI trainers to improve our systems.” Yup, in using these tools, you’re giving up valuable insights to the companies building the algorithms.
For these reasons alone, schools need to teach students about AI — how it works, how using it might help learners, and where they should be cautious — rather than ignoring it or trying to keep students from using it with blockers and other tech tools.

Peter Gow and I did some further poking at the AI’s capabilities. We found that answers often became circular once the questions got philosophical. “How to end inequity in education?” ChatGPT helpfully offered that we just needed to examine the root causes of inequity in society at large and eliminate those. Problem solved. On the other hand, we played around with a book idea, and the resulting outline wasn’t bad for organizing ideas into sections and chapters.

AI-generated text is certainly going to be a boon to the publishers of “clickbaity” pieces such as “Top 10 Tips for Tying Shoes” that one might find on a specialty laces shop. So there’s a paying gig for a human gone, and I’ll let others decide whether or not that’s a loss. Canva, a widely used website for infographic and poster generation, has incorporated AI (dubbed Magic Studio) with the caveat that the use of AI must be credited on the final version of the work. Media stories range from The Atlantic proclaiming the “end of the college essay” to the Washington Post celebrating a dyslexic landscaper who has developed new confidence in client communications.

In trying to think about what this addition to the technology landscape means, both right now and going forward, I engaged in writing this blog post, an activity that helped me organize my thoughts around AI-generated text. Organizing one’s thinking is part of every course; teachers can and should consider incorporating AI into this work. The art of crafting the right question to elicit an answer to an assignment is worth teaching, just as we teach students how to write search queries. The natural next step is to consider whether you got useful answers. How do you know?
Asking students to design powerful questions rather than regurgitate memorized answers is nothing new to independent school faculty! Educators can and should explore the impact of using AI to assist students with communication disorders. But once the questions are asked, there’s still the value of engaging in the longer writing process, and here’s where it gets trickier. Genuine engagement in writing longer text forces the writer to organize their thoughts, interrogate what’s on paper, see where more research or re-thinking is needed, and revise accordingly. Paragraphs move, evidence is cited, false trails are followed and then rejected. Teachers will know that students have engaged in that level of work when they know their students well and engage in dialogue with them while they write (or solve complex math problems, or compose music, etc.). Further, the culture of the school (this one’s on you, Academic Leaders) needs to prioritize the inclusive work of intellectual engagement, and that of making thinking visible, over the grading of the end piece.

Oh, and that response I said I’d share?

My question: How can a teacher tell if an essay is written by AI instead of by the student submitting the work?

Response: There are a few ways that a teacher can tell if an essay has been written by AI instead of by the student who submitted it. One way is to look for signs of artificial language, such as repetitive sentences or unnatural word combinations. Another way is to evaluate the content of the essay and see if it is relevant to the topic and meets the requirements of the assignment. The teacher can also look for inconsistencies or errors in the essay that may indicate that it was generated by a machine. Finally, the teacher can ask the student to provide additional information or clarification about the essay to verify that they are the author of the work.

For more on this topic, check out this accompanying blog written by Peter Gow: Using AI's Powers for Good
4 Comments
12/17/2022 09:52:59 am
One additional answer I'd add to the concluding question, "How can a teacher tell?", is to know your students well enough to recognize the quality and style of the writing as theirs or not.
Sarah Hanawald
12/19/2022 04:49:09 pm
Such a good point! We thought about adding our take on it, but ultimately cut this paragraph, which I'm adding here: We’re an online school, so One Schoolhouse’s teachers have to build in time from the beginning to ensure that they deeply know their students’ voices. We intentionally keep our courses small so that our teachers know every student quickly. The asynchronous online experience means that our teachers and students read even more of each other’s writing from the beginning of their student-teacher relationship. In addition, one of the first things our teachers do is schedule a real-time conversation with each student. The work may be asynchronous and digital, but the relationships are real in a way that an AI can’t replicate. We know that this human connection is part of the lifeblood of the schools we partner with, too.
Paul Erb
12/20/2022 10:51:22 am
Good point, John. In the first week of ChatGPT's availability I demonstrated it to all my classes (two English, two French), and then we talked about how important it is to my students to be _known_. That means that Academic Leaders need to allocate time for, and restructure, the teaching of the humanities so that students can collaborate with the teacher — to Sarah's point, working through drafts, with or without the AI, vs. grading the final product. The AI tools are here to stay. We are starting the journey of learning to live with them. And I am developing frameworks that distinguish "assisted" from "unassisted" assignments, a parallel to those Appalachian Trail hikers who go to set records. My questions to the students: "Who are you when the battery dies? How does this system work? What can I do alone? What can we do together?"