As part of our commitment to diversity, equity, inclusion, and justice, we work to recognize observances and holidays that center the voices and experiences of historically excluded peoples in the United States. Below, we include voices that share our commitment to learning, and to amplifying all voices and experiences.
Learn about Lunar New Year celebrations: This article by the Asia Society recognizes the many traditions in Asia that mark the new year, including Chinese New Year, Songkran (the Thai new year), and Tet (in Vietnam), among others.
Recognize the Lunar New Year at your school: “Learning About Lunar New Year,” from WeTeachNYC and the New York City Department of Education, provides sample lessons and activities for exploring Lunar New Year with grades K-8.
We encourage you to seek out writing about and celebrations of Lunar New Year: In “Celebrating Chinese New Year Far From Home and Family,” writer and educator Frances Kai-Hwa Wang writes about preserving and evolving traditions: “Our Chinese school New Year’s celebrations may not be completely traditional, but we are together.”
The first step is to understand AI: what it is, and what it isn’t. How deeply you want to dive into the details is up to you, but take it far enough to understand that AI uses machine processing and learning to complete tasks previously associated with human intelligence. AI can now automate text- and image-generation processes. It does not do so by “thinking” but by recombining, in microseconds, patterns from the text it has previously absorbed. It has learned the ways that text makes sense to us from the text we’ve already written. AI doesn’t have new ideas, but it creates new text passages and images. What does this mean? It’s a progression of the idea that once a technology exists, it keeps getting faster and more powerful. You’ve known for a while that your phone can predict what you’re likely to say in a text to your mom, or that you’re likely to want to see the faces in a picture more clearly. Your email client thinks it knows how to finish your sentence in a message to a parent; how often is it right? AI “creates” text based on what it finds with its bots, which means that it can be grievously wrong, full of error and prejudice, and sometimes perfectly correct and “worthy of a four” on an exam. One of many great tutorials written with educators in mind is this one from Ditch That Textbook.
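For the curious, the “predict what comes next” idea can be illustrated with a toy sketch. To be clear, this is an illustrative example only, not how any production system works: it simply counts which word follows which in a small sample of text, then always guesses the most frequent follower. Modern AI replaces the counting with neural networks trained on enormous corpora, but the underlying task, predicting the next word from what came before, is the same.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# sample of text, then always pick the most frequent follower.
corpus = (
    "the cat sat on the mat "
    "the cat ate the fish "
    "the dog sat on the rug"
).split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict(word):
    """Return the word seen most often after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" more often than any other word
```

Run it and the model “predicts” that “cat” follows “the,” for the unglamorous reason that it saw that pairing most often. There is no understanding here, only statistics over prior text, which is the intuition worth carrying into conversations about much larger systems.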
The big picture is that AI is now coming for “intellectual professions” just as mechanization and robotics replaced humans in manufacturing. Is this a good thing? Maybe. Calculators enable math-heavy professions to focus on bigger problems by handling number crunching. AI can generate the “routine creativity” elements of work done by people who draft reports or analyze data in their professions. What this means for the world of work is fascinating (and beyond the scope of this post), and yet that’s the world of work our current students will be entering. So, Academic Leader, get your head around AI, if for no other reason than that you’ll likely be reading AI-generated text sooner rather than later.
After you’ve looked around a bit, it’s time to focus inward and lead your community in forging the path forward, which is another competency for Academic Leaders: make decisions with resolve and empathy. A caution: don’t frame this as an academic integrity issue, but recognize AI for what it is, a powerful tool rapidly on its way to being ubiquitous. What are your school’s existing values and culture around assistive technology? What are the exact competencies that assignments and assessments are evaluating, and how well do they evaluate them? If these haven’t been clarified at your school, there’s work to do, and as is almost always the case, you’ll need to differentiate your “right now” needs from the longer term.
For those “right now” issues, you probably have teachers who are looking at next week’s assignments and realizing that AI can now generate perfectly reasonable responses. Worse, they’ve got major papers and semester assessments that now need interrogation. What’s the immediate solution? One consideration is to shift what happens during in-class and out-of-class time. There are calls for teachers to accept only handwritten work completed under the watchful eyes of an adult; is that realistic or desirable in your community? Other voices are suggesting that pedagogy should be reconsidered so that students examine or critique the AI-generated response as part of their learning. Many wonder how teachers can design assignments so that the conversations and feedback around writing are more visible and present. No matter the approach, your school will need to teach students about AI, because they’ll certainly be learning about it from somewhere, and you want your values to be part of that conversation.
Teachers have questions, and Academic Leaders can help them arrive at answers by drawing upon the expertise of those who have been through some of this thinking already. You have internal experts on campus. Engage with your world languages department leader and ask how they’ve handled the development of translation software that operates in real-time. (And maybe ask how they’re planning to handle voice-mimicking AI, too.) Math leaders have pretty significant experience managing the intentional use of problem-solving technology tools! Other in-house experts undoubtedly include library/media specialists, the technology team, digital arts, and learning support staff. Gather the right folks around a table (real or virtual) and have a practical conversation about what to do right now and what to postpone for a thoughtful professional exercise later this spring/summer. Don’t forget the school’s communications team–AI text and image generation is exploding in their world, too. And you can certainly bet the college counselors are talking about AI-generated essays!
Once you have a plan, it’s time to put another one of our competencies into action: communicate effectively across multiple domains. In your voice, share and clarify how you’ve been researching AI, along with your expectations, understandings, plans, and yes, penalties. Share that this is a work in progress, and that you’re watching the technology evolve. You certainly learned how to communicate amid an evolving landscape over the last few years; use that competency now. Conduct demos for those who need or request them so that they can build their own understanding of AI. Include uses such as “write an email to a parent or guardian about a student’s successful project.” Share links to articles you found helpful, but make sure your primary communications pieces (emails, videos) are authentic to your voice and community.
You’ve got this!
As we look ahead to Martin Luther King, Jr. Day next week, we want to lift up the words of others who share our commitment to learning, and amplify all voices and experiences. In our newsletters, we work to recognize observances and holidays that center the voices and experiences of historically excluded peoples in the United States as a part of our commitment to diversity, equity, inclusion, and justice.
Learn about the history of Martin Luther King Jr. Day. The National Museum of African American History & Culture details the 15-year battle to make Martin Luther King, Jr. Day a holiday.
Recognize Martin Luther King Jr. Day in your school and community. Learning for Justice details the ways that teachers can avoid “a sanitized narrative” about Dr. King’s work, and accurately represent and teach his “more radical approach to justice” which requires antiracist action and not colorblind neutrality.
We encourage you to seek out the many voices speaking and writing about Martin Luther King Jr. Day. On the National Civil Rights Museum website, Dr. Ibram X. Kendi writes, “If King’s well-known dream symbolized the glorious march of racial progress over the last five decades, then King’s unknown nightmare symbolized the inglorious march of racist progress over the last five decades.” In a New York Times interview, bell hooks reflects on philosophy and race: “I always think, ‘What does Martin Luther King want me to do today?’ Then I decide what Martin Luther King wants me to do today is to go out into the world and in every way that I can, small and large, build a beloved community.”
The other day my friend David Cutler posted a short video that demonstrated with crystal clarity the possibilities of AI as a tool for composition, and a colleague shared with me some of their own experiments. It works really well, possibly TOO well if you worry that it will just become a tool for academic dishonesty and general short-cutting of the learning process. But great minds at great universities seem already to have developed apps for spotting AI-generated text, so perhaps that menace may be brought under control, sort of, at some point.
But David’s excellent video inspired my inner optimist to follow a line of thought that I had been musing on for a while already: How to make a virtue of necessity; how to find ways to harness the benefits of AI’s power to stimulate, rather than replace, student thinking and learning. In the video, David challenges an AI engine to generate some essays on a text, his prompts sufficiently nuanced to require that the AI demonstrate its capacity (Or is it an ability? How far may I anthropomorphize here?) to draw upon important textual content and even specific elements (page numbers, no less!) of the book under analysis as it creates, almost instantly, very well-crafted essays in response to the prompts. Yikes!! The thing is good—really, really good.
It occurred to me, and I am shamelessly going to quote and paraphrase here the comment I left on David Cutler’s video post, that perhaps a fruitful and affirming approach to this awesome new technology might be to focus more on the way we approach the matter through the development by human students of questions and prompts—of the disposition to and skills of inquiry—and less on AI's ability (there; I did it) to compile astonishingly competent answers from the gazillion human-created sources upon which it can draw. What if a teacher were to assign students the task of crafting, oh, say, three very different essay prompts for the AI engine to answer, with the point being that the responses would illuminate the major themes of the work and the ways in which the author explores and expresses these themes? That’s kind of the easy part. The AI bangs out its three lovely essays, voilà! The student has posed questions and can now read expert responses that will likely contain connections and nuances the student may not have previously noted. A learning experience in itself, this!
BUT THEN the task of the student, hopefully in a medium that is somehow “AI-proof” (some have suggested having students handwrite work as a way to circumvent AI, but that’s a whole other discussion), would be to reflect on the response essays in relation to the prompts, on the value of the prompts themselves in light of the responses, and then on their own learning in the process. This work, accomplished in whatever ways seem apt—in class? oral? workshopped collaboratively in small groups?—would in itself provide quite a bit of illumination on any text, historical era or event, or other cultural artifact/production under study.
In creating such clever artificial intelligence engines, we—and once these things are ubiquitous it’s “we,” not “they”—have created and are unleashing a tool with enormous new and as yet not fully understood power. We’re educators, and we need to find ways to apply this kind of power and not simply spend our time trying to thwart its misuse—though this must also be on our minds. By asking students to reflect on what “AI says” relative to their own understandings, we can create critical dialogues instead of resigning ourselves to the imminence and inevitability of a “singularity” in which machines subsume or become us.
Our human task, as educators who have committed ourselves to helping create a better future by helping to expand the perspectives and hone the habits of mind of rising generations, is to think deeply and clearly about what we ask such powerful tools to do and how we ask them to accomplish the tasks we set for them—not just by assigning constraints and parameters but even, very intentionally, engaging with the real stuff of life: emotions, ethics, and larger concerns like equity, justice, and human survival.
AI is great power, and we have to learn for ourselves and to teach our children how to use such power for positive, humane, moral ends.
For more on this topic, check out this accompanying blog written by our Senior Director, Sarah Hanawald: AI Can Write Essays: What Does This Mean for Educators?
The Association for Academic Leaders Advisory Council met last Friday, and AI was on their minds, too. Several of the members had given the site a try, and the results had impressed, dismayed, or even encouraged them. Several mentioned that the site seemed to have merit as a “jumping off point” for work to be further developed by the author. A former AP teacher proclaimed an essay “worth at least a 4.” An English teacher followed up after the conversation by saying that while we’d discussed how a student could generate an AI essay and then “back into” an outline and a rough draft or two to appear to have gone through the writing process, she believed that most students who commit academic dishonesty are trying to create shortcuts, and that didn’t seem to be much of a shortcut to her. The consensus was that teachers and Academic Leaders would need to explore, understand, and address ChatGPT, as was the case with tools already available in areas such as math, world languages, and art (Wolfram Alpha, AIR Math, DALL-E, and more).
During the conversation, I shared that I’d asked the AI, “How can a teacher tell if an essay is written by AI instead of by the student submitting the work?” (Find ChatGPT’s answer at the end of this post.)
The limits of the AI become evident pretty quickly. The results are hardly inclusive. I asked for novel suggestions and the AI generated a list of books by white authors (and only one woman). Evidence abounds that AI algorithms are biased, can be racist, and “deepen racial and economic disparities.” While the question seems to be how to make AI “better,” schools would do well to ensure that students understand that these profound inequities are baked into much, if not all, of the AI they’ll encounter. And of course, there’s the fact that the user’s data “may be reviewed by our AI trainers to improve our systems.” Yup, in using these tools, you’re giving up valuable insights to the companies building the algorithms. For these reasons alone, schools need to teach students about AI: how it works, how using it might help learners, and where they should be cautious. That beats ignoring it, or trying to keep students from using it with blockers and other tech tools.
Peter Gow and I did some further poking at the AI’s capabilities. We found that answers were often circular once the questions got philosophical. “How to end inequity in education?” ChatGPT helpfully offered that we just needed to examine the root causes of inequity in society at large and eliminate those. Problem solved. On the other hand, we played around with a book idea, and the resulting outline wasn’t bad for organizing ideas into sections and chapters.
AI-generated text is certainly going to be a boon to the publishers of “clickbaity” pieces such as “Top 10 Tips for Tying Shoes” that one might find on a specialty laces shop. So there’s a paying gig for a human gone, and I’ll let others decide whether or not that’s a loss. Canva, a widely used website for infographic/poster generation, has incorporated AI (dubbed Magic Studio) with the caveat that the use of AI must be credited on the final version of work. Media stories range from the Atlantic proclaiming the “end of the college essay” to the Washington Post celebrating that a dyslexic landscaper has developed new confidence in client communications.
In trying to think about what this addition to the technology landscape means, both right now and going forward, I engaged in writing this blog post, an activity that helped me organize my thoughts around AI-generated text. Organizing one’s thinking is part of every course; teachers can and should consider incorporating AI into this work. The art of generating the right question to elicit an answer to an assignment is worth teaching, just as we teach students how to write search queries. The natural next step is to consider whether you got useful answers. How do you know? Asking students to design powerful questions rather than regurgitate memorized answers is nothing new to independent school faculty! Educators can and should explore the impact of using AI to assist students with communication disorders.
But once the questions are asked, there’s then the value of engaging in the longer writing process, and here’s where it gets trickier. Genuine engagement in writing longer text forces the writer to organize their thoughts, interrogate what’s on paper, see where more research or re-thinking is needed, and revise accordingly. Paragraphs move, evidence is cited, false trails are followed, then rejected. Teachers will know that students have engaged in that level of work when they know their students well and engage in dialogue with them while they write (or solve complex math problems, or compose music, etc.). Further, the culture of the school (this one’s on you, Academic Leaders) needs to prioritize the inclusive work of intellectual engagement, and that of making thinking visible, over the grading of the end piece.
Oh, and that response I said I’d share?
My question: How can a teacher tell if an essay is written by AI instead of by the student submitting the work?
Response: There are a few ways that a teacher can tell if an essay has been written by AI instead of by the student who submitted it. One way is to look for signs of artificial language, such as repetitive sentences or unnatural word combinations. Another way is to evaluate the content of the essay and see if it is relevant to the topic and meets the requirements of the assignment. The teacher can also look for inconsistencies or errors in the essay that may indicate that it was generated by a machine. Finally, the teacher can ask the student to provide additional information or clarification about the essay to verify that they are the author of the work.
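As a thought experiment, the first heuristic in that answer, looking for repetitive phrasing, can be sketched in a few lines of code. This is a hypothetical illustration only (the function name and threshold are my own inventions, not part of any real detector): it measures what share of three-word phrases in a passage are repeats. Real detection tools are far more sophisticated and still unreliable, so nothing like this should ever be used on its own to accuse a student.

```python
import re
from collections import Counter

def repeated_trigram_ratio(text):
    """Fraction of three-word phrases in `text` that occur more than once.

    A crude proxy for "repetitive sentences": higher values mean more
    recycled phrasing. Illustrative only; not a real AI-text detector.
    """
    words = re.findall(r"[a-z']+", text.lower())
    trigrams = list(zip(words, words[1:], words[2:]))
    if not trigrams:
        return 0.0
    counts = Counter(trigrams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(trigrams)

sample = "The essay is clear. The essay is focused. The essay is long."
print(round(repeated_trigram_ratio(sample), 2))  # 0.3: "the essay is" repeats
```

Even this toy example shows why such heuristics cut both ways: a young writer still finding their voice repeats phrases too, which is exactly why the conversation belongs in pedagogy, not just policing.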
For more on this topic, check out this accompanying blog written by Peter Gow: Using AI's Powers for Good
Brad Rathgeber (he/him/his)