Illustration by Bianca Bagnarelli
Austin Elias-de Jesus
Newsletter editor
Artificial intelligence has, with sudden and crushing speed, seeped into the fabric of our everyday existence. It’s in our love lives, our reading material, our health care. It is also, increasingly, in our schools. As the staff writer Jessica Winter reports, in an alarming new column, many school districts around the country have adopted A.I. tools for elementary-school classrooms, and the practice is quickly spreading. With it comes a growing number of parents, educators, and cognitive scientists who are expressing anxiety about the ubiquity and seeming inevitability of this technology’s use in K-12 education. Does the “efficiency” these tools offer undermine the premise and the promise of learning? What happens when we impose cognitive offloading on kids who have yet to do much cognitive onloading? Did anyone stop to ask whether we should have A.I. in schools at all? I recently caught up with Winter, who covers family and education, to discuss what she learned.
This conversation has been edited and condensed.
How widespread is A.I. use in schools these days?
The vast majority of U.S. school districts use Google Chromebooks, and these devices are often equipped with A.I. tools. But there is little in the way of federal or state regulations governing responsible use of A.I. in K-12 classrooms, so the question of how to employ those tools is left up to individual superintendents, principals, and teachers. That might sound good from the perspective of allowing teachers to chart their own course, but they are receiving almost no formal guidance and, often, no choice over whether their students should be exposed to A.I.
Do those teachers see any benefit to using A.I.?
I was dismayed when a proponent of A.I. in education told me that A.I. will save teachers time on things like writing feedback to students or e-mails to parents. I think that mistakes the whole purpose of teaching as a profession! If a teacher is leaving A.I.-generated feedback on a student essay, the student might as well compose the essay with A.I. Then what are we doing here? It’s just bots talking to each other.
How did we so suddenly get to a place where the majority of schools are using A.I.?
A.I. has been creeping into public education for a couple of years now. The biggest inflection point came when Google decided to capitalize on the ubiquity of Chromebooks in schools by rolling out an all-ages version of the Gemini A.I. suite. All of a sudden, a lot of young students not only had access to Gemini but actually couldn’t avoid it.
What does this A.I. saturation mean for kids? You point to an M.I.T. study that cautioned that A.I. could contribute to “cognitive atrophy.” Should parents be worried?
We probably shouldn’t be foisting A.I. on kids when we don’t have solid evidence that it’s safe, and we do have a lot of evidence to suggest that it has negative effects on cognitive growth and socialization.
Researchers at Wharton coined a phrase, “cognitive surrender,” for how people tend to defer to large language models over their own reason and judgment. Now imagine kids surrendering their critical thinking to A.I. before they’ve even developed critical-thinking skills in the first place. A.I. use, and overuse of screens in general, also has an impact on the development of motor skills, working memory, language processing—the list goes on and on. A.I. also outsources kids’ social-emotional growth. Children need to be learning through building relationships with their teachers and with each other.
It seems A.I. is being treated the way something like typing used to be: If you don’t learn it, you’re going to be left behind. Does the importance of digital literacy outweigh the potential developmental risks?
That’s funny, because even though kids are on Chromebooks all day, most of them aren’t learning how to type! Most A.I. proponents make some version of the digital-literacy argument, but I don’t think it bears scrutiny. No one knows what this technology is going to look like once the students of today enter the workforce. It’s absurd to say that a third grader urgently needs to talk to a chatbot for the sake of his future, when really it’s for the sake of the tech companies that profit when A.I. is embedded in schools.
Do these companies actually know how A.I. could be better used in schools?
The tech companies will emphasize that they are not the education experts and are simply providing schools with the best tools to amplify education. At the same time, the tech world seems to take as a given that A.I. tools are fabulous and necessary, and its leaders have unlimited resources to convince school districts that they’re correct. But just as tech guys aren’t education experts, educators aren’t tech experts. It’s hard for educators to negotiate these decisions in real time. The seemingly inevitable entry of A.I. into K-12 education strikes me as, potentially, another form of cognitive surrender. If we’re going to slow or reverse this trend, now is the time.