Chloe Beckett, M.A., Nightingale College, South Dakota, US
As I grade my Cultural Anthropology class’s Emic and Etic Perspectives of Halloween essays, two things strike me: 1. how often I write the comment “Capitalize proper nouns,” and 2. how the Turnitin AI scores keep creeping higher and higher.
For anyone who has been teaching anthropology over the last two years, the latter will come as no surprise. (As for the former, perhaps someone who has been teaching thirty years can weigh in – were students always so careless? Has spell-check ruined checking with our actual minds?)
I hear a lot of “hows” from fellow faculty about generative AI use by students. How do we get them to use AI ‘responsibly?’ How do we tell when they are using AI the ‘wrong’ way? How do I interpret this Turnitin report, anyway?
While AI has simply not been in the hands of students long enough for longitudinal data on its impacts, a growing body of research touts it as a learning tool for non-traditional students (such as Dai et al., 2023, and Ouyang et al., 2022, among many). Even with this growing fan club for “correct” AI use, educators seem to universally want to prohibit “bad” AI use. Loaded definitions aside, I think we are focusing on the wrong thing here.
As we all teach in our Introduction to Anthropology classes, the emic perspective is essential for understanding a cultural practice. So why would AI be any different?
As faculty and support specialists, we are all outsiders to the student experience of using AI. Combined with our academic training, we take an etic perspective that focuses on the “big picture” of AI use in higher education. But what about the individuals?
What is motivating students to use AI? Has anyone actually asked them? (There have been coarse-grained sociological studies on students’ perceptions of AI (e.g. Chan & Hu, 2023), but no detailed ethnographic work.)
Are they overworked? Do they just not care? Are they trying to “perform” at a level higher than their abilities?
I, for one, refuse to believe that humans, from whatever socioeconomic background, are suddenly unable to succeed without a modern technological innovation.
Over and over, we tell our students how important it is to take both the emic and etic perspectives into account. We cannot truly understand the cultural significance of any behavior without the emic view. So why would this be any different?
We are the discipline of anthropology. Our unique strength is a personal, individualized approach that spans space and time. I propose a reality check for how we think about students using generative AI – as in, let’s use the strengths of our discipline and ask some ethnographic questions about the reality of students using AI.
Perhaps their answers would surprise us.
Since I am doling out lessons from Introduction to Anthropology, I think it is fitting to consider dear old Malinowski and functionalism. I present functionalism to my students as an understanding that all aspects of culture are interwoven – if there is a change somewhere in the cultural fabric, this will affect other aspects.
I think we can all agree that there have been many social and technological changes in recent years. Do we really think these won’t impact student behaviors? Of course not! This is exactly why there are so many webinars about “How to Connect Meaningfully in a Virtual Classroom.”
There are more factors to generative AI use than just its now-widespread availability. Instead of rifling through statistics about outcomes and trying to cobble together reactive responses, we should find out about the cultural norms and motivations surrounding AI use for students. Do they even realize that Grammarly is AI? Are they concerned, as ESL students, about “fitting in” in the classroom environment? Does the teaching environment itself contribute to how students view AI?
If so, addressing these concerns is a different conversation than asking, ‘How do we detect and deal with “bad” AI?’
We must not underestimate our discipline’s ability to solve our own problems.
References
Chan, C. K. Y., & Hu, W. (2023). Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education, 20(1), 43.
Dai, Y., Liu, A., & Lim, C. P. (2023). Reconceptualizing ChatGPT and generative AI as a student-driven innovation in higher education. Procedia CIRP, 119, 84-90.
Ouyang, F., Zheng, L., & Jiao, P. (2022). Artificial intelligence in online higher education: A systematic review of empirical research from 2011 to 2020. Education and Information Technologies, 27(6), 7893-7925.