By Terance Espinoza, Ph.D.
Updated October 2025
“Artificial Intelligence” is a broad and multifaceted field that is constantly evolving. In the context of pedagogy in higher education, much of the discussion focuses on generative AI large language models (LLMs) such as ChatGPT (OpenAI), Claude (Anthropic), DeepSeek (DeepSeekAI), Grok (xAI), Gemini (Google), Copilot (Microsoft), and others, both in their role as research tools and in their role as tools for plagiarism.
As the technology driving AI is beyond my field of expertise (my Ph.D. is in Theology), this will serve as a landing page for resources I have found useful when thinking through AI in higher education (AIEd). I am open to suggestions, though not from the companies selling AI products (e.g., Grammarly webinars on how great their product is). My particular concern is whether there is any role for AI in higher education that does not come at the human cost of cognitive off-loading, intellectual theft, algorithmic bias, job displacement, harm to mental health, compromised data security, economic infeasibility, and environmental impact. At the moment it seems that the cognitive debt incurred by using AI, even just for brainstorming, erodes the very skills that are fundamental to a humanities education.
What is Generative AI?
°George Lawton, “What is GenAI? Generative AI Explained,” TechTarget. https://www.techtarget.com/searchenterpriseai/definition/generative-AI
°“What is Generative AI?” McKinsey & Company. April 2, 2024. https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-generative-ai
°Museum of Science, “What is AI,” YouTube. https://youtu.be/NbEbs6I3eLw?si=LRnRdyYHtWXECXck
°CGP Grey, “How AIs, like ChatGPT, Learn,” YouTube. https://youtu.be/R9OHn5ZF4Uo?si=hdEkTI05mqaYFNIy
°CGP Grey, “How AI, Like ChatGPT, *Really* Learns,” YouTube. https://youtu.be/wvWpdrfoEv0?si=MWJ4azgWOLL3NWsW
Impacts of AI
°Lorena O’Neil, “These Women Tried to Warn Us About AI,” Rolling Stone, August 12, 2023. https://www.rollingstone.com/culture/culture-features/women-warnings-ai-danger-risk-before-chatgpt-1234804367/
°From the UK House of Lords, Communications and Digital Select Committee: OpenAI written evidence (LLM0113). https://committees.parliament.uk/writtenevidence/126981/pdf/. “Because copyright today covers virtually every sort of human expression – including blog posts, photographs, forum posts, scraps of software code, and government documents – it would be impossible to train today’s leading AI models without using copyrighted material. Limiting training data to public domain books and drawings created more than a century ago might yield an interesting experiment, but would not provide AI systems that meet the needs of today’s citizens.” – OpenAI in 2023 to the UK House of Lords.
°Melumad, Shiri, and Jin Ho Yun. “Experimental Evidence of the Effects of Large Language Models versus Web Search on Depth of Learning” (January 20, 2025). The Wharton School Research Paper. https://dx.doi.org/10.2139/ssrn.5104064. Abstract: “The effects of using large language models (LLMs) versus traditional web search on depth of learning are explored. Results from four online and laboratory experiments (N = 4,591) lend support for the predictions that when individuals learn about a topic from LLMs, they tend to develop shallower knowledge than when they learn through standard web search, even when the core information in the results is the same. This shallower knowledge accrues from an inherent feature of LLMs—the presentation of results as syntheses of information rather than individual search links—which makes learning more passive than in standard web search, where users actively discover and synthesize information sources themselves. In turn, when subsequently forming advice on the topic based on what they learned, those who learned from LLM syntheses (vs. standard search results) feel less invested in forming their advice and, more importantly, create advice that is sparser, less original—and ultimately less likely to be adopted by recipients. Implications of the findings for recent research on the benefits and risks of LLMs are discussed.”
°Kosmyna, Natalia, et al. “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task.” Preprint, June 10, 2025. https://doi.org/10.48550/arXiv.2506.08872, https://arxiv.org/pdf/2506.08872. Abstract: “As the educational impact of LLM use only begins to settle with the general population, in this preliminary study we demonstrate the pressing matter to explore further any potential changes in learning skills based on the results of our study. The use of LLM had a measurable impact on our participants, and while the benefits were initially apparent, as we demonstrated over the course of 4 sessions, which took place over 4 months, the LLM group’s participants performed worse than their counterparts in the Brain-only group at all levels: neural, linguistic, scoring.”
A preliminary finding is, “When individuals fail to critically engage with a subject, their writing might become biased and superficial. This pattern reflects the accumulation of cognitive debt, a condition in which repeated reliance on external systems like LLMs replaces the effortful cognitive processes required for independent thinking. Cognitive debt defers mental effort in the short term but results in long-term costs, such as diminished critical inquiry, increased vulnerability to manipulation, decreased creativity. When participants reproduce suggestions without evaluating their accuracy or relevance, they not only forfeit ownership of the ideas but also risk internalizing shallow or biased perspectives.”
°Shojaee, Parshin, et al. “The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity,” Machine Learning Research, Apple, June 2025. https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf. Abstract: “Through extensive experimentation across diverse puzzles, we show that frontier LRMs [Large Reasoning Models] face a complete accuracy collapse beyond certain complexities.” Conclusion: “Our findings reveal fundamental limitations in current models: despite sophisticated self-reflection mechanisms, these models fail to develop generalizable reasoning capabilities beyond certain complexity thresholds. We identified three distinct reasoning regimes: standard LLMs outperform LRMs at low complexity, LRMs excel at moderate complexity, and both collapse at high complexity… These insights challenge prevailing assumptions about LRM capabilities and suggest that current approaches may be encountering fundamental barriers to generalizable reasoning.”
°“How to use ChatGPT and other AI tools as a college student…WITHOUT CHEATING,” George Fox Digital, YouTube. https://www.youtube.com/watch?v=VR9X9kRdgbk