AI in education threatens student autonomy, business expert warns

NEWCASTLE UPON TYNE, ENGLAND — Integrating AI into education is leading students to outsource their critical thinking to algorithms, fundamentally shifting control over knowledge itself away from educational institutions, warns Kimberley Hardcastle, a professor of business and marketing at Northumbria University.
Business Insider reports that this change, Hardcastle argues, risks creating a generation of dependent learners and eroding society’s capacity for independent judgment.
Student critical thinking affected by AI
The pervasive use of generative AI for academic tasks risks a decline in students’ fundamental cognitive skills. Data from Anthropic illustrates how widespread the reliance has become: in April, 39.3% of student conversations with its AI involved creating or refining educational materials, and 33.5% directly asked the chatbot to answer homework questions.
The problem, Hardcastle argues, goes beyond mere convenience: it is cognitive, because students who lean on AI bypass the processes of synthesis and critical evaluation.
Hardcastle calls this an “atrophy of epistemic vigilance”—an important skill for independently verifying and questioning information.
Students now use AI not only to find answers but also to determine what a good answer should be. “This affects job prospects not through reduced ability, but through a shifted cognitive framework where validation and creation of knowledge increasingly depend on AI mediation rather than human judgment,” she warns.
Universities face challenge over AI knowledge control
A more profound structural risk is the transfer of epistemic authority from educational institutions to the commercial entities that build and control AI systems.
Hardcastle cautions that when AI becomes the primary mediator of knowledge, the companies that run these systems effectively gain control over what counts as valid knowledge.
“When we consistently defer to AI-generated summaries and analyses, we inadvertently allow commercial training data and optimization metrics to shape what questions get asked and which methodologies appear valid,” she explained.
The consequence is the entrenchment of corporate influence over the very creation and validation of knowledge, quietly shifting authority to algorithmic logic. Hardcastle states that the critical question for education is whether it will consciously shape AI integration to preserve human epistemic agency—the capacity to think and reason independently.
Unless universities move beyond operational fixes, she warns, AI could erode independent thought while AI companies profit from controlling how knowledge is constructed.

Independent




