Teaching with AI: Research-Backed Strategies for Educators
Recent empirical work has begun to clarify both the promise and the limitations of generative artificial intelligence in education. Rather than treating AI as a novelty or a threat, educators can now draw on this research to evaluate its effects on learning outcomes, academic integrity, and pedagogy with increasing precision.
AI in Education: What the Research Shows
A meta-analysis of 51 experimental and quasi-experimental studies conducted between late 2022 and early 2025 reports large positive effects on student learning performance (Hedges’ g = 0.867) and moderate gains in learning perception and higher-order thinking. The authors conclude that “ChatGPT use can significantly improve learning performance when instructors provide structured prompts and align AI-supported tasks with higher-order learning objectives” (Wang & Fan, 2025). These outcomes were strongest when activities required evaluation or synthesis and were integrated with instructor feedback. This reinforces prior evidence that technology’s educational value depends not merely on adoption but on its alignment with robust instructional design—see TeachThought’s pedagogy hub for related guidance.
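For readers less familiar with the statistic: Hedges' g is a standardized mean difference, essentially Cohen's d with a small-sample correction, so g = 0.867 indicates that AI-supported groups outperformed comparison groups by roughly nine-tenths of a pooled standard deviation; by Cohen's conventional benchmarks (0.2 small, 0.5 medium, 0.8 large), that is a large effect. As a sketch, the standard textbook estimator (the meta-analysis's exact variant may differ) is:

\[
g = J \cdot \frac{\bar{x}_1 - \bar{x}_2}{s_{\text{pooled}}},
\qquad
J = 1 - \frac{3}{4(n_1 + n_2 - 2) - 1}
\]

where \(\bar{x}_1\) and \(\bar{x}_2\) are the group means, \(s_{\text{pooled}}\) is the pooled standard deviation, and \(J\) is Hedges' correction factor, which shrinks the estimate slightly when samples are small.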
Faculty concerns remain significant. Academic integrity is among the most frequently cited challenges, with instructors reporting apprehension that students will bypass the intellectual work of writing, problem-solving, or analysis by submitting unedited AI outputs. Surveys confirm that this concern is widespread, though direct measures of misuse are limited. Instructors also express unease about intellectual passivity: when students are not explicitly taught to interrogate machine-generated responses, they may accept them as authoritative, diminishing opportunities for critical thinking. As Kasneci and colleagues note, “LLMs should be introduced with a focus on critical use, encouraging students to challenge and verify outputs rather than simply consume them” (Kasneci et al., 2023). These risks are compounded by the known limitations of large language models, which can produce biased, incomplete, or entirely fabricated information.
Equity considerations further complicate adoption. RAND’s 2025 survey of U.S. teachers indicates that AI use is more prevalent in lower-poverty schools, suggesting that access to devices, training, and institutional support still mediates student opportunity. These findings highlight the importance of designing AI initiatives that are inclusive and provide scaffolding for both students and instructors.
Current best practice emphasizes thoughtful integration rather than prohibition or uncritical enthusiasm. Instructors who adopt AI successfully tend to embed it within activities that require students to critique, revise, or extend machine-generated work. This might include comparing multiple AI responses to a single prompt and evaluating them against disciplinary criteria, or generating examples that students must fact-check and refine before submission. Research suggests that such structured use can enhance metacognition and transfer, helping students develop the literacies necessary to engage critically with algorithmic outputs. For inquiry-aligned routines you can adapt to AI tasks, see TeachThought’s Questioning & Inquiry hub.
Practical Strategies for Teaching with AI
Align every AI-related task with a clear learning objective, preferably one that targets analysis, evaluation, or creation. When students use AI to generate drafts, code, or problem solutions, require visible thinking: annotated revisions, justification of changes, or a brief methodological note that explains how AI was used and where human judgment improved the output.
Use AI to strengthen metacognition. Ask students to request multiple AI explanations of the same concept, compare them, and write a reflection on clarity, accuracy, and bias. In STEM courses, have students prompt AI to produce alternative problem contexts, then solve, debug, and generalize. In writing-intensive courses, pair AI-generated feedback with peer and instructor feedback so students learn to discriminate between shallow and substantive advice.
Address academic integrity through design and transparency. Prefer process-focused assessments (draft portfolios, oral defenses, live problem-solving) and require AI-disclosure statements so use is visible rather than hidden. Provide brief norms on acceptable vs. unacceptable use keyed to your discipline, and link to institutional policy where applicable.
Invest in professional development and equity. Faculty benefit from practicing prompt design, calibrating expectations for output quality, and reviewing data-privacy implications. Institutions should ensure that students have access to devices and connectivity, and should provide low-cost options where possible, so that AI-supported learning does not widen existing gaps.
The emerging consensus in the literature is that generative AI can enhance learning when its use is transparent, scaffolded, and aligned with higher-order thinking goals. Institutions and instructors that treat AI as an object of inquiry—something to be examined, evaluated, and debated—rather than a hidden shortcut or forbidden tool are most likely to see gains in student engagement and performance.
Works Cited
Kasneci, E., Sessler, K., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., … & Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274.
RAND Corporation. (2025). Uneven Adoption of Artificial Intelligence Tools Among U.S. Teachers: Results from the 2023–2024 American Educator Panels.
Wang, J., & Fan, W. (2025). The effect of ChatGPT on students’ learning performance, learning perception, and higher-order thinking: insights from a meta-analysis. Humanities and Social Sciences Communications, 12, 621.
