Transformer Attention for Personalized L2 Feedback in CALL
DOI: https://doi.org/10.69760/gsrh.0260301006

Keywords: transformer attention, personalization, CALL, written corrective feedback

Abstract
The rapid diffusion of transformer-based natural language processing has renewed long-standing ambitions in computer-assisted language learning (CALL): scalable, timely, and instructionally meaningful second language (L2) feedback that is sensitive to individual learners’ needs rather than “one-size-fits-all” correction. Contemporary studies on chatbots and large language models (LLMs) suggest measurable promise for L2 vocabulary development, writing support, and learner engagement, while also revealing variability in feedback quality, inconsistency across prompts, and unresolved questions about pedagogical validity and accountability. This article synthesizes research from CALL, feedback theory, and transformer interpretability to propose an attention-informed framework for personalized L2 feedback. The framework treats transformer attention as a computational mechanism for aligning learner language with contextual cues and as a design resource for building inspectable feedback workflows, while explicitly recognizing that attention weights are not automatically faithful explanations of model behavior. Methodologically, we conduct a targeted, citation-grounded synthesis seeded by recent CALL chatbot/LLM research and complemented by foundational work on formative feedback and corrective feedback effectiveness, learner modeling, and transformer visual analytics. Results are presented as design-relevant findings: what current evidence implies for personalization targets (what to correct, how, when), how attention signals may support fine-grained diagnosis and recommendation, and which safeguards are required for responsible deployment in educational settings. Implications are discussed for research design (comparative trials against teacher feedback, delayed posttests, process data), system architecture (learner models + transformer feedback generators), and governance (privacy, transparency, and human oversight).
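To make the abstract's notion of an inspectable attention signal concrete, the sketch below shows one common way to expose per-token attention weights from a pretrained transformer over a learner's sentence. The model name, the head-averaging choice, and the example sentence are illustrative assumptions rather than the article's implementation, and the abstract's caveat applies: attention weights are not automatically faithful explanations of model behavior.

```python
# Minimal sketch (assumptions: bert-base-uncased, last-layer head averaging):
# surface which context tokens a transformer attends to when encoding a
# learner sentence, as one candidate "inspectable" signal for feedback design.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-uncased"  # assumed example model, not the article's
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_attentions=True)
model.eval()

learner_sentence = "She have finished her homeworks yesterday."  # illustrative
inputs = tokenizer(learner_sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer,
# each of shape (batch, num_heads, seq_len, seq_len).
last_layer = outputs.attentions[-1]           # (1, heads, seq, seq)
token_attention = last_layer.mean(dim=1)[0]   # head-averaged: (seq, seq)

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for i, tok in enumerate(tokens):
    # Top-3 tokens this position attends to; a diagnostic view, not an explanation.
    top = token_attention[i].topk(3).indices.tolist()
    print(f"{tok:>12} -> {[tokens[j] for j in top]}")
```

In a feedback workflow of the kind the framework envisions, such head-averaged weights would be only one input alongside a learner model and explicit error annotation, and would be validated against teacher judgments before being shown to learners.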
License
Copyright (c) 2026 Global Spectrum of Research and Humanities

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.