Transformer Attention for Personalized L2 Feedback in CALL

Authors

Zarifa Sadigzade, Hasan Alisoy

DOI:

https://doi.org/10.69760/gsrh.0260301006

Keywords:

transformer attention, personalization, CALL, written corrective feedback

Abstract

The rapid diffusion of transformer-based natural language processing has renewed long-standing ambitions in computer-assisted language learning (CALL): scalable, timely, and instructionally meaningful second language (L2) feedback that is sensitive to individual learners’ needs rather than “one-size-fits-all” correction. Contemporary studies on chatbots and large language models (LLMs) suggest measurable promise for L2 vocabulary development, writing support, and learner engagement, while also revealing variability in feedback quality, inconsistency across prompts, and unresolved questions about pedagogical validity and accountability. This article synthesizes research from CALL, feedback theory, and transformer interpretability to propose an attention-informed framework for personalized L2 feedback. The framework treats transformer attention as a computational mechanism for aligning learner language with contextual cues and as a design resource for building inspectable feedback workflows, while explicitly recognizing that attention weights are not automatically faithful explanations of model behavior. Methodologically, we conduct a targeted, citation-grounded synthesis seeded by recent CALL chatbot/LLM research and complemented by foundational work on formative feedback and corrective feedback effectiveness, learner modeling, and transformer visual analytics. Results are presented as design-relevant findings: what current evidence implies for personalization targets (what to correct, how, when), how attention signals may support fine-grained diagnosis and recommendation, and which safeguards are required for responsible deployment in educational settings. Implications are discussed for research design (comparative trials against teacher feedback, delayed posttests, process data), system architecture (learner models + transformer feedback generators), and governance (privacy, transparency, and human oversight).
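To make the central mechanism concrete, the sketch below (not taken from the article) shows one way to pull per-token attention weights from a pretrained transformer encoder for a learner sentence, the kind of inspectable signal the framework proposes to surface in feedback workflows. The model name, the last-layer head-averaged aggregation, and the salience heuristic are illustrative assumptions, and, as the abstract notes, such weights are not automatically faithful explanations of model behavior.

# Illustrative sketch only: per-token attention extraction for a learner sentence.
# Assumes the Hugging Face transformers library; "bert-base-uncased" is a placeholder
# for any encoder that can return attention weights.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_attentions=True)
model.eval()

learner_sentence = "She have went to the library yesterday."
inputs = tokenizer(learner_sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer,
# each shaped (batch, heads, seq_len, seq_len).
# Average over heads in the last layer, then sum the attention each token
# receives, as a rough salience score (a heuristic, not a faithful explanation).
last_layer = outputs.attentions[-1].mean(dim=1)[0]   # (seq_len, seq_len)
salience = last_layer.sum(dim=0)                     # attention received per token

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for tok, score in sorted(zip(tokens, salience.tolist()), key=lambda pair: -pair[1]):
    print(f"{tok:>12s}  {score:.3f}")

In a feedback pipeline of the kind the article envisions, such token-level signals would feed a learner model and a human-inspectable diagnosis step rather than being shown to learners directly.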

Author Biographies

  • Zarifa Sadigzade, Nakhchivan State University, Azerbaijan

Zarifa Sadiqzade, Lecturer, Nakhchivan State University, Azerbaijan. Email: zarifasadig@gmail.com. ORCID: https://orcid.org/0009-0007-1179-1214

  • Hasan Alisoy, Nakhchivan State University

Hasan Alisoy, Lecturer in English, Nakhchivan State University, Azerbaijan. Email: alisoyhasan@ndu.edu.az. ORCID: https://orcid.org/0009-0007-0247-476X

Published

2026-02-11

Issue

Vol. 3 No. 1 (2026)

Section

Articles

How to Cite

Sadigzade, Z., & Alisoy, H. (2026). Transformer Attention for Personalized L2 Feedback in CALL. Global Spectrum of Research and Humanities, 3(1), 44-53. https://doi.org/10.69760/gsrh.0260301006
