Exploring Malaysian Undergraduates’ Experiences with Speechace for Speaking Test Preparation
DOI: https://doi.org/10.36777/
Keywords: Artificial Intelligence (AI), speaking assessment, speaking skills, Speechace, Technology Acceptance Model (TAM)
Abstract
This study explores how Malaysian undergraduate students use Speechace, an AI-powered speech recognition tool, to prepare for speaking tests. It examines students' perspectives on Speechace's effectiveness, ease of use, and challenges through the lens of the Technology Acceptance Model (TAM). A qualitative case study approach was used to gather in-depth exploratory data comprising speaking scores, feedback, and semi-structured interviews with seven third-year students from an Intermediate English course at a Malaysian university. The findings indicate that Speechace supports improvements in pronunciation, fluency, grammar, and vocabulary, with students progressing from CEFR B1 to B2 levels. Thematic analysis of the interviews identified five key themes: (a) the tool's effectiveness and features, (b) speech and feedback issues, (c) peer influence, (d) user attitudes, and (e) challenges. Students found Speechace engaging because of its interactive learning experience, instant feedback, and user-friendly interface. However, they also pointed out drawbacks such as limited preparation time, non-specific feedback, and occasional speech recognition errors. The study emphasises the role of teachers and peers in enhancing AI-assisted language learning, and highlights Speechace's potential as a self-guided learning tool for speaking practice outside the classroom, while identifying areas for improvement, particularly in feedback quality and adaptability. It contributes to discussions on AI in language education and assessment.
Downloads
License
Copyright (c) 2026 The Author(s)

This work is licensed under a Creative Commons Attribution 4.0 International License.