Date: 11th May 2018
Time: 11:15 – 12:00
Room: Parkview 1
This session draws on current research to explore what the automated scoring of spoken language means for learning and testing, along with the challenges and benefits of using automated speech scoring both in the classroom and in large-scale assessments.
The session will also investigate the human factor in the development of automated scoring tools for speech and in managing the risks of using those tools.
Issues such as designing and developing automated tools and ensuring score quality require dedicated human resources to create and monitor tools that assess language for communicative purposes.
The session will also consider the perceptions of teachers and test-takers regarding automated scoring tools for speech assessment, and the usefulness of automated feedback from such scoring in promoting student learning.