The Effects of Bi-Modal Input on Fostering L2 Japanese Speech Segmentation Skills

2019-05-15T14:08:55Z (GMT) by Natsumi Suzuki
The purpose of this study was to investigate to what extent bi-modal input improves the word segmentation ability of L2 learners of Japanese. Accurately identifying words in continuous speech is a fundamental process in comprehending the overall message, but studies show that second language (L2) learners often find this task difficult, even when all of the individual words are familiar to them (e.g., Field, 2003; Goh, 2000). This is where the combination of written and audio input (bi-modal input), such as captions in the target language, could be helpful: it provides an orthographic image of the sounds learners hear, which in turn makes the input more intelligible (Charles & Trenkic, 2015). The study employed a single-case design (SCD) in which 12 third-year Japanese learners at a public university in the Midwestern United States took part in a semester-long pre-post experiment. Participants watched a series of Japanese documentaries with sound and captions (bi-modal input) throughout the semester. They completed Elicited Imitation Tasks (EITs) as pre- and post-tests before and after viewing each video, as well as at the beginning and end of the semester. The results showed that most participants improved their EIT scores over the semester, even on utterances from videos and speakers to which they had not been exposed. This study provides evidence that bi-modal input has the potential to help learners’ internal phonological representations of lexical items become more stable and refined, which would in turn contribute to L2 Japanese learners’ speech-processing efficiency.