Archived Projects

HEMISPHERIC PROCESSING OF PITCH ACCENT IN JAPANESE BY NATIVE AND NON-NATIVE LISTENERS

Research Team: Xianghua Wu, Jung-yueh Tu, Saya Kawase, Yue Wang

Using the dichotic listening paradigm, this study investigates the hemispheric processing of Japanese pitch accent by native and non-native listeners. The main questions addressed include the extent to which the temporal window and functional load of speech prosody, as well as listeners’ linguistic experience, affect the hemispheric specialization for Japanese pitch accent. Specifically, this study examines: (1) how native and non-native speakers process a type of lexical prosody (pitch accent) imposed on disyllabic words, (2) whether processing of this disyllabic prosody differs from that of a monosyllabic one, such as lexical tones in Mandarin, and (3) for non-native listeners, whether a tone/stress language background affects the processing of Japanese pitch accent. We are currently investigating how learners of Japanese process pitch accent patterns.

EFFECTS OF LINGUISTIC AND MUSICAL EXPERIENCE ON NON-NATIVE PERCEPTION OF THAI VOWEL DURATION

Research Team: Angela Cooper, Richard Ashley (Bienen School of Music, Northwestern University), Yue Wang

The present study investigates the influence of linguistic and musical experience on non-native perception of Thai phonemic vowel length distinctions across varied speaking rates. Using identification and AX discrimination tasks, we hypothesized that native Thai listeners would identify and discriminate these native vowel length contrasts more accurately than the English group, across speaking rates. Furthermore, the native group was not predicted to be as sensitive to within-category differences (such as long vowels at fast and normal rates) as the non-native group. Finally, given that musicians are trained to discern temporal distinctions in music, English musicians were predicted to identify and discriminate the non-native vowel length distinctions more accurately than English non-musicians, particularly at faster rates of speech.

ELECTROPHYSIOLOGICAL STUDY OF LINGUISTIC AND NON-LINGUISTIC PITCH PROCESSING

Research Team: Yang Zhang (University of Minnesota), Dawn Behne (Norwegian University of Science and Technology), Angela Cooper, Yue Wang

Using high-density ERP, this research examines speech and non-speech pitch processing by both tone and non-tone language speakers. We investigate the extent to which early perceptual sensitivities and late categorization abilities are influenced by linguistic and/or musical experience with pitch, and whether such experience is transferable between speech and non-speech tonal pattern processing. Additionally, we examine learning-induced brain plasticity by training non-native learners of tone languages to perceive linguistic tones.

ACOUSTIC-PERCEPTUAL PROPERTIES OF CROSS-LANGUAGE LEXICAL-TONE SYSTEMS

Research Team: Jennifer Alexander, Yue Wang

Lexical-tone systems use pitch to signal word meaning; they exist in 70% of languages but are under-studied compared to segmental (consonant/vowel) systems. We extend a well-studied model of second-language sound-structure perception (the Perceptual Assimilation Model, Best and Tyler, 2007), which has traditionally focused on segments, to lexical tones. In doing so, we aim to determine the effect of native-language tone experience on perception of novel lexical tones.

We first aim to evaluate whether and how experience with a tone language affects the organization of non-native tones in acoustic-perceptual space. Listeners will use a free classification paradigm (Clopper, 2008) to classify native- and non-native lexical tones. We then will examine how perceptual proximity affects identification of non-native tone categories: listeners are expected to more quickly and accurately identify tones belonging to contrastive categories present in their native inventories. Finally, we investigate how perceptual proximity affects discrimination of non-native tones. We predict that listeners will more quickly and accurately discriminate, and will be more sensitive to differences between, tones judged to be highly dissimilar (relative to tones judged to be highly similar).

EFFECTS OF LINGUISTIC AND MUSICAL TRAINING EXPERIENCE ON THE PERCEPTION OF LEXICAL AND MELODIC PITCH INFORMATION

Research Team: Daniel Chang, Yue Wang, Nancy Hedberg

This research examines how tone-language experience influences the perception of music. Native Cantonese speakers, native English speakers, and early English-Cantonese bilinguals will participate in a relative-pitch task and an absolute-pitch task. The study explores whether early exposure to a tone language, such as Cantonese, facilitates the musical abilities of absolute pitch and relative pitch; that is, it will reveal whether speaking a tone language benefits music perception.

THE PROCESSING AND LEARNING OF PROSODY

Using EEG and behavioral testing methods, this project addresses how linguistic prosody is processed in the brain, and how its neural organization may be affected by linguistic and non-linguistic experience and learning, such as musical training. The goal of this study is to investigate the extent to which neural processing in second language (L2) learning is influenced by linguistic experience, or instead reflects a hardwired human ability to process general physical properties.

AUDITORY AND ARTICULATORY PRIMING EFFECTS ON THE PERCEPTION AND PRODUCTION OF SPEECH SOUNDS

Research Team: Lindsay Walker, Trude Heift, Yue Wang

This research investigates how auditory and articulatory priming segments affect the production and perception of speech sounds, respectively. Specifically, this study looks at late learners of English whose native language is Cantonese and who have difficulty perceiving and producing English voiced obstruents. Auditory primes of these difficult segments are followed by a production task in order to assess whether priming facilitates more accurate pronunciation. Additionally, articulatory primes are followed by a perception task in order to assess whether articulating a segment can facilitate better perception. It is expected that priming in either domain will be facilitative given that previous research has shown a strong connection between speech production and speech perception.

CAN CO-SPEECH HAND GESTURES FACILITATE THE LEARNING OF NON-NATIVE SPEECH SOUNDS?

Research Team: Allard Jongman, Joan Sereno, Katelyn Eng, Beverly Hannah, Keith Leung, Yue Wang

This project predicts that incorporating hand gestures indicating lexical tone directionality during training will result in higher post-test tone identification accuracy than in a second group of trainees who receive no hand gestures or a third group who receive no face information. Each of the 54 English-speaking participants will be trained on the four Mandarin tones using video trials.

THE EFFECTS OF VISUAL INFORMATION ON PERCEIVING ACCENTED SPEECH

Research Team: Saya Kawase, Yue Wang, Beverly Hannah

This study examines how visual phonetic information in non-native speech productions affects native listeners’ perception of foreign accent. Native English listeners are asked to judge stimuli spoken by non-native Japanese speakers in an accent rating task. The Japanese speakers are also matched with a group of native English controls. Given that native listeners perceive L2 production errors both visually and auditorily, audiovisual stimuli are expected to be perceived as having a stronger foreign accent, especially for the more visually salient segments.

EXAMINING VISIBLE ARTICULATORY FEATURES IN CLEAR AND CONVERSATIONAL SPEECH

Research Team: Lisa Tang, Ghassan Hamarneh (Computing Science, SFU), Allard Jongman, Joan Sereno (University of Kansas), Beverly Hannah, Keith Leung, Yue Wang

This project examines the effects of speech style (conversational and clear) and modality (auditory and visual) on articulatory and acoustic characteristics as well as the intelligibility of speech sounds. Using state-of-the-art computer vision and image processing techniques, we examine videos of speakers' faces and extract facial movements in different speech styles. Their acoustic correlates are examined through detailed acoustic measurements. We also examine how native and non-native perceivers use visual articulatory information when perceiving these speech sounds in different styles.

NEURAL CORRELATES OF MATHEMATICAL PROCESSING IN BILINGUALS

Research Team: Ping Li, Shin-Yi Fang (Pennsylvania State University), Yue Wang

This research investigates the roles of working memory capacity, strategies for solving mathematical problems, and level of proficiency in modulating neural response patterns during mathematical processing in English-Chinese bilinguals.

ROLE OF LINGUISTIC EXPERIENCE IN AUDIO-VISUAL SYNCHRONY PERCEPTION

Research Team: Dawn Behne (PI), Yue Wang, and members of the Speech Lab (Norwegian University of Science and Technology) and Language and Brain Lab (SFU)

The temporal alignment of what we hear and see is fundamental for the cognitive organization of information from our environment. Research indicates that a perceiver's experience influences sensitivity to audio-visual (AV) synchrony. We theorize that experience that enhances sensitivity to speech sound distinctions in the temporal domain would enhance sensitivity in AV synchrony perception. On this basis, a perceiver whose native language (L1) involves duration-based phonemic distinctions would be expected to be more sensitive to AV synchrony in speech than a perceiver whose L1 makes less use of temporal cues. In the current study, simultaneity judgment data from participants differing in L1 experience with phonemic duration (e.g., English, Norwegian, Estonian) were collected using speech tokens with different degrees of AV alignment: from the audio preceding the video (audio-lead), to the audio and video being physically aligned (synchronous), to the video preceding the audio (video-lead). Findings of this research contribute to understanding the underpinnings of experience and AV synchrony perception.

FACESCAN: VISUAL PROCESSING OF PROSODIC AND SEGMENTAL SPEECH CUES: AN EYE-TRACKING STUDY

Funding: Social Sciences and Humanities Research Council of Canada (SSHRC)

Research Team: Yue Wang (SFU Linguistics), Henny Yeung (SFU Linguistics), and members of the Language and Brain Lab (SFU) and Language and Development Lab (SFU).

Facial gestures carry important linguistic information and improve speech perception. Research including our own (Garg, Hamarneh, Jongman, Sereno, and Wang, 2019; Tang, Hannah, Jongman, Sereno, Hamarneh, and Wang, 2015) indicates that movements of the mouth help convey segmental information, while eyebrow and head movements help convey prosodic and syllabic information. Perception studies using eye-tracking techniques have also shown that familiarity with a language influences looking time at different facial areas (Barenholtz, Mavica, and Lewkowicz, 2016; Lewkowicz and Hansen-Tift, 2012). However, it is unclear to what extent attention to different facial areas (e.g., mouth vs. eyebrows) differs for prosodic versus segmental information and as a function of language familiarity. Using eye-tracking, the present study investigates three questions. First, we examine differences in eye-gaze patterns to see how different prosodic structures are processed in a familiar versus an unfamiliar language. Second, we focus on monolingual processing of segmental and prosodic information. Third, we compare segmental and prosodic differences in familiar versus unfamiliar languages. Results of this research have significant implications for improving strategies for language learning and early intervention.

FACTORS INFLUENCING CANTONESE LEXICAL TONE AND TONE WORD ACQUISITION

Research Team: Angela Cooper and Yue Wang

This research examines how linguistic and musical experience influence non-native Cantonese tone perception and word learning. Native Thai and English listeners, subdivided into musician and non-musician groups, engaged in a perceptual training program. During training they learned words distinguished by five Cantonese tones, and they also completed a musical aptitude task and pre- and post-training lexical tone identification tasks. This study investigates how these two factors interact, what impact their combination has on the acquisition of non-native tone words, and which factor facilitates the acquisition of new lexical items to a greater degree.

We will also compare a group of native English non-musicians who undergo lexical tone training before completing the word-learning program with the English musicians and non-musicians who received no tone training. This will enable us to examine how raising listeners’ tonal awareness transfers to the acquisition of tone words.

PERCEIVED NATIVENESS OF TEMPORAL ADJUSTMENTS IN SPEECH

Research Team: Yue Wang, Dawn Behne (Norwegian University of Science and Technology, Norway), and Qi Dong (Beijing Normal University, China)

Native Mandarin Chinese speakers’ productions of English consonant-vowel (CV) syllables have shown syllable-internal temporal adjustments in the direction of native (English)-like CVs (Wang & Behne 2004). This research investigates whether these temporal adjustments affect perceived nativeness.