Lexical Access in ASL

With Ariel M. Cohen-Goldberg

Most major theories of the human language faculty have focused primarily on spoken languages. This presents two problems: little is known about sign language processing, and language and modality are frequently confounded in the psycholinguistic literature. This confound is apparent in the area of lexical access, the process by which we perceive and produce signs or words. On the surface, words and signs appear to be quite different, but the function they serve is similar. The current research attempts to tease apart the roles of language and modality in word/sign processing: properties that differ between signed and spoken languages are modality-dependent, while properties shared across them may be language-general. In this project, I focus on the two stages of word/sign processing that deal most directly with the forms of words (phonological and phonetic processing), because these stages are the most likely to differ across modalities. The goal of this project is to address the following question: To what extent are the mechanisms of phonological and phonetic processing similar across modalities, despite the apparent differences in the forms of words and signs?

This project combines computational simulations and behavioral experiments. The first goal is to create a computational model of sign perception that is identical to existing models of spoken word perception, except that the auditory elements are replaced with manual/visual elements. This model will be designed to match existing behavioral data on sign perception. The next step will be to test the model's predictions behaviorally: a first experiment will examine the role of phonological representations in sign perception, and a second will examine the role of phonetic representations in sign production.
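To make the modeling plan concrete, the following is a minimal sketch of one way such a model could be structured. It assumes a TRACE-style interactive-activation architecture (one standard family of spoken word perception models) in which the auditory feature layer is replaced by manual/visual parameters (handshape, location, movement). The mini-lexicon, parameter values, and connection weights are hypothetical illustrations, not the project's actual model.

```python
import numpy as np

# Hypothetical mini-lexicon: each sign is a bundle of sublexical parameters
# (handshape, location, movement). Values are illustrative only.
LEXICON = {
    "MOTHER": {"handshape": "5", "location": "chin",     "movement": "tap"},
    "FATHER": {"handshape": "5", "location": "forehead", "movement": "tap"},
    "DEAF":   {"handshape": "1", "location": "cheek",    "movement": "arc"},
}

# One feature unit per (dimension, value) pair occurring in the lexicon.
FEATURES = sorted({pair for sign in LEXICON.values() for pair in sign.items()})
F_INDEX = {f: i for i, f in enumerate(FEATURES)}
SIGNS = list(LEXICON)

# Feature-to-sign weights: 1 where the sign contains the feature, else 0.
W = np.zeros((len(SIGNS), len(FEATURES)))
for s, name in enumerate(SIGNS):
    for pair in LEXICON[name].items():
        W[s, F_INDEX[pair]] = 1.0

def perceive(input_sign, steps=20, excite=0.15, inhibit=0.10, decay=0.05):
    """Interactive activation: feature units excite consistent sign units,
    sign units compete via lateral inhibition, and activation decays."""
    feat_act = np.zeros(len(FEATURES))
    for pair in input_sign.items():
        feat_act[F_INDEX[pair]] = 1.0
    sign_act = np.zeros(len(SIGNS))
    for _ in range(steps):
        bottom_up = excite * (W @ feat_act)              # support from features
        rivals = inhibit * (sign_act.sum() - sign_act)   # lateral inhibition
        sign_act = np.clip(sign_act + bottom_up - rivals - decay * sign_act,
                           0.0, 1.0)
    return {name: round(float(a), 3) for name, a in zip(SIGNS, sign_act)}

# Input matching MOTHER exactly; FATHER shares handshape and movement,
# so it should show partial ("neighbor") activation.
print(perceive({"handshape": "5", "location": "chin", "movement": "tap"}))
```

In this sketch, an input matching MOTHER also partially activates FATHER, which shares two of its three parameters. Graded activation of phonological neighbors is exactly the kind of prediction the planned model would generate and the behavioral experiments would test.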