Psycholinguistic theories have predominantly been built on data from spoken language, leaving open the question of how many of their conclusions reflect language-general principles rather than modality-specific ones. We take a step toward answering this question in the domain of lexical access by adapting a computational model of spoken and written word processing to sign perception. We show that a single cognitive architecture may explain diverse behavioral patterns in both signed and spoken language.
Deaf people who learned language late retrieve signs from long-term memory differently than people who acquired language from birth: late learners tend to focus on the forms of signs, whereas early learners focus on their meanings. By altering properties of our cognitive architecture to make it resemble that of late learners, we can explain some of these differences between late and early learners.