Mapping the Lexicon of American Sign Language
When we perceive language, how do we identify words? In spoken language, listeners hear a series of sounds in a particular order (the forms of words), gather evidence from those sounds until the particular word can be identified, and then access the word’s meaning. If we base our theory of language perception only on spoken language, we are missing half the story: how does word recognition occur in signed language? In sign language, the forms of signs are built not from sounds but from four fundamental features: handshapes, locations, movements, and palm orientations (Stokoe, 1972). Unlike spoken words, which are built from strings of multiple sounds, signs generally have only one (possibly two) of each of these features. Presumably, in sign perception the ‘listener’ gathers evidence from these features until the sign can be identified and its meaning accessed.
Interestingly, these features seem to play qualitatively different roles when it comes to identifying words. The number of signs that are similar to a target sign (the sign’s neighborhood density) influences how difficult the target sign is to identify, and this influence depends on how the neighborhood is defined. Location neighbors are signs that share the same location, such as MOTHER and WRONG, which are both produced on the chin; handshape neighbors are signs that share the same handshape, such as POLICE and CHOCOLATE, which both use a handshape that looks like the letter “c”. Signs with many location neighbors are harder to identify than signs with few location neighbors (Baus, Quer, & Carreiras, 2008; Carreiras, Gutiérrez-Sigut, Baquero, & Corina, 2008). However, signs with many handshape neighbors are easier to identify than signs with few handshape neighbors.
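The notion of a feature-defined neighborhood can be made concrete with a small sketch. The toy lexicon below is illustrative only (the feature codes are invented, not taken from any dictionary), and each sign is coded for a single handshape and a single location, as in the transcription scheme described later:

```python
# Toy lexicon: each sign coded for one handshape and one location.
# Feature values here are hypothetical, for illustration only.
lexicon = {
    "MOTHER":    {"handshape": "5", "location": "chin"},
    "WRONG":     {"handshape": "Y", "location": "chin"},
    "POLICE":    {"handshape": "C", "location": "chest"},
    "CHOCOLATE": {"handshape": "C", "location": "back-of-hand"},
}

def neighbors(sign, feature):
    """All other signs sharing the target sign's value on one feature:
    its neighborhood under that feature."""
    value = lexicon[sign][feature]
    return {s for s, feats in lexicon.items()
            if s != sign and feats[feature] == value}

# MOTHER and WRONG are location neighbors (both produced at the chin);
# POLICE and CHOCOLATE are handshape neighbors (both use the "C" handshape).
```

A sign’s neighborhood density under a feature is then simply the size of the set this function returns.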
One reason that handshape neighbors facilitate sign recognition while location neighbors inhibit it could be that these neighborhoods are qualitatively different. To better understand the composition of the neighborhoods, we mapped out the lexicon using a sample of 250 signs from the ASL Handshape Dictionary (Tennant & Gluszak Brown, 2010). Each node represents a sign, and the nodes are connected to a central hub: either the location or the handshape used in the sign. Because the transcription system we used coded only one handshape or location per sign, non-overlapping clusters of signs form in the visualization. Each cluster represents a neighborhood of signs that share a given handshape or location, and the node colors highlight the individual neighborhoods. The visualization demonstrates that handshape and location neighborhoods are qualitatively different: there are many possible handshapes (37 in this data set) but only a few possible locations (13 in this data set). The result is many small handshape neighborhoods and a few large location neighborhoods. This difference in neighborhood structure might explain why handshape and location play different roles in sign recognition. In the future, we plan to explore this possibility both experimentally and through computational modeling.
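The hub-and-cluster structure described above can be sketched in a few lines: each feature value acts as a hub, and the signs coded with that value form its cluster. The coded entries below are hypothetical stand-ins for the 250-sign sample, but the grouping logic is the same that produces many small handshape clusters and few large location clusters:

```python
from collections import defaultdict

# Hypothetical coded sample: sign -> (handshape, location), one value each,
# mirroring the one-feature-per-sign transcription described above.
coded = {
    "MOTHER":    ("5", "chin"),
    "WRONG":     ("Y", "chin"),
    "POLICE":    ("C", "chest"),
    "CHOCOLATE": ("C", "back-of-hand"),
    "APPLE":     ("X", "cheek"),
}

def clusters(feature_index):
    """Group signs into non-overlapping clusters around a shared
    feature value (the hub): 0 = handshape, 1 = location."""
    hubs = defaultdict(set)
    for sign, feats in coded.items():
        hubs[feats[feature_index]].add(sign)
    return dict(hubs)

handshape_clusters = clusters(0)
location_clusters = clusters(1)

# Cluster sizes; in the real 250-sign sample, 37 handshape hubs yield many
# small clusters while 13 location hubs yield a few large ones.
handshape_sizes = {hub: len(signs) for hub, signs in handshape_clusters.items()}
location_sizes = {hub: len(signs) for hub, signs in location_clusters.items()}
```

Because each sign carries exactly one value per feature, every sign lands in exactly one cluster per grouping, which is why the clusters in the visualization do not overlap.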