Cognitive Science. 2025 Apr 1;49(4):e70051. doi: 10.1111/cogs.70051

Playing With Language in the Manual Modality: Which Motions Do Signers Gradiently Modify?

Casey Ferrara 1, Jenny C. Lu 1, Susan Goldin‐Meadow 1,2
PMCID: PMC12136017  PMID: 40166957

Abstract

Language is traditionally characterized as an arbitrary, symbolic system, made up of discrete, categorical forms. But iconicity and gradience are pervasive in communication. For example, in spoken languages, word forms can be “played with” in iconic gradient ways by varying vowel length, pitch, or speed (e.g., “It's been a loooooooong day”). However, little is known about this process in sign languages. Here, we (1) explore gradient modification in three dimensions of motion in American Sign Language (ASL), and (2) ask whether the three dimensions are equally likely to be modified. We asked deaf signers of ASL (n = 11, mean age = 49.3) to describe an event manipulated along speed, direction, or path, and observed their use of gradient modification in lexical and depicting signs. We found that signers alter the forms of both types of signs to enhance meaning. However, the three motion dimensions were not modified equally in lexical signs, suggesting constraints on gradient modification. These constraints may be linguistic in nature, found only in signers. Alternatively, the constraints could reflect difficulties in using the hands to convey particular modifications and, if so, should be found in speakers as well as signers.

Keywords: Sign languages, Iconicity, Gradience, Depiction

1. Introduction

1.1. Using gradience to play with language

After a day of endless meetings, you might mention to your coworker that the day feels slow. In fact, you might think to yourself, slow does not fully capture it. Perhaps you want to highlight just how endless this day has felt. Rather than opting for a different word, you may instead choose to “modify” the word slow to convey your intended meaning more forcefully. One way you might go about this is by using an external modifier such as an adjective or an adverb. These words modify what we might think of as the dictionary definition of slow, as shown in the examples below.

  1. Today feels extremely slow.

  2. Thursdays are always incredibly slow.

Another way to achieve the same effect is to “play with” the structure or sounds of the word itself.

  3. Today feels slooooooow.

In this example, the speaker is dragging out the sounds in the word—specifically the vowel length. In so doing, the word “slow” is, in fact, produced more slowly than is typical. This vowel lengthening is not a phonemic feature of the word “slow,” nor would the lengthening be considered a morphological process. However, the transformation is clearly meaningful. Dragging out the sound is not just drawing attention to the word—it is adding the meaning “really slow.” This instance of language play allows the speaker to express the modified meaning without using external modifiers.

Language play is not unique to English and is found in many spoken languages. For example, Akita (2020) describes vowel lengthening in Japanese ideophones (words that depict sensory imagery, Dingemanse, 2012). The vowel in the ideophone guruQ [ɡɯɾɯʔ], which means “turning around,” can be lengthened gradiently. The resulting form [ɡɯɾɯːːːːʔ] means “making a long turn.”

We see similar modifications in sign language. Signers can play with their signs by modifying elements of manual form; for example, by repeating a phonological parameter or altering its length to capture changes in degree, intensity, number, size, and so on (Fuks, 2014, 2016). Fig. 1 presents examples of a signer of Israeli Sign Language (ISL) gradiently modifying the handshape in her sign for WALKING IN HEELED SHOES. In Fig. 1a, the signer uses a conventional handshape, with thumb and pinky extended, to indicate that she walked in stiletto heels. In Fig. 1b, she uses a handshape that is not part of the ISL inventory (the thumb is no longer extended) to indicate that she walked in shorter heels (adapted from Fuks, 2014).

Fig. 1.

Fig. 1

The sign for WALKING IN HEELED SHOES in Israeli Sign Language produced (a) with a conventional handshape indicating that she is walking in stiletto heels and (b) with a modified handshape (not part of the inventory of ISL) which indicates that she is walking in shorter heels.

These examples from both spoken and signed languages demonstrate the meaningful use of gradience in language. The modifications are gradient in form and capture gradient differences in meaning. This phenomenon challenges traditional notions of language as a system in which meaning is conveyed through discrete symbols perceived categorically (Hockett, 1960). These gradient modifications are effective in conveying meaning because they rely on an iconic relationship between the form of the modification and its intended meaning.

Iconicity is a motivated relationship between the features of real‐world referents and their linguistic forms (Frishberg, 1975; Taub, 1997). The phenomenon we explore here is based on imagistic iconicity,1 which involves a direct resemblance (in appearance, sound, etc.) between some quality of the form and what it refers to (Hiraga, 1994; Hodge & Ferrara, 2022; Peirce, 1897). The lengthening of the word “slow” in example 3 and the distance between the thumb and pinky in the handshape in Fig. 1b both have a direct resemblance to the referents (or more precisely, to changes in the referents) that they represent—the word slow becomes slower, and the handshape representing shorter heels becomes shorter.

There are, however, iconic relationships between form and meaning that are not imagistic and are, instead, more abstract and metaphorical. For example, English‐speakers can modify the word “hot” by extending its vowel to create “hooooooooot.” This modification intensifies the meaning of the word, which is typically interpreted as “very hot.” But note that the word “hooooooooot” is not hotter than the word “hot” in the way that “sloooooooow” is slower than the word “slow.” Akita (2020) draws a similar distinction between expressive features that serve an emphatic function and those that serve an elaborating function.2 Certain features and transformations can be applied to a word to emphasize its meaning or enhance its emotional power; others elaborate or further specify new meaning. It is this latter category that we explore here.

We see parallel examples of this distinction in sign language. The American Sign Language (ASL) sign SLOW is produced by moving the palm of one hand up the other hand and arm (Fig. 2). Signers can modify the speed of the sign so that the hand moves more quickly up the arm; surprisingly, this modification is used to mean “more slowly” in ASL. Quickening the speed of the sign intensifies its meaning, which has an iconic component (e.g., Wilcox, 2004, suggests that the metaphoric conceptualization of intensity can be seen in the quickening movement, which represents a release of pent‐up energy). However, as in the “hot” example, the sign meaning “more slowly” is not slower than the sign meaning “slow” and, in fact, is produced more quickly. The modification captures an abstract metaphorical relationship between form and meaning and not the direct imagistic relationship that we require for gradient modification in our study.

Fig. 2.

Fig. 2

The ASL sign SLOW produced in its citation form (a) and with a morphological speed change (b). The meaning “very slow” is conveyed by producing the sign more quickly. (Figure reprinted with permission from Klima and Bellugi, 1979).

As a final point about what counts as gradient modification in our study, note that speakers and signers can gradiently modify word/sign forms in ways that do not correspond to changes in meaning at all. For example, speakers may vary their speaking rate because they are nervous or in a hurry, not to convey gradations in meaning. Signers can increase their signing space when communicating at a distance or modify their sign forms to increase their size when communicating with nonfluent signers or children. These modifications are meant to heighten the salience of the sign, not to modify its meaning.3 Our focus here is not on these types of gradient modifications, but rather on gradient modification designed to capture imagistic aspects of meaning.

There is a second requirement for the notion of gradient modification that we explore here—it must work within the phonological/morphological system of the language. In Okrent's (2002) discussion of gradient modifications in speech, which she calls “vocal gesture,” she notes that there will inevitably be restrictions on how this gradience combines with the categorical forms in a linguistic system. For example, although the modification shown in example 4 still involves the gradient manipulation of speed/lengthening in an imagistic way, it would be unlikely that an English speaker would produce it.

  4. *Today feels ssssssssssslow.

Certain sounds or parts of a word (in this case, the vowel) are better suited for the stretching out and slowing down transformation than others. As another example of restrictions on when a gradient form can be used in speech, manipulating pitch in a sentence like Her voice went up [high pitch] and down [low pitch] works in English but would be less effective in a tonal language like Mandarin, where tone is lexically specified and has phonemic value (Okrent, 2002). Gradience in speech is constrained by the linguistic system into which it fits.

Little work has explicitly explored restrictions on gradience in sign language, but there is reason to expect restrictions. Supalla (2003) argues that, even in creative and playful language such as jokes, plays‐on‐words, or artistic renditions, the modifications signers make to handshape must conform to constraints on phonological formation (see also Fuks, 2014, 2016). As an example, Duncan (2005) asked signers of Taiwanese Sign Language (TSL) to describe a cartoon that has been used to study hand gestures in speakers of many different spoken languages (McNeill, 1992). The TSL signers used their hands to capture the same visual aspects of the cartoon characters’ movements as speakers capture with their manual gestures (e.g., a cat slinking up the inside of a drainpipe). But the signers accommodated their gestures to the handshape required for small animals in TSL—all of the signers overlaid their gestures onto this same handshape; in contrast, the hearing speakers used a variety of handshapes in their gestures.

Our goal here is to explore signers’ use of imagistic gradience to modulate motion meanings, and to ask whether there are restrictions on how that gradience is used.

1.2. Gradient modification in speech

The modifications we have been discussing are depictive rather than descriptive and, in this sense, differ from typical morpho‐syntactic transformations. Peirce (1897) proposed a three‐way distinction among symbols, icons, and indexes to describe meaningful communication. Clark and Gerrig (1990) later introduced the terms description, depiction, and indicating (pointing) to capture similar distinctions. Depiction and description are two modes of representation whose semiotic distinction lies in the way meaning is mapped onto form and how that meaning is understood. Depiction involves gradient mapping of meaning onto form; its meaning can be grasped even by those who do not know the language. In contrast, description involves mapping discrete meaning categories onto discrete form categories; there is typically an arbitrary element to the mapping.

For the most part, studies of language consider only descriptive communication. As noted by Dingemanse and Akita (2016), “a common shorthand for the distinction [between description and depiction] is ‘word’ versus ‘image’, reflecting a traditional view of language as a system of arbitrary words fully in the descriptive mode, with the depictive method of communication at best playing a secondary role in the gestures and bodily aspects of ‘paralanguage’.” In many arenas, language has come to be synonymous with descriptive (de Saussure, 1915; Hockett, 1960; Newmeyer, 1992; Peirce, 1897; Whitney, 1874).

Depiction has been excluded from traditional accounts of language because of its gradient and iconic nature. This exclusion is due, in part, to the widely adopted notion that an arbitrary relation between a form (signifier) and its referent (signified) is an essential feature of language (de Saussure, 1915). Others have dismissed the importance of depiction in spoken language on the grounds that its role is limited to small portions of the lexicon, for example, onomatopoeia (Newmeyer, 1992; Whitney, 1874; for review, see Perniss, Thompson, & Vigliocco, 2010; Schmidtke, Conrad, & Jacobs, 2014). However, cross‐linguistic investigations of depiction suggest that it is more pervasive than once thought. Rich inventories of sound‐symbolism in the form of ideophones (marked words depictive of sensory imagery; Dingemanse, 2012) have been found in an array of spoken languages spanning sub‐Saharan Africa (Msimang & Poulos, 2001; Schaefer, 2001), Australia (Alpher, 2001; McGregor, 2001), South‐eastern Asia (Watson, 2001), and South America (Nuckolls, 2001). Moreover, many languages do not restrict their depictive forms to representations of sound (i.e., onomatopoeia), but also include depictions of other sensory perceptions (Akita, 2009; Dingemanse, 2012).

Studies of ideophones find that a variety of gradient transformations can be applied to these forms. Akita (2020) describes “expressive features” of form (voicing, pitch modulation, reduplication, as well as lengthening, mentioned earlier) that can flexibly extend the forms of an ideophone to elaborate its meaning (see also Akita, 2017; Childs, 1995; Dingemanse, 2017; Dingemanse & Akita, 2016; Nasu, 2002; Rhodes, 1995). Example 5 contains instances of depictive modifications, including the vowel lengthening example mentioned earlier, based on the Japanese motion ideophone guruQ [ɡɯɾɯʔ] (Akita, 2020, p. 11).

  5. Expressive features modifying the Japanese ideophone guruQ [ɡɯɾɯʔ] “turning around”

    a. Vowel lengthening: [ɡɯɾɯːːːːʔ] “making a long turn”

    b. Mora augmentation: [ɡɯɾːɯːːʔ] “turning around energetically” (gemination); [ɡɯnɾɯːːʔ] “turning around energetically” (nasal insert)

    c. Prosodic foregrounding: [inline graphic] “turning around quietly” (voiceless)

The vowels and consonants in these examples are modified gradually in Akita's terms, gradiently in ours, which leads to a mimetic depiction of duration, intensity, or loudness (see also Kawahara & Braver, 2013, 2014 as well as Hinton, 1994).

1.3. Gradient modification in sign

We turn now to gradient modification in sign language. The potential for imagistic gradience in a language will inevitably interact with the affordances of the modality of that language. Spoken languages have the potential to readily depict sound information. Signed languages (and visual gesture) can be highly expressive of visual information. The movement of a sign has the potential to depict the movement of referents in the world in a directly imagistic way (Akita, 2020; Dingemanse, Blasi, Lupyan, Christiansen, & Monaghan, 2015).

In this study, we focus on modification of movement in signs for two reasons. First, movement, as it is referred to in sign phonology, is made up of several different dimensions, each of which may be more or less restricted in how it can be used for gradient depiction. Second, actions can be conveyed by two different kinds of signs in ASL: lexical signs have a specified, categorical movement; classifier or depicting signs flexibly depict movement in an analog way. This aspect of ASL allows us to explore how overlaying gradience onto a categorical form to depict meaning compares to overlaying it onto an already‐gradient form.

Lexical or “frozen” signs have fully lexically specified forms found in a sign language dictionary and are described in terms of four parameters: Handshape, Movement, Location, and Orientation (Stokoe, 1980). In lexical signs, these parameters are generally considered meaningless subparts of a sign, which become meaningful in a specific way only when combined. For example, on their own, a “tapping” movement, “on the chin” location, and an F handshape are meaningless subunits. It is only when combined that these linguistic units become meaningful as the ASL sign translated as SOON,4 much like the sounds /s/ /u/ and /n/, which are not individually meaningful, become meaningful when combined into the English word “soon.”

In contrast to lexical signs, in depicting signs, the four parameters are considered morphological and thus meaningful (Johnston & Ferrara, 2012; Stokoe, 1980; Supalla, 1986; Zwitserlood, 2012). For example, in ASL, a one‐handshape (the index finger extended with the remaining fingers closed in a fist) represents a person; a three‐handshape (the thumb, index, and middle fingers extended, with the ring finger and pinky closed) represents a vehicle. To describe a moving object with a depicting sign, a signer selects a handshape from a limited set of specified handshapes that best represents the moving object; the signer combines that handshape with a motion that represents the moving act. Although handshapes in depicting signs are drawn from a limited set of categories, motion in depicting signs is believed by some to be created in the moment by the signer (Cormier, Smith, & Sevcikova Sehyr, 2013; Emmorey & Herzig, 2003; Liddell, 2003; Lu & Goldin‐Meadow, 2018). The movement through space in a depicting sign is determined by properties of the actual event, mapping directly onto the referent's location and movement in an analog way; in this sense, the movement may be considered gestural (see Supalla, 1982, for a different view).

Fig. 3 presents an example of a lexical sign and a depicting sign conveying the same meaning. To describe a person running quickly in a circle using lexical signs, a signer might produce the signs PERSON, RUN, FAST, CIRCLE (see Fig. 3a). However, to describe a person running quickly in circles using a depicting sign, a signer might produce the lexical sign for PERSON, and then move the 1‐handshape (denoting a person) in a fast, circular motion to iconically depict the running event (see Fig. 3b).

Fig. 3.

Fig. 3

The proposition “a person runs quickly in circles,” expressed via lexical signs (a), and via a depicting construction (b). The depicting construction simultaneously expresses the “run,” “quick,” and “circle” information; the lexical signs represent these concepts using discrete units (Figure created using images from www.lifeprint.com).

A signer's willingness to gradiently modify a sign to convey subtleties in meaning could also be influenced by the iconicity of the sign itself. Some spoken language researchers have claimed that gradient modifications are commonly found in iconic words, such as ideophones (Akita, 2020; Dingemanse & Akita, 2016; Zwicky & Pullum, 1987; see the Japanese ideophone guruQ in example 5). However, much of the research that finds iconic words to be prone to acoustic and prosodic manipulations (particularly in child‐directed language) focuses on the unique and marked ways these forms are modified, not on whether the modifications correspond to changes in meaning (e.g., Laing, Vihman, & Keren‐Portnoy, 2017; Sundberg & Klintfors, 2009).

Our focus here is on gradient modifications of form in sign language that capture gradient aspects of meaning. We expect depicting signs to naturally support this type of modification. But many lexical signs also display some degree of visual iconicity (Caselli, Sevcikova Sehyr, Cohen‐Goldberg, & Emmorey, 2017). To the degree that the presence of an iconic mapping between a sign's form and its meaning supports gradient manipulation of that form to capture subtleties in meaning, we might expect gradience more often in lexical signs that have iconic components than lexical signs that do not.

1.4. The design of our study and predictions

In our study, we present signers with two scenes at a time. The first scene is designed to elicit the citation (neutral) form of a sign (e.g., a man running in a straight line); the second scene has a change in one of three dimensions of movement—speed, direction, or path (e.g., the same man running in a straight line but more quickly). We ask signers to describe both scenes and note, first, whether they mention the change in the event (speed, in this case) and, if so, what devices they use to convey the change—by gradiently modifying the citation form of the sign to capture the change in speed (direction, or path), or by adding lexical items that capture the change.

Some research suggests that movement in classifier verbs of motion in sign may be gestural in nature. Singleton, Morford, and Goldin‐Meadow (1993) analyzed signers’ and nonsigners’ performance on the Verbs of Motion Production task (Supalla et al., 1995). Participants view a motion event and then describe that event using signed language (for signers) or gesture without speech (for hearing nonsigners). They found that when nonsigners were asked to describe the events using silent gesture, their use of movement and location was strikingly similar to ASL signers’ (88% match); their handshapes were far less similar (22% match). Schembri, Jones, and Burnham (2005) compared deaf native signers of either Australian Sign Language or TSL to Australian hearing nonsigners. Again, they found that nonsigners did not resemble signers in their handshapes but were surprisingly similar to both sets of signers in their movements. The large overlap between signers and silent gesturers in direction and manner of movement suggests that these two dimensions may have a gestural component, which has the potential to support gradient modification. Although the third dimension, speed, has not been varied in studies of sign, it has been varied in studies of speech. English‐speakers were asked to describe an animated dot moving at different speeds from left to right or from right to left, using the phrase It is going left or It is going right. The speakers spoke faster when the dot moved more quickly even though the propositional content of the utterance did not refer to speed (Shintel, Nusbaum, & Okrent, 2006). Follow‐up studies by Perlman (2010) and Perlman, Clark, and Johansson Falck (2015) found evidence for speed‐related modulations of speech rate in unconstrained naturalistic speaking tasks. All three dimensions (path, direction, and speed) thus have the potential to be gradiently modified.

At the same time, path, direction, and speed can be used categorically in ASL to distinguish between lexical signs. Fig. 4 (top) displays two signs, CHRISTIAN5 and COMMITTEE,6 which are identical except for direction; note that the difference in direction between the two signs does not capture the imagistic aspects of the referents. Fig. 4 (bottom) displays two signs, SCHOOL7 and PAPER,8 which are identical except for path; here again, the difference in path between the two signs does not capture the imagistic aspects of the referents. Finally, speed can also be used to distinguish between referents in ways that are not imagistically iconic (see the example of SLOW vs. VERY‐SLOW in Fig. 2, discussed in Section 1.1).

Fig. 4.

Fig. 4

Two examples of minimal pairs. The signs CHRISTIAN and COMMITTEE both involve a straight path of movement, directed either diagonally down from shoulder to hip (a) or horizontally from one shoulder to the other (b). The signs SCHOOL and PAPER are both articulated in neutral space with the palms contacting each other, either with a straight path (c) or a curved path (d). Handshape, palm orientation, and initial location are the same for each pair (Figure adapted from www.lifeprint.com).

The fact that each of the three dimensions can be used in ASL to capture categorical distinctions may limit signers’ ability to use these dimensions gradiently. For example, prosodic features spell out the underlying types of movement of a sign in Brentari's (1998) phonological model of sign. Path features are dominated by the path node in the prosodic branch of this model and indicate the shape or direction of a movement. Signs may or may not be specified for direction, but speed is not specified in the model.9 If signers are sensitive to these phonological differences, they might treat the three dimensions differently with respect to gradient modification. In other words, they may modify signs along some dimensions but not others—in the same way that speakers can manipulate their rate of speech to describe a fast‐paced event but may not be able to manipulate voicing because it will produce a new word (e.g., buzz → bus) or render the word meaningless (e.g., buzz → puzz).

To summarize, our goal is to explore when signers creatively play with their sign forms to capture variations in meaning. As Fuks (2016) notes, “Very few studies have reviewed the interactions between entries which are part of the established SL core‐lexicon and gradient manual features in signing.” Our goal is to explore signers’ willingness to apply modifications to the movement of lexical signs, which are part of the core‐lexicon of ASL and commonly considered in the literature to be relatively immune to such modifications (Cormier et al., 2013, p. 372; Emmorey, 1999, p. 145).

We present signers with videos of pairs of motion events that vary in path, direction, or speed, and we give them either a lexical sign or a depicting sign to use in describing the event pairs. Our question is whether signers imagistically alter the forms of their lexical and depicting signs to capture variations in motion meanings. Based on previous literature, we expect that signers will gradiently modify lexical signs to capture relevant distinctions in our stimuli, but will be less willing to modify lexical signs than depicting signs not only because lexical signs are likely to be less iconic, but also because the components of a lexical sign are phonemes rather than morphemes (Supalla, 1982, 1986) or gestures (Schembri et al., 2005). We then explore whether signers are equally willing to modify each of the three dimensions of motion. If not, we will have identified restrictions on modifiability that may have a linguistic basis. Finally, we ask whether modification is influenced by the iconicity of the base sign; in particular, whether items judged to be more iconic are treated as more gradiently modifiable than items judged to be less iconic.

2. Methods

2.1. Participants

Eleven deaf participants (seven female, four male) who use ASL as their primary language were recruited from the Chicago area. The mean age of participants was 49.3 years (SD = 17.5; range: 20.2–71.7). Eight of the participants were exposed to ASL from birth and come from deaf families. The remaining three report no deaf family members (at the time of their birth). One of the three began receiving regular ASL exposure at 8 months and enrolled in a deaf signing school at age 5. The remaining two report learning ASL at approximately age 10. All listed ASL as their preferred form of communication.

Signers were paid $75 for their participation and travel. Sessions were typically10 conducted by a deaf signer fluent in ASL, and all instructions were given in ASL.

2.2. Task and stimuli

The stimulus set contained videos representing different actions (here referred to as “events”). Each event could be described in ASL by a lexical sign and also by a depicting sign. The lexical and depicting signs presented to the participants are shown in Fig. 5. Recall that depicting signs are not lexically specified for movement or path. Rather, the motion is created by the signer in the moment in a way that maps directly to the referent being depicted. This raised the question of whether the experimenter would incorporate a motion into the depicting sign when presenting it to the signer in the prompted conditions, or whether the signer would simply be shown a classifier handshape. We opted to incorporate movement into the depicting signs we presented in order to make the depicting sign condition as comparable as possible to the lexical sign condition. The movements used in the depicting sign prompts were designed to “match” the movement in the initial neutral video, and could be considered “unmarked” (e.g., a straight forward path) in the context of our neutral videos. In the majority of cases, multiple phonological parameters (including handshape and motion) distinguished the lexical sign for an event from the depicting sign for the same event; however, handshape was the only distinguishing feature for CLEAN, COMB, OPEN, PAINT, and THROW11. Video examples of the signs that were presented to participants by the experimenter are publicly available on the online repository.

Fig. 5.

Fig. 5

All target lexical and depicting signs shown to participants as prompts.

Three types of elicitations were presented to all participants in a within‐subject design. In the first elicitation, signers were asked to watch the video pairs and then describe what happened in each pair, without any guidance about which signs to use in their descriptions. In the second and third elicitations (which were counterbalanced), participants watched the same video pairs and once again were asked to describe what happened in each, but this time they were given a particular sign to incorporate into their descriptions, either a frozen lexical sign or a productive depicting sign. The data from the first elicitation were not analyzed for modification but were instead used to ensure that all participants correctly interpreted the events shown in the videos, and as a sample of each participant's signing style that could be used as a reference point during coding in case of ambiguities. The data presented in our analyses come only from the second and third elicitations.

Each participant saw videos of 14 different actions in each of the three elicitations. Each of the actions in an elicitation was presented in three event manipulations: once with an event that varied in speed, once in an event that varied in direction, and once in an event that varied in path. Each participant thus saw 126 stimuli (14 actions × 3 elicitation conditions × 3 event manipulations).

Participants were shown pairs of videos displaying two contrasting versions of the same action. One of the videos in each pair showed the action performed in such a way that it mirrored the citation form of the lexical sign for that action. For example, a common lexical sign for RUN in ASL uses both hands in L handshapes (with the thumb and index finger extended in the shape of an L), with the index finger of the nondominant hand resting against the thumb of the dominant hand. The hands move forward together in a straight line, while the index fingers flex into open‐X shapes (where the index finger is curved with the thumb still extended), again with the nondominant index finger hooking around the dominant thumb (see Fig. 6a). The event video designed to match this citation form in speed, direction, and path contained a woman running forward in a straight line across a field (Fig. 6b). The second video in the pair displayed an action that deviated from the first action; if a signer were to fully preserve the imagistic mapping of the sign to that action, they would need to modify some aspect of that sign. Each altered event was designed to potentially elicit one of three types of modifications in the target sign:

Fig. 6.

An example of the video pairs presented to participants. The two images on top display the lexical sign RUN (which the participant did not see) and the running event that mirrors this form (which the participant did see). The three images underneath display the running event sped up (left), moving in the opposite direction (middle), and moving along a different path (right). The three images below the altered events display sign forms modified to capture these alterations: speed on the left, direction in the middle, and path on the right. Participants saw the altered events, but not the modified signs.

Path—Altered events intended to elicit path modifications were designed so that the shape of the movement path would have to be modified for the sign to imagistically depict the altered event. For example, in the citation form of the sign RUN, the path of the hands is a straight line and, in the unaltered event video, the person runs in a straight line (Fig. 6a,b). The event video altered for path shows a person running in a zigzag (Fig. 6e). To imagistically depict this altered event, the signer would need to modify the movement path to a zigzag trajectory (Fig. 6h).

Direction—Altered events intended to elicit direction modifications were designed so that the direction of the movement would have to be modified for the sign to imagistically depict the altered event. For example, the event video of RUN altered for direction shows a person running from left to right, rather than right to left (Fig. 6d). To imagistically depict this altered event, the signer would need to modify the direction of the movement or the orientation of the hands and arms in the signing space (Fig. 6g).

Speed—Altered events intended to elicit speed modifications were designed so that the speed of the movement would have to be modified for the sign to imagistically depict the altered event. For example, the event video of RUN altered for speed shows a person running faster than the unaltered event (Fig. 6c). To imagistically depict this altered event, the signer would need to produce the movement more quickly than they did for the unaltered event (Fig. 6f).

Each stimulus event was acted out and video recorded (rather than being digitally altered). Pairs of videos were presented side by side, with the action that mirrored the citation form of the lexical sign on the left, and the action that varied in either speed, direction, or path on the right. Thus, each action appeared as a video‐pair three times: once for each type of alteration. Participants were instructed in ASL to watch the video on the left (the action that mirrored the lexical sign) and then the video on the right (the action that varied in either speed, direction, or path). They were then asked to describe to the experimenter what happened in the two videos. Henceforth, we will use “trial” to refer to these presentations of video pairs. Participants were given three practice trials to ensure they understood that they were expected to provide descriptions of both the first and the second video, rather than a general description such as “two videos of someone running.”

In the second and third elicitation conditions, participants were encouraged to use the target lexical sign or depicting sign they were given in their descriptions. However, they were also told that if the sign was not one they would normally use, or did not seem like an appropriate label for the event, they were free to describe the event as they normally would. Across participants, the target sign was not used in a small proportion of trials (lexical condition: 74/462, 16.0%; depicting condition: 30/462, 6.5%). These responses were considered when assessing whether the participants noticed the relevant distinctions between the videos but, because the responses do not contain the target signs, we did not count them as either modified or nonmodified signs.

Fig. 7 presents a visualization of the task structure.

Fig. 7.

Experimental task structure. Participants watch each video pair three times: first in the “unprompted elicitation round,” where participants describe the pairs with no experimenter input; a second and third time in the “lexical prompt” and the “depicting prompt” rounds (also referred to as the lexical condition and the depicting condition).

All sessions were video recorded using a digital camera on a tripod, positioned so that the participants were fully in frame while signing (from the top of their head to their lap while seated, with space on either side). Participants directed their responses to the camera or to the experimenter (seated across from them, next to or in front of the camera). There were no instances in which participants’ signs were out of view of the camera. Seven trials in the depicting condition were lost due to technical issues (the video recordings froze). In our analyses, the values for these trials were replaced with the mean modification proportion for that trial among the remaining participants, so as not to skew the distribution of the data.

Participants were told that another deaf signing participant would later watch the video of their responses and be asked to identify which videos the response referred to. These instructions were used to give participants a sense of how much detail to include in their descriptions, as well as to encourage them not to focus on whether the experimenter approved of their choice of signs. Once the task was completed, the experimenter debriefed the participants on the goal of the study.

2.2.1. Categorizing verbs

One potential limitation of our study is the lack of agreed‐upon criteria for categorizing signs as lexical or depicting (Johnston & Schembri, 1999; Schembri, 2003; Supalla, 1982, 1986; Zwitserlood, 2008, 2012). With this difficulty in mind, we developed our list of stimuli by first identifying actions that could be described by two distinct signs in ASL which would both be glossed as the same verb in English. The two signs needed to differ phonologically and in a way that was not the result of an existing morphological inflectional process (e.g., agreement). Of those verbs, we selected lexical signs that had a clear and identifiable citation form and could be found in an ASL dictionary (all are listed in both ASL‐lex, Caselli et al., 2017, and ASL SignBank, Hochgesang, Crasborn, & Lillo‐Martin, 2017). The parallel depicting sign differed from the lexical sign along at least the handshape parameter (but often along multiple parameters), participated in productive constructions, and could not be found in our dictionaries. Decisions regarding the lexical or depicting status of a sign were also informed by our deaf co‐author and a sign language linguist, both fluent in ASL. The difference in modification patterns between the lexical signs and depicting signs used in our study provides post‐hoc evidence for our division of signs into lexical and depicting signs.

2.2.2. Typicality ratings of the events shown in the stimulus videos

To ensure that our stimulus videos were good exemplars of the events used in the task, we recruited a sample of hearing, English‐speaking participants via Amazon Mechanical Turk to rate the videos. Between 28 and 30 hearing participants rated each video. Participants were shown an action video alongside the English verb that best labeled it. They were asked whether the action shown in the video exemplified that verb (yes or no) and, if yes, to rate the action for how well it exemplified the verb on a 7‐point Likert scale. These data were used to ensure: (1) that all of the actions shown in our video stimuli were good instances of the target verbs; and (2) that the variations of the three dimensions that were manipulated were equally identifiable.

2.2.3. Iconicity ratings of the lexical and depicting signs used in the study

To assess the iconicity of the lexical signs and depicting signs used in this task, we collected ratings for each target lexical sign and depicting sign (14 lexical signs + 14 depicting signs = 28 items in total). Ratings were collected in person from an additional group of 11 hearing nonsigners.12 Participants used a 1–7 Likert scale to judge how much the sign looked like what it meant. This iconicity task has been used by other research groups and elicits reliable ratings from both signing and nonsigning individuals (Caselli et al., 2017; Sevcikova Sehyr & Emmorey, 2019; Vinson, Cormier, Denmark, Schembri, & Vigliocco, 2008). In general, signers’ iconicity ratings are highly correlated with nonsigners’ ratings, and ratings from hearing nonsigners are often used as a measure of sign iconicity (Bosworth & Emmorey, 2010; Caselli et al., 2017; Sevcikova Sehyr & Emmorey, 2019). We, therefore, combined the ratings from our deaf and hearing participants. Fig. 8 presents a sample trial from the iconicity survey.

Fig. 8.

Example item from the iconicity survey.

Iconicity scores collected from in‐person participants were normalized within‐subject in order to correct for possible individual biases in Likert responses (e.g., a central tendency bias), and to permit participants’ scores to be compared on the same scale. For each data point (verb iconicity rating), we took the raw score, subtracted that individual's mean overall score, and then divided the difference by that individual's standard deviation. This within‐subject z‐scoring procedure was used in the iconicity studies on which our measure is based (Baus, Carreiras, & Emmorey, 2013; Bosworth & Emmorey, 2010; Caselli et al., 2017; Sevcikova Sehyr & Emmorey, 2019). These z‐scores will be used throughout the iconicity analyses. The online sign database ASL‐lex 2.0 reports iconicity scores for the lexical signs used in this task (collected from 28 hearing nonsigners), but not for the depicting signs. We, therefore, report the in‐person ratings collected from our participants to ensure that the ratings came from the same source for lexical and depicting signs. For our lexical signs, the in‐person iconicity ratings were highly correlated with those reported on ASL‐lex 2.0 (r = .85, p < .0001). We repeated our iconicity analyses (Section 3.3) using the ASL‐lex 2.0 ratings and found no difference in the reported patterns.
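The within-subject normalization described above amounts to a standard z-score computed per rater. A minimal sketch in Python, with hypothetical ratings (the function name and data are our own, for illustration only):

```python
from statistics import mean, stdev

def zscore_within_subject(ratings):
    """Normalize one rater's Likert ratings by subtracting that rater's
    mean rating and dividing by that rater's standard deviation."""
    m = mean(ratings)
    s = stdev(ratings)
    return [(r - m) / s for r in ratings]

# Hypothetical rater who clusters around the middle of the 1-7 scale
raw = [3, 4, 4, 5, 4]
z = zscore_within_subject(raw)  # z-scores are centered on 0 for this rater
```

This puts every rater's scores on the same scale regardless of how much of the 1–7 range that rater habitually uses.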

2.3. Coding

Video data from both the lexical and the depicting conditions were coded by the first author and an RA, both of whom are hearing and have used ASL for >10 years. The RA coder was blind to the hypotheses at the time of coding. All coding decisions were reviewed by both parties, and any disagreements were discussed with a deaf signing collaborator and a sign language linguist. Post‐hoc reliability was calculated on 10% of the data (92 trials) sampled across participants, conditions, verbs, and dimensions. Inter‐rater and intra‐rater reliability were high; the Cronbach's Alpha score when comparing Coder 1, Coder 2, and the final coding was .94 (Coder1–Coder2 α = .88; Coder1–final coding α = .96; Coder2–final coding α = .88). Responses were annotated on a sign‐by‐sign basis using the ELAN software program (from the Max Planck Institute for Psycholinguistics), which enables annotations to be time‐aligned to the relevant video frames (Wittenburg, Brugman, Russel, Klassmann, & Sloetjes, 2006).

All participants structured their responses by first describing the video on the left (which matched the sign that described it, the unaltered event) and then describing the video on the right (which varied from the first event in speed, direction, or path, the altered event). The transition from describing the first to the second video was typically marked with a body shift and/or use of labels such as “first” and “second” or “left video” and “right video.” Dividing trials into descriptions of the unaltered versus altered events did not pose a problem for the coders, who agreed on all trial boundaries. Further details about the coding system can be found in the coding manual available on the online repository.

We identified the portion of each response that described the altered event (the second video in the pair). For each participant's description, we counted the number of times the target sign (the sign presented to the signer at the start of the trial) was modified, that is, produced with its speed, direction, or path changed to gradiently depict the altered event. To calculate the proportion of modifications for a trial, we divided this count by the total number of times the target sign was produced (modified or unmodified) in the description of the altered event. Instances of the target verb in the description of the unaltered event (the first video in the pair) were not included in this calculation. For example, for a trial in which the THROW verb is manipulated for path (throwing underhand) in the lexical condition, one signer produced the following description (approximately translated):

The first video shows a woman throwing a ball normally. The second video shows the same woman throwing[unmodified] a ball but throwing[modified] it underhand.

To calculate the score for this response, we focus on the description of the altered event (the second sentence in this example) and divide the number of times the target sign is modified (once) by the total number of times it appears (twice), yielding a score of 0.5 for this trial.
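The per-trial score can be sketched as a simple proportion over the target-sign tokens in the altered-event description (a minimal illustration; representing each token as a modified/unmodified flag is our own device, not the authors' coding scheme):

```python
def modification_proportion(tokens):
    """Proportion of target-sign tokens (in the altered-event description)
    that were gradiently modified. Each token is True (modified) or
    False (unmodified)."""
    if not tokens:
        return None  # target sign never produced; trial not scored
    return sum(tokens) / len(tokens)

# The THROW example above: one unmodified, then one modified production
score = modification_proportion([False, True])  # -> 0.5
```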

We opted for this proportion of total uses measure over a binary coding of modification as either present or absent in order to capture the fact that signers may feel a modification is “licensed” or allowed only if an unmodified version of the sign is also produced; the need to produce an unmodified form along with a modified form may signal a reduced willingness to modify. In other words, we wanted a measure that allowed for the possibility that some modifications could “stand on their own,” whereas others might not. This possibility may be particularly relevant for modifications that are very low‐frequency or that impact comprehensibility. In addition, the proportion measure allows us to take advantage of all of the data we collected. As a result, the data presented here are based on the proportion measure. However, using the binary coding measure produces a nearly identical pattern of results (an R markdown file, which includes the results calculated in terms of the binary measure, is available on the OSF repository).

To determine whether a lexical sign was modified, coders first assessed whether the dimension of interest (speed, direction, path) was captured in the sign. If so, the coders then asked whether the sign was identical to the citation form in all respects except the modified dimension. For example, the stimulus for the altered path for the lexical sign CLEAN contained a circular movement (as opposed to the back and forth movement in the unaltered event). Lexical signs were coded as modifications only if the signer introduced a circular movement into the citation form for CLEAN and maintained all of the other aspects of the sign form (i.e., an open‐B handshape and a palm‐up nondominant hand). If the signer used a handling handshape, or only one hand without a base hand (which in this case would make the sign more like the actual event), the response would be coded as a depicting construction rather than a modified lexical sign.

For depicting signs, recall that we incorporated movement into each sign presented to the participants even though depicting verbs are not specified for movement or path. The movement used in the depicting signs presented as stimuli mirrored the movements in the unaltered target events (see Fig. 5, and the videos available in the online repository, for information on the movements used in the depicting prompts). A depicting sign was considered not modified if the signer copied the form of the sign presented for the unaltered event. For example, if the signer produced the depicting sign for RUN with a straight movement (the movement in the sign presented to them for the unaltered event) to describe running in a zigzag, the sign would be considered unmodified. If the signer produced it with a zigzag movement comparable to the movement in the altered video, the depicting sign would be considered modified.

For speed trials, we judged the signs describing the altered event as either faster or slower than the signs describing the unaltered event. Since changes in speed can be difficult to measure, we performed a post‐hoc check of our speed coding. For each speed trial, we calculated the absolute value of the difference in duration between the sign describing the unaltered event and the sign describing the event altered for speed. This difference was significantly larger for signs that we coded as modified for speed (M = 532.46 ms) than for signs that we coded as unmodified for speed (M = 276.6 ms, t(89) = 4.10, p < .0001).13 Our speed coding thus has face validity.
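The post-hoc speed check reduces to comparing absolute duration differences between paired productions. A minimal sketch with hypothetical durations in milliseconds (the data and function names are invented for illustration):

```python
def abs_duration_diff_ms(unaltered_ms, altered_ms):
    """Absolute difference in duration between the sign describing the
    unaltered event and the sign describing the speed-altered event."""
    return abs(unaltered_ms - altered_ms)

def mean_diff(trials):
    """Mean absolute duration difference over a set of trials,
    each given as an (unaltered_ms, altered_ms) pair."""
    return sum(abs_duration_diff_ms(u, a) for u, a in trials) / len(trials)

# Hypothetical trials coded as modified for speed vs. unmodified
modified = [(900, 400), (850, 300)]    # large duration changes
unmodified = [(900, 800), (850, 700)]  # small duration changes
```

If the coding is valid, the mean difference should be larger for the trials coded as modified, which is the pattern the t-test above confirms.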

2.4. Analysis

We first compared the proportion of modification in lexical signs to the proportion of modification in depicting signs, and explored whether the three manipulated dimensions (speed, direction, path) were modified to the same degree. We conducted a two‐way repeated‐measures ANOVA, testing for a main effect of condition (lexical signs vs. depicting signs), dimension (speed, direction, path), as well as an interaction between the two (Section 3.1). We used two‐tailed t‐tests to determine whether and how the three dimensions differ in their modifiability within each condition (Section 3.2). We then asked whether the iconicity of the base sign is correlated with gradient modification. We tested for correlations between the iconicity ratings for a given sign's citation form and the proportion of modification for that sign across participants (Section 3.3). We conducted this analysis across all three dimensions, and within each individual dimension. Lastly, we analyzed the linguistic context in which modifications occur to investigate how signers package and distribute information across an utterance (Section 3.4).

3. Results

3.1. Evaluating the overall success of the task

To assess whether and how signing participants captured the three dimensions manipulated in this task, we first need to know that participants reliably noticed the difference between the unaltered and altered videos in each trial and saw this contrast as relevant to include in their descriptions. Participants included information on the manipulated dimension in almost all of their responses (98.5%, 910/924). Thus, the stimuli used in the task captured the manipulated dimensions well, and participants understood this information to be relevant to include in their descriptions.

A second important check on whether our data accurately reflect signers’ intuitions regarding the modifiability of our target signs is assessing the extent to which our target signs were reasonable labels for these events and would be spontaneously used by our signers without prompting. Frequency‐of‐use data from ASL‐lex 2.014 indicate that our target lexical signs had a mean frequency rating of 4.75 on a 7‐point Likert scale (range = 3–6.1), indicating that our target signs were not particularly low frequency. Moreover, an analysis of the responses elicited in our unprompted condition (where signers had not yet been shown any of our target signs and were simply asked to freely describe our stimulus pairs) indicates that signers frequently used both our lexical and depicting target signs as their unprompted labels for the events. Looking by participant, the median number of target signs (out of 14) produced unprompted was 11 for lexical targets and 12 for depicting targets. Looking by verb, the median number of participants (out of 11) to spontaneously produce a given target sign at least once was 9 for both lexical and depicting targets. Each of our target signs was produced spontaneously by multiple signers, and each of our signers spontaneously produced many of our target signs. Moreover, in the prompted conditions, participants failed to use the verb they were given by the experimenter in only 16.0% (74/462) of trials in the lexical condition and 6.5% (30/462) of trials in the depicting condition. Ten of our 11 participants used the prompted option at least once, suggesting that our prompted forms were not unnatural for the signers.

Taken together, these results indicate that, despite the inherent unnaturalness of an experimental context, signers were using familiar signs in contexts that felt appropriate to them.

3.2. How does modification in lexical signs compare to modification in depicting signs?

The mean proportion of signs that were modified was 0.40 (SD = 0.45) in the lexical condition and 0.90 (SD = 0.19) in the depicting condition. We used a two‐way repeated‐measures ANOVA to test for main effects of condition (lexical sign/depicting sign) and dimension (speed, direction, path), as well as an interaction between the two. There was a significant main effect of condition, F(1) = 44.21, p < .001: signers modified signs significantly more often in the depicting condition than in the lexical condition.

There was also a significant main effect of dimension, F(2) = 12.72, p < .001, and a significant interaction between condition and dimension, F(2) = 4.13, p = .032. We examine the effect of dimension and the interaction between condition and dimension in the following sections.

3.3. Are the three motion dimensions modified to different degrees?

Participants modified all three dimensions frequently in the depicting condition; the difference between the most modified dimension (speed) and the least (path) was only 0.08. In contrast, participants modified the three dimensions at very different proportions in the lexical condition; the difference between the most modified dimension (speed) and the least (path) was 0.30. Table 1 presents the means and standard deviations for proportion of modification, broken down by condition and dimension.

Table 1.

How often signs were iconically modified along each dimension

                   Lexical signs         Depicting signs
Altered speed      M = 0.55, SD = 0.47   M = 0.93, SD = 0.19
Altered direction  M = 0.39, SD = 0.45   M = 0.92, SD = 0.21
Altered path       M = 0.25, SD = 0.39   M = 0.85, SD = 0.25

We used post‐hoc two‐tailed t‐tests to assess whether the proportion of modification differed significantly between pairs of dimensions. In the lexical condition, the proportion of modification for path (M = 0.25) and for speed (M = 0.55) differed significantly (t(10) = −4.28, p = .002) after a Bonferroni correction for multiple comparisons at an alpha of 0.05. Differences between path–direction and direction–speed did not survive correction, nor did differences between any two dimensions in the depicting condition (although the high proportion of modification for depicting signs may have produced a ceiling effect, obscuring any differences between dimensions in this condition). The boxplot in Fig. 9 displays the main effect of condition on proportion of modification, as well as the dimension effect in the lexical condition.
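The Bonferroni threshold here is simply alpha divided by the number of pairwise comparisons (three dimension pairs per condition at an alpha of 0.05, as stated above). A minimal sketch (the function name is our own):

```python
def survives_bonferroni(p, alpha=0.05, n_comparisons=3):
    """A pairwise comparison survives Bonferroni correction if its
    p-value falls below alpha divided by the number of comparisons."""
    return p < alpha / n_comparisons

# Path vs. speed in the lexical condition (p = .002) falls below the
# corrected threshold of roughly .0167; a p-value of .03 would not.
```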

Fig. 9.

Proportion of modification in the lexical and depicting conditions, broken down by dimension.

We also examined proportion of modification within individual signers to determine whether individuals differed in how likely they were to gradiently modify their signs. At the participant‐level, we found a large range of modification rates for lexical signs (range = 0.19−0.88 in overall proportion of modification). However, all signers produced modifications for all three dimensions at least once. Modification was more likely for speed than for path at the individual level, with all but one participant modifying speed more often than path (Z = 10, p = .01, sign test). The relationship between direction and the other two parameters was less consistent. The majority of signers modified speed more than direction (7 out of 11), and direction more than path (8 out of 11), but neither result was significant (Z = 8, p = .23; Z = 7, p = .55).

We also examined verb differences in rate of modification in lexical verbs (range = 0.82−0.98 in overall mean modification). Modification was found in all lexical verbs, and all verbs but one (MEASURE) were modified along each of the three dimensions. As in our subject‐level effects, there is evidence for our dimension pattern at the verb‐level: Speed was modified more often than path in 12 of the 14 verbs (Z = 12, p = .01, sign test). Modification of direction fell in between path and speed for 10 of the 14 verbs, but this pattern was not significant (Z = 10, p = .18; Z = 10, p = .18).

The target signs we used were of multiple types (see table in the Supplementary Materials available on the OSF repository): one‐handed signs, which are formed using a single hand (e.g., THROW); symmetric two‐handed signs, in which both hands move in the same way (e.g., OPEN); and asymmetric two‐handed signs, in which the dominant hand is active while the nondominant hand is held static (e.g., KICK). These signs may also be body‐anchored, produced on the signer's body (e.g., HUG), or produced in neutral space at chest level in front of the signer (e.g., CLEAN). Lastly, signs may be directional, moving in the direction of one or more of their arguments (e.g., CHASE), or nondirectional (e.g., MEASURE). Each of these types has linguistic constraints that could affect how, and how often, signs are modified (see Brentari, 1998; Mandel, 1981; Napoli & Wu, 2003; van der Hulst, 1993). For example, two‐handed symmetric signs are subject to the Symmetry Constraint, which specifies that both hands in a symmetric two‐handed sign must have the same or a mirrored configuration, orientation, and movement (Battison, 1974, 1978); signs that reflect across the midsagittal plane are the simplest to perform from a motor‐coordination point of view (Ferrara & Napoli, 2019; Napoli & Wu, 2003). Modifiability across the types of signs ranged from .48 to .86 (M = 0.62, SD = 0.16). However, the important result for our question is that, for every sign type, signers were more willing to modify for speed than for path: body‐anchored signs (.85 speed, .62 path), neutral space signs (.73 speed, .57 path), one‐handed signs (.88 speed, .77 path), two‐handed signs (.71 speed, .52 path), symmetric signs (.70 speed, .50 path), asymmetric signs (.72 speed, .54 path), directional signs (.63 speed, .34 path), and nondirectional signs (.51 speed, .21 path). Sign type, therefore, does not interact with our basic finding regarding dimension.

3.4. Does iconicity influence modifiability?

We now ask whether signers are more willing to iconically modify signs rated as iconic than signs rated as noniconic. Fig. 10 is a scatterplot comparing iconicity and modification rate for the 28 signs in this study (each of the 14 verbs represented both as a lexical sign and as a depicting sign), overlaid with each condition's best‐fit line. Note that each verb label appears twice: once for the lexical condition (circles) and once for the depicting condition (triangles).

Fig. 10.

Correlation between iconicity and proportion of modification for verbs in the lexical and depicting conditions. The open triangles represent the depicting condition; the filled circles represent the lexical condition.

There was a significant correlation between iconicity and modification proportion for the lexical signs (r = .64, p = .01) but not for the depicting signs (r = −.08, p = .8). The correlation in the lexical signs remains significant after removing the high leverage point RUN. The iconicity scores shown here represent the ratings collected from our participants, as described in Section 2.2.3. However, when we repeat this analysis using the ASL‐Lex 2.0 iconicity ratings for the lexical signs (our depicting signs are not available on ASL‐Lex 2.0), the correlation remains significant (r = .76, p = .002).

Because the different sign types are unequally represented in our data (with some having very few observations), it is difficult to interpret the presence or absence of our iconicity effect within these types. With that caveat, when we look at the relationship between iconicity and modification within the different sign types, we find that the correlation between iconicity and modification holds for signs produced in neutral space (r = .47, p = .02), for two‐handed signs (r = .51, p = .01), and for nondirectional signs (r = .89, p < .001), and is marginally significant for symmetrical signs (r = .57, p = .05). The correlation is in the predicted direction but not significant for asymmetrical signs (r = .40, p = .25) and for directional signs (r = .71, p = .29), and it reverses direction (and is also not significant) for body‐anchored signs (r = −.70, p = .30) and for one‐handed signs (r = −.29, p = .57). The latter pattern is difficult to interpret, however, as there are only four body‐anchored signs and six one‐handed signs.

Each dot in Fig. 10 represents a sign's modification rate averaged across its three dimension manipulations. We next asked whether the correlation with iconicity is present in all three dimensions in the lexical condition. Fig. 11 presents the data.

Fig. 11.

Correlation between iconicity and modification rate in the lexical condition, broken down by dimension.

Although iconicity is related to modification in all three dimensions, the strength of the relation differs across the dimensions. There is a significant relation of moderate strength between iconicity and direction (r = .56, p = .035) and between iconicity and speed (r = .59, p = .03). However, the relationship to iconicity is weaker for path, and is not significant (r = .42, p = .13).

3.5. What additional productions accompany modified and unmodified forms?

The final analyses examine in more detail how signers package the relevant information in their utterances. Modification of a sign may occur on its own or alongside other types of structures. To explore this question, we characterized responses as belonging to one of several categories. To exemplify the categories, we consider a trial from the lexical condition for the verb CLEAN intended to elicit a path modification. In this trial, the unaltered video shows a person cleaning a surface side‐to‐side in a horizontal motion, which matches the movement in the lexical sign CLEAN (see Fig. 12, right image). The altered video manipulates the path by showing a person cleaning using a circular motion. Fig. 12 (left image) displays a sign that has been modified to capture this circular cleaning motion.

Fig. 12.

Visualizations of the modified (left) and unmodified (right) forms for the verb CLEAN. The unmodified form is the citation form for CLEAN found in ASL dictionaries.

The target verb can appear in its unmodified form or in a modified form.15 A modified or unmodified target sign can also appear with additional descriptions of the manipulated dimension (e.g., with an adjective/adverb “in a circle” or with a classifier construction displaying the circular motion). Fig. 13 presents the unmodified form with and without additional description (right column) and the modified form with and without additional description (left column). Note that the top right cell, in which the unmodified sign is produced on its own, is underinformative as it fails to mention the manipulated dimension. As mentioned at the beginning of the results section, underinformative responses were rare (∼ 1.5% of the data). Responses falling into the bottom left cell provide information about the manipulated dimension in both the target verb and the additional descriptions; the dimension of change is thus marked redundantly.

Fig. 13.

Fig. 13

Example responses to the CLEAN videos. The column on the left displays the modified target sign, and the column on the right displays the unmodified target sign. Note that responses in the bottom left cell are redundant in that they include the relevant contrast information more than once; responses in the top right cell are underinformative in that they do not comment on the relevant contrast.

When the target sign was not modified, participants expressed the relevant difference between the unaltered and altered stimuli using a variety of structures, including adjectives or adverbial modifiers, such as FAST, SLOW, LEFT, RIGHT, and so on; fingerspelled words, such as Z‐I‐G‐Z‐A‐G; and classifier constructions. Example 6 shows an unmodified lexical sign accompanied by additional descriptions (the lower cell on the right in Fig. 13). For examples 6–10, the participant's first utterance (describing the initial unaltered video) and their second (describing the altered video) are listed in sequential bullet points.

  • 6. An unmodified lexical sign with additional descriptions:

  • RUN[lexical sign citation form] CASUAL

  • RUN[lexical sign citation form] FAST

Example 7 presents a response with a modified lexical sign with no additional descriptions (the top cell on the left in Fig. 13).

  • 7. A modified lexical sign without additional descriptions:

  • PERSON BALL THROW[lexical sign citation form]

  • PERSON BALL THROW[lexical sign modified for path].

Example 8 presents a response containing both a modified lexical sign and additional descriptions (the bottom cell on the left in Fig. 13). The signer produced an unmodified lexical sign RUN in the first utterance, followed by an utterance containing RUN modified for speed along with the adverb FASTER; in other words, a redundant response containing an internally modified lexical verb along with an external adverb conveying the same information.

  • 8. A modified lexical sign with additional descriptions:

  • P‐A‐R‐K[fingerspelled] WOMAN RUN[lexical sign citation form]

  • WOMAN RUN[lexical sign modified for speed] FASTER

Depicting signs were rarely left unmodified, but they were produced both with and without additional descriptions. Example 9 presents a modified depicting sign without additional descriptions. The signer produced an unmodified depicting sign for chase in the first utterance, followed by a second utterance in which the depicting sign was modified for direction, with no additional descriptions.

  • 9. A modified depicting sign without additional descriptions:

  • CHASE[depicting sign unmodified]

  • OTHER16 CHASE[depicting sign modified for direction]

Example 10 shows a redundant response for a depicting sign—a modified depicting sign accompanied by additional descriptions. In the first utterance, the signer uses a depicting sign with a handling handshape to represent the action; in the second utterance, the signer produces the depicting sign markedly more slowly to depict the change in speed, while also marking the speed change with the adjective SLOW.

  • 10. A modified depicting sign with additional descriptions:

  • CHASE[depicting sign unmodified]

  • CHASE[depicting sign modified for speed] SLOW.

The data that fall into the four categories displayed in Fig. 13 are plotted in Fig. 14 for lexical signs (top graphs, Fig. 14a,b) and depicting signs (bottom graphs, Fig. 14c,d). Modified signs are in the left graphs (Fig. 14a,c), and unmodified signs are in the right graphs (Fig. 14b,d); target signs without description are in black, and target signs with description are in gray. The data are divided into the three manipulated dimensions: path, direction, and speed. The proportions do not sum to 1.00 because responses that contained both a modified and an unmodified form, or failed to include the target sign at all, are not displayed in the graph.

Fig. 14.

Fig. 14

Distribution of response categories in the lexical condition (top) and in the depicting condition (bottom), classified according to the dimension modified and whether the target sign was alone or accompanied by additional descriptions.

The basic phenomenon for lexical signs can be seen in Fig. 14a,b—modification (with or without additional descriptions) is most likely on trials in which the event's speed was manipulated, and least likely on trials in which the event's path was manipulated (the bars increase in height in the left graph and decrease in height in the right graph).

Looking first at the unmodified forms in the lexical condition (Fig. 14b), we find that the majority of these unmodified signs were produced with descriptions (gray portions) for all three manipulated dimensions. This result makes it clear that signers noticed the differences between the pairs of videos and used additional signs to describe those differences if they did not encode them in the target sign. Looking at the modified forms in the lexical condition (Fig. 14a), we find that additional descriptions (indicated in gray in each bar) were proportionally least likely for the path manipulation, more likely for direction manipulations, and most likely for speed manipulations. Redundancy was thus least likely for path manipulations and most likely for speed.

Fig. 14c,d (bottom) displays comparable data for the depicting sign condition. Target depicting signs were modified (i.e., mapped to their referent in an analog way) very frequently. There was also little variation among the three manipulated dimensions, with the exception that the gray sections (target sign plus description) increased from path to direction to speed, as in lexical signs. In other words, redundancy was again least likely for path manipulations and most likely for speed. Although the differences are not significant in either lexical or depicting signs, redundancy (i.e., within‐sign modification accompanied by additional modifiers) was more frequent for speed manipulations than for direction manipulations, and more frequent for direction manipulations than for path manipulations.

4. Discussion

Okrent (2002) identified restrictions on how gradience combines with categorical forms when both are expressed in the oral modality (i.e., in speech and vocal gesture). We build on this work by investigating whether there are restrictions on how gradience combines with categorical forms when both are produced in the manual modality. We explored how gradience is used to imagistically modify meaning in ASL, which contains two types of signs that vary in productivity: lexical signs are frozen forms, composed of categorical phonemic movements that are not meaningful when isolated from the sign; depicting signs are productive forms, composed of movements that analogically map onto the events they describe (these movements are considered morphemes by some, Supalla, 1986, and gestures by others, Schembri et al., 2005). This feature of ASL thus allows us to examine how overlaying gradience onto a categorical form to depict meaning compares to overlaying it onto an already‐gradient form. We found that signers gradiently modify the forms of both types of signs to enhance meaning but do so more frequently for depicting signs than for lexical signs.

Okrent (2002) also observed that a parameter in a spoken language will not be freely used to gradiently modify a meaning if that parameter has phonemic value, that is, if it is contrastive in the language (e.g., the use of pitch in a tonal language). To explore this issue in ASL, we examined signers’ willingness to gradiently alter the form of lexical and depicting signs to capture three dimensions of a motion event—speed, direction, and path. Each of these dimensions has the potential to be contrastive in ASL, but they differ in the extent to which they are phonologically specified. The dimensions thus allow us to explore language‐specific restrictions on gradient modification in ASL. We found that signers capture variations in the speed, direction, and path of an event equally often in their depicting signs but are more likely to gradiently modify their lexical signs to capture variations in speed than variations in path.

Although research suggests that iconic signs and words are more prone to gradient modification than noniconic forms, previous work, for the most part, has not distinguished between modifications that imagistically capture changes in meaning and modifications that serve other functions (e.g., increasing salience). Here, we focused uniquely on gradient modifications that imagistically capture variations in meaning, and we asked whether the iconicity of the sign affects how willing signers are to gradiently modify that sign to alter its meaning. We found an effect of sign iconicity on the likelihood of gradient modification for lexical signs—the more iconic a lexical sign, the more likely it is to be gradiently modified to capture changes in meaning. Modification in depicting signs was not influenced by iconicity, although this null effect may be because the proportion of gradient modification in depicting signs was close to ceiling. In addition, iconicity had an impact on how often a lexical sign was modified to capture speed and direction but was not significantly related to how often the sign was modified to capture path. The limitations on how likely a lexical sign is to be gradiently modified for path cannot be fully explained by the sign's iconicity.

We first discuss the impact of iconicity on gradient modification in the manual modality, and then turn to the limitations on gradient modification in lexical signs.

4.1. Iconicity affects gradient modification

The difference in how often lexical signs versus depicting signs are gradiently modified could reflect the fact that depicting signs are, in general, highly iconic, whereas lexical signs vary in their degree of iconicity, with many lexical signs being completely arbitrary. However, our data suggest a more nuanced account.

The depicting signs and lexical signs we used in our task occupy the same range on an iconicity scale (see the x‐axis of Fig. 10). Nevertheless, depicting signs were consistently modified, independent of their iconicity rating. In contrast, lexical signs were more likely to be modified the higher their iconicity rating. In addition, within lexical signs, iconicity interacted with the physical dimension that the modification captures: Gradient modification was positively and significantly correlated with iconicity for speed and for direction, but not for path (see Fig. 11). Again, the influence of iconicity appears to be subject to limitations, in this case, limitations based on the dimension of change. Taken together, these patterns suggest that the role iconicity plays in a sign language is at least partially constrained by features of that language.

The relationship between iconicity and gradient modification that we report here builds on previous research on both spoken and signed languages showing that iconic words/signs are particularly prone to gradient modification. For example, speakers often modify the forms of ideophones (which, by definition, are modified in iconic ways) using gradient processes such as stem repetition, partial multiplication, mora augmentation, vowel lengthening, and gemination; these processes are applied less frequently to noniconic or “plain” words than to iconic words (Akita, 2009; Dingemanse & Akita, 2016; Hamano, 1988; Nasu, 2002). As another example, adults gradiently modify words (e.g., changing pitch, duration, repetition, vocal quality, etc.) and signs (e.g., changing size, length, repetition, etc.) used with children more often when the forms are iconic than noniconic (Laing et al., 2017; Perniss, Lu, Morgan, & Vigliocco, 2018; Sundberg & Klintfors, 2009). However, the modifications described in this previous work are often not imagistic. For example, much of the work on ideophones focuses on the emphatic function of these expressive features, which serve to enhance the intensity or emotional power of ideophones, rather than to imagistically depict changes to meaning (Bolinger, 1986; Dingemanse, 2017; Dingemanse & Akita, 2016; see Akita, 2020, for discussion). Similarly, findings from child‐directed language illustrate how gradient modification when applied to onomatopoeia and iconic signs makes these words/signs more salient to children; but the changes do not necessarily imagistically alter the meaning of the sign.

Our study focused on gradient modification designed to change the meaning of a sign. Recall that changes to the sign form that did not imagistically depict the altered meaning (e.g., changing the speed, duration, size, direction, etc. of a target sign in a trial where path was the relevant contrast) were not counted as gradient modifications in our data. We found that the more iconic a sign is, the more likely signers were to apply gradient modification to that sign's form to capture changes in its meaning. Our findings thus show that, as in spoken language (cf. Akita, 2020), gradient modification designed to elaborate the meaning of a sign is also preferentially applied to iconic forms in sign language.

Much of the work on gradient modification in spoken language draws a categorical distinction between iconic and noniconic words (i.e., onomatopoetic/ideophonic vs. non‐onomatopoetic/non‐ideophonic words). Our work here suggests that a scalar view of iconicity may allow for a more nuanced analysis of how gradient modification works in language—an approach that has been adopted by many sign language researchers (e.g., Caselli et al., 2017; Thompson, Vinson, Woll, & Vigliocco, 2012; D. Vinson, Thompson, Skinner, & Vigliocco, 2015; D. P. Vinson et al., 2008), as well as by some researchers in spoken language (e.g., Dingemanse & Thompson, 2020; Perlman, Little, Thompson, & Thompson, 2018; Perry, Perlman, & Lupyan, 2015; Winter, Perlman, Perry, & Lupyan, 2017).17

4.2. Lexical signs are gradiently modifiable, but the modifications have limits

Gradient depiction can represent meanings that are difficult to encode in a categorical form (Bolinger, 1986; Kendon, 1980; McNeill, 1992) and can, as a result, enhance and enrich messages transmitted through categorical units in language (Fuks, 2014). The task design we used in our study emphasized a single difference in movement between otherwise identical scenes and prompted signers with a specific sign to use in their descriptions. Our goal was to create contexts where gradiently modifying a sign to depict a subtle variation in meaning would be an efficient and effective way to convey the variation. The fact that the signers in our study gradiently modified movement in depicting signs (even in the initial round of unprompted trials where they used whatever verb they wanted) indicates that our design was successful—the signers found gradient depiction to be an appropriate and useful way of capturing the events in our stimuli. However, they took advantage of this strategy much less frequently in their lexical signs, often opting to capture the distinctions by adding other signs and phrases. In addition to using gradient modification more often in depicting signs than in lexical signs, signers also used the device selectively for different dimensions of motion in lexical signs but not in depicting signs: They were more willing to gradiently modify their lexical signs to capture changes in the speed of an event than to capture changes in the path of the same event (changes in direction fell in between path and speed).

Our task was designed to provide contexts in which signers might opt to capture meaning distinctions between events through gradient modification of the target sign form. However, signers were free to express those distinctions using whatever constructions they preferred. For example, when distinguishing between running and running fast in the lexical condition, a signer could produce the target sign RUN and express the difference in speed via the adjective FASTER. In instances where the target sign RUN is not modified for speed, the speed information is expressed only in the use of FASTER; if the target sign is modified for speed, the information is expressed "redundantly" (via the modification as well as the adjective). Although the differences are not significant in either lexical or depicting signs, redundancy (i.e., within‐sign modification accompanied by additional modifiers) was more frequent for speed manipulations than for direction manipulations, and more frequent for direction manipulations than for path manipulations. Signers might produce external modifiers alongside a sign that already has internal modification because they are not confident that their addressee will detect their within‐sign modifications—the additional markers increase the chances that the addressee will grasp the intended message. Internal speed modifications may be harder for addressees to see than internal direction modifications, which, in turn, are harder for addressees to see than internal path modifications. The fact that, in both depicting and lexical signs, external modifications are added to already‐modified signs more often for speed and direction than for path lends weight to this pragmatic explanation.

Our data are thus consistent with previous findings showing that gradient modification can occur not only in depicting signs, but also in lexical signs. Our findings take the literature one step further by showing that gradient modification in lexical signs has systematic limitations.

What our findings do not (and cannot) address is whether the limitations we have found on gradient modification in lexical signs are linguistic in nature. On one hand, signers may treat the dimensions of path and speed in a lexical sign as differentially modifiable because of the phonological or morphological rules and patterns of ASL. The fact that signers are less likely to gradiently alter the path of a sign than its speed to capture details of a described event supports Okrent's (2002) claim that features that are contrastive in a language (like path in ASL) will be more restricted in how often they are gradiently modified than features that are not contrastive in the language (like speed in ASL). Although each of our dimensions of interest can be used contrastively in ASL, path maps most closely onto what linguists refer to as the "movement" parameter—it is phonologically specified and frequently distinguishes minimal pairs of signs. The speed of movement, on the other hand, although potentially contrastive in ASL (see Fig. 2), is not considered phonologically specified. Thus, the degree to which ASL signers modify path and speed may mirror the extent to which each dimension is phonologically (cf. Brentari, 1998) or perhaps morphologically specified. If so, whether an aspect of a sign can, or cannot, be gradiently modified may be linguistic knowledge acquired through learning the language.

On the other hand, the patterns we have found may reflect nonlinguistic pressures that would be there even if the participant did not know ASL. For example, signs that reflect across the midsagittal plane are the simplest to perform from a motor‐coordination point of view (Napoli & Wu, 2003; Ferrara & Napoli, 2019), a finding that is supported by work on general hand movement (Kelso, Southard, & Goodman, 1979). Path modifications might, therefore, be more phonotactically complex than speed modifications because of biomechanical influences on the realization of these modifications. If so, one would not need to know ASL to modify speed more often than path.

The best way to determine whether the limitations we have identified are linguistic is to ask individuals who do not know ASL to describe the videos in our study using signs that we give them. There is a high degree of similarity between signers' and nonsigners' judgments of the perceived iconicity of a sign, suggesting that the form‐to‐meaning relationship in iconic signs is at least partially accessible to those who do not know the sign language. Nonsigners may display the same patterns we have found in our signers, but even here knowledge of the language may intrude—subtle differences in the perception of iconicity may be mediated through linguistic knowledge. For example, when shown examples of signs from their own sign language and a foreign sign language, signers consistently rate signs from their own sign language as more iconic, suggesting that perception of iconicity is mediated by a signer's known mappings of form and meaning (Brentari, 1998; Mandel, 1981; Napoli & Wu, 2003; van der Hulst, 1993). In addition, although signers' and nonsigners' iconicity ratings are highly correlated, the groups differ in their ratings of certain subclasses of signs. Signers tend to rate verbs and nouns as equally iconic, but nonsigners rate verbs as more iconic than nouns. Similarly, only signers are sensitive to the distinction between one‐ and two‐handed signs, rating the former as less iconic than the latter; nonsigners rate one‐ and two‐handed signs as equally iconic (Sevcikova Sehyr & Emmorey, 2019).

The next step in this line of research will be to explore whether the limitations on gradient modification in lexical signs are linguistic in nature. By collecting data from hearing nonsigners on the task we gave to signers and providing them with the same lexical and depicting signs, we will be able to test whether the patterns found in our data are language‐specific effects. It is likely that hearing nonsigners will be influenced by the iconicity of the signs and modify iconic signs more frequently than noniconic signs. However, if the dimension effect we observed is the result of linguistic knowledge—specifically, knowing that path is gradiently modified less often than direction or speed—then nonsigners should not display this pattern. Similarly, if knowing the linguistic status of a sign as either lexical or depicting is what determines the overall difference in gradient modification between the two, nonsigners should not show this condition effect. If, on the other hand, what distinguishes these two types of signs is a feature of the forms themselves—either iconicity or a feature such as biomechanical complexity—nonsigners should treat these two types of signs differently in terms of how often they are modified, just as signers do.

In sum, our findings show that gradiently modifying a lexical sign for speed is far more likely in ASL than gradiently modifying a sign for path. In other words, gradient patterns are not always freely applied but rather are systematically restricted as a function of the movement dimension involved. It is this systematic restriction that suggests we cannot ignore the distinction between categorical linguistic units and gradient gestures. Studying hearing nonsigners will allow us to investigate the source of the knowledge underlying gradient modification, providing insights into the linguistic nature of this phenomenon.

Our findings on gradient modification in sign language open the door to further investigations of gradient modification in spoken language. We know that gradient modification in a spoken language is subject to the phonotactic regularities in that language (e.g., Okrent, 2002). Having found that gradient modification of movement dimensions is more restricted in lexical signs than in depicting signs, we can now ask whether the same holds true of spoken language. For example, Akita (2020) lists a number of expressive features (e.g., partial multiplication, vowel lengthening, mora augmentation, prosodic foregrounding) in Japanese that can be applied to the ideophone guruQ to modify its meaning, yielding meanings such as "turning around and around," "turning around energetically," "turning around quietly," "making a long turn," and so on (see example 5). Are these gradient modifications possible in plain words in Japanese? If so, are speakers equally willing to apply the modifications to plain words (akin to lexical signs in our study) and ideophones (akin to depicting signs in our study)? Our findings set the stage for exploring the role that gradient modification plays in language, signed or spoken. We can no longer relegate gradient modification to the sidelines. It is time to figure out how gradient depiction and categorical description work together to create human language.

5. Conclusion

Our study is the first systematic exploration of the degree to which lexical and depicting signs can be gradiently modified to enhance meaning. Our data provide evidence that signers do indeed play with the forms of their lexical signs, but (unlike how they play with depicting signs) they do so selectively. Depicting signs are highly modifiable without any evidence of constraint in our data (although some restrictions on the production of depicting signs have been identified; Emmorey & Herzig, 2003; Liddell & Johnson, 1987). In contrast, gradient modification of lexical signs is influenced by both the iconicity of the sign (the more iconic a lexical sign, the more likely it is to be gradiently modified) and the movement dimension that is modified (path is gradiently modified infrequently, direction more often, and speed most often).

We have created a paradigm to study gradient modification in sign language and found that some signs are more flexible than others, and that different aspects of a sign can be modified more or less often. Previous work has explored gradient depiction in depicting signs (Cormier et al., 2013; Duncan, 2005; Emmorey & Herzig, 2003; Fuks, 2014, 2016; Liddell, 2003; Lu & Goldin‐Meadow, 2018), but little research has focused on how gradient modification is realized in lexical signs (an exception is Fuks, 2014, 2016) and whether its application is restricted. Our data suggest an account of gradient modification that integrates both linguistic and nonlinguistic knowledge—which dimension is likely to be modified may depend on knowledge of the language, but the ability to use imagistic resources to gradiently modify signs may arise from world experience with action and an ability to access structural analogy beyond language. Our ongoing work using nonsigners in a silent gesture paradigm identical to the paradigm we use here will shed light on this question. Our findings thus broaden our understanding of the interaction between categories and gradience in sign language and set the stage for asking the same questions in spoken language.

Open Research Badges

This article has earned Open Data and Open Materials badges. Data and materials are available at https://osf.io/ksd37/?view_only=6d4188601de148e68b0bb3b1858e8985.

Notes

1

This type of iconicity is referred to by a variety of terms, such as “absolute iconicity” (Gasser, Sethuraman, & Hockema, 2010) or “primary” iconicity (Ahlner & Zlatev, 2010). See Hodge and Ferrara (2022) and footnote 4 of Akita (2020) for additional discussion.

2

Note that these categories are not mutually exclusive, as “emphasis involves elaboration of intensity, and depictive elaboration may involve a gradable dimension (e.g., duration, rate)” (Akita, 2020, p. 11).

3

Perniss, Lu, Morgan, and Vigliocco (2018) found that signers enlarge, lengthen, and repeat their signs when imagining communicating with a child. When these modifications are applied to iconic signs (e.g., the sign DRIVE in British Sign Language, which resembles hands holding and turning a steering wheel), they often increase the salience of features in the iconic mapping (for DRIVE, they enlarge the shape and movement of the steering wheel). These modifications are not intended to modify the meaning of DRIVE, but rather to highlight features of the sign to increase their salience (presumably to make them easier for a child to process).

9

In Sandler's (1989) Hand Tier model, fast movement may be the phonetic realization of a manner feature such as [+restrained].

10

On two occasions, scheduling constraints required that two experimental sessions be run simultaneously, so two participants were tested by a hearing experimenter. Experimental sessions were always conducted in ASL.

11

Video examples of the signs that were presented to participants by the experimenter are publicly available on the online repository https://osf.io/ksd37/?view_only=b95ca51fe0f345ec8b4da1debe9c230f

12

We also attempted to collect iconicity ratings from our deaf signing participants. However, because the experimental task was long, some of our signing participants were unable to provide iconicity ratings at the end of the session, or only completed a portion of the survey. We were able to collect iconicity ratings from five of our signing participants for our lexical signs, and from seven participants for our depicting signs.

13

We did the post‐hoc speed check only for the lexical signs since there were only two instances where a depicting sign was not modified for speed.

14

Based on ratings from 25 to 30 deaf signers.

15

Responses that contained both forms were relatively rare in the lexical condition (48 out of 462 trials; ∼10%) and in the depicting condition (67 out of 462 trials; ∼15%). Because such responses do not allow for a direct comparison between the content of utterances containing a modified form and utterances containing an unmodified form, and because responses of this type were infrequent, they are not included in Fig. 13 for clarity. However, these responses are retained in our statistical analyses. Additional visualizations where these data are included can be found in the Supplementary Materials.

16

“Other” as in the other video in the pair.

17

See Dingemanse, Perlman, and Perniss (2020) for a discussion of different construals of iconicity.

References

  1. Ahlner, F. , & Zlatev, J. (2010). Cross‐modal iconicity: A cognitive semiotic approach to sound symbolism. Sign Systems Studies, 38(1/4), 298–348. 10.12697/SSS.2010.38.1-4.11 [DOI] [Google Scholar]
  2. Akita, K. (2009). A grammar of sound‐symbolic words in Japanese: Theoretical approaches to iconic and lexical properties of mimetics. Kobe University. [Google Scholar]
  3. Akita, K. (2017). The typology of manner expressions: A preliminary look. In Ibarretxe‐Antuñano I. (Ed.), Motion and space across languages: Theory and applications (Vol. 59, pp. 39–60). John Benjamins Publishing Company. 10.1075/hcp.59.03aki [DOI] [Google Scholar]
  4. Akita, K. (2020). Modality‐specificity of iconicity: The case of motion ideophones in Japanese. In Perniss P., Fischer O., & Ljungberg C. (Eds.), Operationalizing iconicity (pp. 4–19). John Benjamins Publishing Company. [Google Scholar]
  5. Alpher, B. (2001). Ideophones in interaction with intonation and the expression of new information in some indigenous languages of Australia. In Voeltz E. F. K. & Kilian‐Hatz C. (Eds.), Ideophones (pp. 9–24). 10.1075/tsl.44.03alp [DOI] [Google Scholar]
  6. Battison, R. (1974). Phonological deletion in American Sign Language. Sign Language Studies, 5(1), 1–19. 10.1353/sls.1974.0005 [DOI] [Google Scholar]
  7. Battison, R. (1978). Lexical borrowing in American Sign Language. Linstok Press. [Google Scholar]
  8. Baus, C. , Carreiras, M. , & Emmorey, K. (2013). When does iconicity in sign language matter? Language and Cognitive Processes, 28(3), 261–271. 10.1080/01690965.2011.620374 [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Bolinger, D. L. M. (1986). Pitch. In Intonation and its parts: Melody in spoken English. Stanford University Press. [Google Scholar]
  10. Bosworth, R. G. , & Emmorey, K. (2010). Effects of iconicity and semantic relatedness on lexical access in American Sign Language. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36(6), 1573–1581. 10.1037/a0020934 [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Brentari, D. (1998). A prosodic model of sign language phonology. In Language. 10.1016/j.jsams.2012.08.006 [DOI] [Google Scholar]
12. Caselli, N. K., Sevcikova Sehyr, Z., Cohen-Goldberg, A. M., & Emmorey, K. (2017). ASL-LEX: A lexical database of American Sign Language. Behavior Research Methods, 49(2), 784–801. 10.3758/s13428-016-0742-0
13. Childs, G. T. (1995). African ideophones. In Hinton L., Nichols J., & Ohala J. J. (Eds.), Sound symbolism (1st ed., pp. 178–204). Cambridge University Press. 10.1017/CBO9780511751806.013
14. Clark, H. H., & Gerrig, R. J. (1990). Quotations as demonstrations. Language, 66(4), 764–805.
15. Cormier, K., Smith, S., & Sevcikova Sehyr, Z. (2013). Predicate structures, gesture, and simultaneity in the representation of action in British Sign Language: Evidence from deaf children and adults. Journal of Deaf Studies and Deaf Education, 18(3), 370–390. 10.1093/deafed/ent020
16. de Saussure, F. (1915). Course in general linguistics. McGraw Hill Book Company.
17. Dingemanse, M. (2012). Advances in the cross-linguistic study of ideophones. Language and Linguistics Compass, 6(10), 654–672. 10.1002/lnc3.361
18. Dingemanse, M. (2017). Expressiveness and system integration: On the typology of ideophones, with special reference to Siwu. STUF - Language Typology and Universals, 70(2), 363–385. 10.1515/stuf-2017-0018
19. Dingemanse, M., & Akita, K. (2016). An inverse relation between expressiveness and grammatical integration: On the morphosyntactic typology of ideophones, with special reference to Japanese. Journal of Linguistics, 53(3), 501–532. 10.1017/S002222671600030X
20. Dingemanse, M., Blasi, D. E., Lupyan, G., Christiansen, M. H., & Monaghan, P. (2015). Arbitrariness, iconicity, and systematicity in language. Trends in Cognitive Sciences, 19(10), 603–615. 10.1016/j.tics.2015.07.013
21. Dingemanse, M., Perlman, M., & Perniss, P. (2020). Construals of iconicity: Experimental approaches to form-meaning resemblances in language. Language and Cognition, 12(1), 1–14. 10.1017/langcog.2019.48
22. Dingemanse, M., & Thompson, B. (2020). Playful iconicity: Structural markedness underlies the relation between funniness and iconicity. Language and Cognition, 12(1), 203–224. 10.1017/langcog.2019.49
23. Duncan, S. (2005). Gesture in signing: A case study from Taiwan Sign Language. Language and Linguistics, 6(2), 279–318.
24. Emmorey, K. (1999). Do signers gesture? In Messing L. & Campbell R. (Eds.), Gesture, speech, and sign (pp. 133–159). Oxford University Press.
25. Emmorey, K., & Herzig, M. (2003). Categorical versus gradient properties of classifier constructions in ASL. In Emmorey K. (Ed.), Perspectives on classifier constructions in sign languages (pp. 215–240). Lawrence Erlbaum Associates.
26. Ferrara, C., & Napoli, D. J. (2019). Manual movement in sign languages: One hand versus two in communicating shapes. Cognitive Science, 43(9), 1–36. 10.1111/cogs.12741
27. Frishberg, N. (1975). Arbitrariness and iconicity: Historical change in American Sign Language. Language, 51(3), 696–719.
28. Fuks, O. (2014). Gradient and categorically: Handshape's two semiotic dimensions in Israeli Sign Language discourse. Journal of Pragmatics, 60, 207–225. 10.1016/j.pragma.2013.08.023
29. Fuks, O. (2016). Intensifier actions in Israeli Sign Language (ISL) discourse. Gesture, 15(2), 192–223. 10.1075/gest.15.2.03fuk
30. Gasser, M., Sethuraman, N., & Hockema, S. (2010). Iconicity in expressives: An empirical investigation. In Rice S. & Newman J. (Eds.), Experimental and empirical methods in the study of conceptual structure, discourse, and language (pp. 163–180). CSLI Publications.
31. Hamano, S. (1988). The syntax of mimetic words and iconicity. Journal of the Association of Teachers of Japanese, 22(2), 135–149.
32. Hinton, L., Nichols, J., & Ohala, J. (1994). Introduction: Sound-symbolic processes. In Sound symbolism. Cambridge University Press.
33. Hiraga, M. K. (1994). Diagrams and metaphors: Iconic aspects in language. Journal of Pragmatics, 22(1), 5–21. 10.1016/0378-2166(94)90053-1
34. Hochgesang, J., Crasborn, O. A., & Lillo-Martin, D. (2017). ASL Signbank [Computer software]. Retrieved from https://aslsignbank.haskins.yale.edu/
35. Hockett, C. F. (1960). The origin of speech. Scientific American, 203, 88–111.
36. Hodge, G., & Ferrara, L. (2022). Iconicity as multimodal, polysemiotic, and plurifunctional. Frontiers in Psychology, 13, 808896. 10.3389/fpsyg.2022.808896
37. Johnston, T. A., & Ferrara, L. (2012). Lexicalization in signed languages: When is an idiom not an idiom? In Selected Papers from the Third UK Cognitive Linguistics Conference (pp. 229–248).
38. Johnston, T. A., & Schembri, A. (1999). On defining lexeme in a signed language. Sign Language & Linguistics, 2(2), 115–185. 10.1075/sll.2.2.03joh
39. Kawahara, S., & Braver, A. (2013). The phonetics of multiple vowel lengthening in Japanese. Open Journal of Modern Linguistics, 3(2), 141–148. 10.4236/ojml.2013.32019
40. Kawahara, S., & Braver, A. (2014). Durational properties of emphatically lengthened consonants in Japanese. Journal of the International Phonetic Association, 44(3), 237–260. 10.1017/S0025100314000085
41. Kelso, J. S., Southard, D. L., & Goodman, D. (1979). On the coordination of two-handed movements. Journal of Experimental Psychology: Human Perception and Performance, 5(2), 229–238. 10.1037/0096-1523.5.2.229
42. Kendon, A. (1980). Gesticulation and speech: Two aspects of the process of utterance. In Key M. R. (Ed.), The relationship of verbal and nonverbal communication (pp. 207–228). De Gruyter Mouton. 10.1515/9783110813098.207
43. Klima, E. S., & Bellugi, U. (1979). The signs of language. Harvard University Press.
44. Laing, C. E., Vihman, M., & Keren-Portnoy, T. (2017). How salient are onomatopoeia in the early input? A prosodic analysis of infant-directed speech. Journal of Child Language, 44(5), 1117–1139. 10.1017/S0305000916000428
45. Liddell, S. K. (2003). Grammar, gesture, and meaning in American Sign Language. Cambridge University Press.
  46. Liddell, S. K. , & Johnson, R. E. (1987). The analysis of spatial‐locative predicates in American Sign Language. In Fourth International Symposium on Sign Language Research (pp. 15–19).
47. Lu, J. C., & Goldin-Meadow, S. (2018). Creating images with the stroke of a hand: Depiction of size and shape in sign language. Frontiers in Psychology, 9, 1–15. 10.3389/fpsyg.2018.01276
48. Mandel, M. (1981). Phonotactics and morphology in American Sign Language. Unpublished doctoral dissertation, University of California, Berkeley.
49. McGregor, W. B. (2001). Ideophones as the source of verbs in Northern Australian languages. In Voeltz E. F. K. & Kilian-Hatz C. (Eds.), Ideophones (pp. 205–221). John Benjamins Publishing Company. 10.1075/tsl.44.17mcg
50. McNeill, D. (1992). Hand and mind: What gestures reveal about thought. University of Chicago Press.
51. Msimang, C. T., & Poulos, G. (2001). The ideophone in Zulu: A re-examination of conceptual and descriptive notions. In Voeltz E. F. K. & Kilian-Hatz C. (Eds.), Ideophones (Vol. 44, pp. 235–249). John Benjamins Publishing Company.
52. Napoli, D. J., & Wu, J. (2003). Morpheme structure constraints on two-handed signs in American Sign Language. Sign Language & Linguistics, 6(2), 123–205. 10.1075/sll.6.2.03nap
53. Nasu, A. (2002). Nihongo-onomatope-no gokeisei-to inritu-koozoo [Word formation and prosodic structure of Japanese mimetics]. University of Tsukuba.
54. Newmeyer, F. J. (1992). Iconicity and generative grammar. Language, 68(4), 756–796.
55. Nuckolls, J. B. (2001). Ideophones in Pastaza Quechua. In Voeltz E. F. K. & Kilian-Hatz C. (Eds.), Ideophones (pp. 271–285). John Benjamins Publishing Company.
56. Okrent, A. (2002). A modality-free notion of gesture and how it can help us with the morpheme vs. gesture question in sign language linguistics (or at least give us some criteria to work with). In Meier R. P., Cormier K., & Quinto-Pozos D. (Eds.), Modality and structure in signed and spoken languages (pp. 175–198). Cambridge University Press.
57. Peirce, C. S. (1897). Logic as semiotic: The theory of signs. In Buchler J. (Ed.), Philosophical writings of Peirce (pp. 98–119). Dover Publications. Retrieved from https://commfoundations.info.yorku.ca/files/2013/09/Peirce.pdf
58. Perlman, M. (2010). Talking fast: The use of speech rate as iconic gesture. In Parrill F., Tobin V., & Turner M. (Eds.), Meaning, form, and body. CSLI Publications.
59. Perlman, M., Clark, N., & Johansson Falck, M. (2015). Iconic prosody in story reading. Cognitive Science, 39(6), 1348–1368. 10.1111/cogs.12190
60. Perlman, M., Little, H., Thompson, B., & Thompson, R. L. (2018). Iconicity in signed and spoken vocabulary: A comparison between American Sign Language, British Sign Language, English, and Spanish. Frontiers in Psychology, 9, 1–16. 10.3389/fpsyg.2018.01433
61. Perniss, P., Lu, J. C., Morgan, G., & Vigliocco, G. (2018). Mapping language to the world: The role of iconicity in the sign language input. Developmental Science, 21(2), 1–23. 10.1111/desc.12551
62. Perniss, P., Thompson, R. L., & Vigliocco, G. (2010). Iconicity as a general property of language: Evidence from spoken and signed languages. Frontiers in Psychology, 1, 1–15. 10.3389/fpsyg.2010.00227
63. Perry, L. K., Perlman, M., & Lupyan, G. (2015). Iconicity in English and Spanish and its relation to lexical category and age of acquisition. PLOS ONE, 10(9), e0137147. 10.1371/journal.pone.0137147
64. Rhodes, R. (1995). Aural images. In Hinton L., Nichols J., & Ohala J. J. (Eds.), Sound symbolism (1st ed., pp. 276–292). Cambridge University Press. 10.1017/CBO9780511751806.019
65. Schaefer, R. P. (2001). Ideophonic adverbs and manner gaps in Emai. In Voeltz E. F. K. & Kilian-Hatz C. (Eds.), Ideophones (pp. 339–354). John Benjamins Publishing Company. 10.1075/tsl.44.26sch
66. Schembri, A. (2003). Rethinking “classifiers” in signed languages. In Emmorey K. (Ed.), Perspectives on classifier constructions in sign languages (pp. 3–34). Lawrence Erlbaum Associates.
67. Schembri, A., Jones, C., & Burnham, D. (2005). Comparing action gestures and classifier verbs of motion: Evidence from Australian Sign Language, Taiwan Sign Language, and nonsigners' gestures without speech. Journal of Deaf Studies and Deaf Education, 10(3), 272–290. 10.1093/deafed/eni029
68. Schmidtke, D. S., Conrad, M., & Jacobs, A. M. (2014). Phonological iconicity. Frontiers in Psychology, 5, 1–6. 10.3389/fpsyg.2014.00080
69. Sevcikova Sehyr, Z., & Emmorey, K. (2019). The perceived mapping between form and meaning in American Sign Language depends on linguistic knowledge and task: Evidence from iconicity and transparency judgments. Language and Cognition, 11(2), 208–234. 10.1017/langcog.2019.18
70. Shintel, H., Nusbaum, H. C., & Okrent, A. (2006). Analog acoustic expression in speech communication. Journal of Memory and Language, 55(2), 167–177. 10.1016/j.jml.2006.03.002
71. Singleton, J. L., Morford, J. P., & Goldin-Meadow, S. (1993). Once is not enough: Standards of well-formedness in manual communication created over three different timespans. Language, 69(4), 683–715.
72. Stokoe, W. C. (1980). Sign language structure. Annual Review of Anthropology, 9(1), 365–390. 10.1146/annurev.an.09.100180.002053
73. Sundberg, U., & Klintfors, E. (2009). Acoustic characteristics of onomatopoetic expressions in child-directed speech. In FONETIK 2009, The XXIIth Swedish Phonetic Conference (pp. 40–41).
74. Supalla, T. (1982). Structure and acquisition of verbs of motion and location in American Sign Language. Unpublished doctoral dissertation, University of California, San Diego.
75. Supalla, T. (1986). The classifier system in American Sign Language. In Craig C. (Ed.), Noun classes and categorization: Proceedings of a Symposium on Categorization and Noun Classification (pp. 181–214). John Benjamins Publishing Company.
76. Supalla, T. (2003). Revisiting visual analogy in ASL classifier predicates. In Emmorey K. (Ed.), Perspectives on classifier constructions in sign languages. Lawrence Erlbaum Associates.
77. Supalla, T., Newport, E. L., Singleton, J. L., Supalla, S., Metlay, D., & Coulter, G. (1995). The test battery for American Sign Language morphology and syntax. Unpublished manuscript and videotape materials. University of Rochester.
78. Taub, S. (1997). Language in the body: Iconicity and metaphor in American Sign Language. Unpublished doctoral dissertation, University of California, Berkeley.
79. Thompson, R. L., Vinson, D. P., Woll, B., & Vigliocco, G. (2012). The road to language learning is iconic: Evidence from British Sign Language. Psychological Science, 23(12), 1443–1448. 10.1177/0956797612459763
80. van der Hulst, H. (1993). Units in the analysis of signs. Phonology, 10(2), 209–241.
81. Vinson, D. P., Cormier, K., Denmark, T., Schembri, A., & Vigliocco, G. (2008). The British Sign Language (BSL) norms for age of acquisition, familiarity, and iconicity. Behavior Research Methods, 40(4), 1079–1087. 10.3758/BRM.40.4.1079
82. Vinson, D. P., Thompson, R. L., Skinner, R., & Vigliocco, G. (2015). A faster path between meaning and form? Iconicity facilitates sign recognition and production in British Sign Language. Journal of Memory and Language, 82, 56–85. 10.1016/j.jml.2015.03.002
83. Watson, R. L. (2001). A comparison of some Southeast Asian ideophones with some African ideophones. In Voeltz E. F. K. & Kilian-Hatz C. (Eds.), Ideophones (pp. 385–405). John Benjamins Publishing Company. 10.1075/tsl.44.29wat
84. Whitney, W. D. (1874). Φύσει or θέσει—natural or conventional? Transactions of the American Philological Association, 5, 95–116.
85. Wilcox, S. (2004). Cognitive iconicity: Conceptual spaces, meaning, and gesture in signed languages. Cognitive Linguistics, 15(2), 119–147. 10.1515/cogl.2004.005
86. Winter, B., Perlman, M., Perry, L. K., & Lupyan, G. (2017). Which words are most iconic? Iconicity in English sensory words. Interaction Studies, 18(3), 430–451. 10.1075/is.18.3.07win
87. Wittenburg, P., Brugman, H., Russel, A., Klassmann, A., & Sloetjes, H. (2006). ELAN: A professional framework for multimodality research. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006) (pp. 1556–1559).
88. Zwicky, A. M., & Pullum, G. K. (1987). Plain morphology and expressive morphology. In Proceedings of the 13th Annual Meeting of the Berkeley Linguistics Society (pp. 330–340).
89. Zwitserlood, I. (2008). Morphology below the level of the sign: Frozen forms and classifier predicates. Proceedings of the 8th Conference on Theoretical Issues in Sign Language Research (TISLR), 1, 251–272.
90. Zwitserlood, I. (2012). Classifiers. In Pfau R., Steinbach M., & Woll B. (Eds.), Sign language: An international handbook (pp. 158–186). De Gruyter Mouton.

Articles from Cognitive Science are provided here courtesy of Wiley
